Rusting My (Academic) Code

Written by J. David Smith
Published on 23 February 2017

A couple of weeks ago I mentioned several methods that I'd been trying to use to improve the verifiability of my research. This is the first in an eventual series on the subject. Today's topic? Rust.

One of the reasons it took me so long to write this post is the difficulty inherent in describing why I chose Rust over another, more traditional language like Java or C/C++. Each previous time, I motivated the selection by first covering the problems of those languages. I have since come to realize that this is not a productive approach – each iteration invariably devolved into opining about the pros and cons of various trade-offs made in language design. In this post, I am instead going to describe the factors that led to me choosing Rust, and the practical gains it has given me over the past two projects I've worked on.

In the Beginning...

First, some background on who I am, what I do, and what constraints I have on my work. I am a graduate student at the University of Florida studying Optimization/Security on Online Social Networks under Dr. My T. Thai. Most problems I work on are framed as (stochastic) graph optimization problems, which are almost universally NP-hard. We typically employ approximation algorithms to address this, but even then the resulting algorithm can still be rather slow to experiment with due to a combination of difficult problems, large datasets and the number of repetitions needed to establish actual performance.

This leads to two often-conflicting constraints: the implementations must be performant, to allow us to both meet publication deadlines and compete with previous implementations, and the implementations must also be something we can validate as correct. Well, ideally – often, validation is only done by the authors, and code is not released. To satisfy that first constraint, most code in my field is in either C or C++, with the occasional outlier in Java when performance is less of a concern. However, after repeatedly having to work in others' rushed C/C++ codebases, I got fed up. "Enough of this!" I thought. "There must be something better!"

I set about re-implementing the state-of-the-art approximation algorithm (SSA) for the Influence Maximization problem (Hung Nguyen, Thang Dinh, My T. Thai. “Stop-and-Stare: Optimal Sampling Algorithms for Viral Marketing in Billion-Scale Networks.” In Proceedings of SIGMOD 2016). This method has an absurdly-optimized C++ implementation, using all sorts of tricks to eke out every possible bit of performance. Solving this problem – even approximately – on a network with 40+ million nodes is no small feat, and the implementation shows the effort they put into getting there. This implementation formed my baseline both for performance and correctness: every language I rewrote it in should produce (roughly) identical solutions (the algorithm uses non-deterministic sampling, so there is room for some error after all the big contributors have been found), and every feasible language would have to get in the same ballpark in terms of memory usage and performance.

In my spare time over the space of a couple of months, I implemented SSA in Chez Scheme, Clojure, Haskell, OCaml and, eventually, Rust. My bent towards functional programming shows clearly in this choice of languages. I'm a lisp weenie at heart, but unfortunately neither Clojure nor any Scheme implementation I tried could remotely compete with the C++ implementation. Haskell and OCaml shared the problem that graph processing in purely-functional languages is just an enormous pain in the ass, in addition to not hitting my performance goals – though they got much closer. (Some of this is due to my unfamiliarity with optimizing in these languages; however, I didn't want to get into that particular black magic just to use a language that was already painful for use with graphs.) Then I tried Rust. Holy. Shit.

The difference was immediately apparent. Right off the bat, I got performance in the ballpark of the baseline (once I remembered to compile in release mode): perhaps 50% slower than the C++ implementation. After a few rounds of optimization, I got that down to about 20-25%. (Unfortunately, I no longer have the timing info, and as that isn't the focus of this post, I haven't re-run these experiments.) Had I been willing to break more safety guarantees, I could have applied several of the optimizations from the baseline to make further improvement. Some of this performance is due to the ability to selectively use mutability to speed up critical loops, and some is due to the fact that certain graph operations are simply easier to express in an imperative fashion, leading me to write less wasteful code. What's more, it has a type system similar to those of the ML-family languages I adore – with the same strong guarantees – combined with a modern build/packaging tool and a thriving ecosystem. It seemed that I'd found a winner. Now, for the real test.

The First Paper: Batched Stochastic Optimization

I was responsible for writing the implementation for the next paper I worked on. (The code – and paper – aren't out yet; I'll hear back in early March as to whether it was accepted.) We were extending a previous work (Xiang Li, J. David Smith, Thang Dinh, My T. Thai. “Privacy Issues in Light of Reconnaissance Attacks with Incomplete Information.” In Proceedings of WI 2016) to support batched updates. As the problem was again NP-hard, with dependencies on the ordering of choices made, this was no small task. Our batching method ended up being exponential in complexity, but with a strict upper bound on the size of the search space so that it remained feasible. This was my first testbed for Rust as a language for my work.

Immediately, it paid dividends. I was able to take advantage of a number of wonderful libraries that let me jump straight from starting work to implementing our method. Performance was excellent, parallelism was easy, and I was able to easily log in a goddamn parseable format. It was wonderful. Even the time I spent fighting the borrow checker (a phase I will note was very temporary; I rarely run into it anymore) failed to outweigh the benefits gained by being able to build on others' work. I had a real testing framework, which caught numerous bugs in my batching code. Even parallelism, which generally isn't much of a problem for such small codebases, was no harder than applying OpenMP. Indeed, since I needed read-write locks it was in fact easier, as OpenMP lacks that feature (you have to use pthreads instead). It was glorious, and I loved it.

Still do, to be honest.

The Second Work: Generic Analysis

The next paper I worked on was theory-driven. I showed a new method to estimate a lower bound on approximation quality for a previously unstudied class of problems (not exactly new, but general bounds for this class of problem had not really been considered; as a result, the greedy approximation algorithm hadn't seen much use on it). The general idea is that we're maximizing an objective function f subject to some constraints, where we know that all maximal solutions have the same size. I needed to be able to transparently plug in different values of f, operating on different kinds of data, with different constraints. Further, for efficiency's sake I also needed a way to represent the set of elements that would need updating after each step.

Rust gave me the tools to do this. I defined a trait Objective to abstract over the different possible values of f. Each implementor could handle building its own internal representation, and with associated types I could easily allow each one to operate on its own kind of data.
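To make that concrete, here is roughly the shape such a trait might take – a minimal sketch with method names of my own invention, not the actual code from the paper:

// Sketch only: the names and signatures here are stand-ins.
trait Objective {
    // Each implementor declares the kind of data its elements are.
    type Element;

    // Marginal gain of adding `el` to the current partial solution.
    fn delta(&self, el: &Self::Element) -> f64;

    // Commit `el`, letting the implementor update its internal
    // representation however it likes.
    fn insert(&mut self, el: Self::Element);

    // The elements whose cached gains must be refreshed after a
    // step -- the "needs updating" set mentioned above.
    fn stale(&self) -> Vec<Self::Element>;
}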

It worked. Really well, actually.

I wrote a completely generic greedy algorithm in terms of this trait, along with some pretty fancy analysis. Everything just...worked, and with static dispatch I paid very little at runtime for this level of indirection. At least, as soon as it compiled.
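For the curious, the greedy loop itself can be written purely against that trait. This is a sketch under the same assumptions as above (invented names, a fixed number of steps k), not the paper's code:

// Generic over any Objective; with static dispatch the compiler
// monomorphizes this per implementor, so the indirection is nearly free.
fn greedy<O: Objective>(obj: &mut O, mut candidates: Vec<O::Element>, k: usize)
    -> Vec<O::Element>
where
    O::Element: Clone,
{
    let mut solution = Vec::with_capacity(k);
    for _ in 0..k {
        // Find the candidate with the largest marginal gain.
        // (The unwrap assumes delta never returns NaN.)
        let best = candidates
            .iter()
            .enumerate()
            .max_by(|&(_, a), &(_, b)| {
                obj.delta(a).partial_cmp(&obj.delta(b)).unwrap()
            })
            .map(|(i, _)| i);
        match best {
            Some(i) => {
                let el = candidates.swap_remove(i);
                obj.insert(el.clone());
                solution.push(el);
            }
            None => break,
        }
    }
    solution
}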

Not All Fun & Games

As great as my time with Rust has been, there are still a few flaws I feel compelled to point out.

Borrow Checking is Sometimes Extremely Painful

While in most cases the borrow checker became something I dealt with subconsciously, in a few it was still an extraordinary pain in the ass. The first, and what seems to be the most common, is the case of having callbacks on structs. Some of this comes down to confusion over the correct syntax (e.g. Fn(...) -> X vs. fn(...) -> X), but I still mentally recoil at the thought of trying to make a struct with callbacks on it. It was not a pleasant experience.
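To illustrate the syntax trap (a toy of my own, not code from either paper): fn(...) is a plain function pointer, while Fn(...) is a trait that closures implement, so storing a capturing callback needs indirection:

struct Callbacks {
    // A function pointer: cannot capture environment.
    plain: fn(u32) -> u32,
    // A boxed closure trait object: may capture environment.
    // (In 2017-era Rust this was written Box<Fn(u32) -> u32>.)
    boxed: Box<dyn Fn(u32) -> u32>,
}

fn main() {
    let offset = 10;
    let cbs = Callbacks {
        // Non-capturing closures coerce to function pointers...
        plain: |x| x + 1,
        // ...but capturing ones must live behind a Box (or similar).
        boxed: Box::new(move |x| x + offset),
    };
    println!("{} {}", (cbs.plain)(1), (cbs.boxed)(1));
}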

The second arose in writing the constructor for an InfMax objective using the SSA sampling method. There are multiple different diffusion models for this problem, each represented as a struct implementing Iterator. I wanted my InfMax struct to own the Graph object it operated on, and pass a reference of this to the sampling iterator. The borrow checker refused.

To this day, I still don't know how to get that to work. I ultimately caved and had InfMax store a reference with lifetime 'a, giving the sampler a reference with a lifetime 'b such that 'a: 'b (that is, 'a lives at least as long as 'b). While this workaround took only a moderate amount of time to find, I still lost quite a bit trying to get the original design to work. Giving InfMax a reference never caused any issues, but I still find it annoying.
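The shape of the workaround, with stand-in types (the real Graph and sampler are more involved, of course):

struct Graph;

// InfMax borrows the graph for 'a instead of owning it.
struct InfMax<'a> {
    graph: &'a Graph,
}

struct Sampler<'b> {
    graph: &'b Graph,
}

impl<'a> InfMax<'a> {
    // 'a: 'b -- the graph borrow must outlive the sampler's borrow.
    fn sampler<'b>(&'b self) -> Sampler<'b>
    where
        'a: 'b,
    {
        Sampler { graph: self.graph }
    }
}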

Trait Coherence

In order to use the bit-set library, I wanted to force the Element associated type of each Objective to satisfy Into<usize>. The NodeIndex type from petgraph does not, but it does have a method fn index(self) -> usize. Due to the rules about when you can implement traits (namely, that you can't implement a trait when both the trait and the type are non-local), this was impossible without forking petgraph or getting a PR into its repository.

Forking was my ultimate solution, and hopefully a temporary one. There isn't a clear way to work around this (implementing a helper trait was frustrating because you can't implement both From<T: Into<usize>> for Helper and From<NodeIndex> for Helper, presumably because at some future point NodeIndex could implement From/Into<usize>), and yet there also isn't a clear way to make this work without allowing crates to break each other.
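Boiled down, the conflict looks like this (with a local stand-in for petgraph's NodeIndex, since the whole point is that the real one lives in another crate):

// Stand-in for the foreign type; in reality it lives in petgraph.
mod foreign {
    pub struct NodeIndex(pub usize);
    impl NodeIndex {
        pub fn index(self) -> usize { self.0 }
    }
}
use foreign::NodeIndex;

struct Helper(usize);

// Fine on its own:
impl From<NodeIndex> for Helper {
    fn from(n: NodeIndex) -> Helper { Helper(n.index()) }
}

// But adding this blanket impl is rejected as potentially overlapping,
// since the compiler must assume NodeIndex (or Helper itself) might
// implement Into<usize> some day:
//
// impl<T: Into<usize>> From<T> for Helper {
//     fn from(t: T) -> Helper { Helper(t.into()) }
// }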

Fortran FFI & Absent Documentation

I almost feel bad complaining about this, because it is so, so niche. But I'm going to. I had to interface with a bit (read: nearly 4000 lines) of Fortran code. In theory, since the Fortran ABI is sufficiently similar to C's, I ought to be able to just apply the same techniques as I use for C FFI. As it happens, this is mostly correct. Mostly.

Fortran FFI is poorly documented even for C, so, as one might imagine, instructions for interfacing from Rust were nearly absent. It turns out that, in Fortran, everything is a pointer – even elementary types like int. Of course, I also had the added complexity that matrices are stored in column-major order (as opposed to the row-major order used nearly everywhere else). This led to a lengthy period of confusion as to whether I was simply encoding my matrices wrong, wasn't passing the pointers correctly, or was failing in some other way (like passing f32s instead of f64s).

It turns out that you simply need to flatten all the matrices, pass everything by pointer, and ensure that no call is made to functions with unspecified-dimensional matrices. Calling functions with matrices of the form REAL(N, M) works provided N and M are either variables in scope or parameters of the function.
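In Rust terms, the pattern that eventually worked looks roughly like this (the routine name and shapes are invented, and the trailing underscore is the symbol-mangling convention gfortran uses):

extern "C" {
    // Fortran side: SUBROUTINE SCALE_MAT(A, N, M) with REAL*8 A(N, M).
    // Everything is passed by pointer, even the scalar dimensions.
    fn scale_mat_(a: *mut f64, n: *const i32, m: *const i32);
}

fn scale(a: &mut [f64], n: i32, m: i32) {
    // The matrix is flattened column-major: element (i, j) lives at
    // a[i + j * n], not a[i * m + j].
    assert_eq!(a.len(), (n * m) as usize);
    unsafe {
        scale_mat_(a.as_mut_ptr(), &n, &m);
    }
}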

Numeric-Cast-Hell

The inability to multiply numbers of different types together without as is immensely frustrating. I know why this is – I've been bitten by a / b doing integer division before – but formulae littered with x as f64 / y as f64 are simply difficult to read. Once you've figured out what type everything needs to be, this can be largely remedied by ensuring everything coming into a function has the correct type and pre-casting everything else. That helps dramatically, though in the end I often found myself just making everything f32 to save myself the trouble (as that was what most values already were). The inverted notation for things like pow and log (written x.pow(y) and x.log(y) in Rust) further hinders readability.
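A trivial before-and-after of what I mean (my own toy example):

// Littered with casts: correct, but the formula disappears.
fn mean_degree_noisy(edges: usize, nodes: usize) -> f64 {
    2.0 * edges as f64 / nodes as f64
}

// Convert once at the boundary and the math reads like math again.
fn mean_degree(edges: f64, nodes: f64) -> f64 {
    2.0 * edges / nodes
}

fn bits(n: u64) -> f64 {
    // The inverted notation: log base 2 of n is written n.log(2.0).
    (n as f64).log(2.0)
}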

Verifiability & The Big Wins for Academic Use

I began this post by referencing the verifiability of simulations (and other code), yet through all this I have focused more on the usability wins than the verifiability. Why is that? Fundamentally, it is because code is only as verifiable as it is written to be. The choice of language only impacts this in how well it constrains authors to “clearly correct” territory. In comparison to C/C++, Rust has clear advantages. There is no risk of accidental integer division, very low risk of parallelism bugs thanks to the excellent standard primitives, and nearly-free documentation (some commenting required). Accidentally indexing out of bounds (a problem I've run into before) is immediately detected, and error handling is rather sane.

Further, the structure of Rust encourages modularization and allows formerly-internal modules to be pulled out into their own crates, which can be independently verified for correctness and re-used with ease – even without uploading them to crates.io. A concrete example of this is my Reverse Influence Sampling Iterators, which were originally (and still are, unfortunately) a local module from my most recent paper and have subsequently found use in the (re-)implementation of several recent influence maximization algorithms.

This modularity is, I think, the biggest win. While reviewing and verifying a several-thousand-line codebase associated with a paper is unlikely to ever be practical, a research community building a set of common libraries could not only reduce the need to continually re-invent the wheel, but also limit the scope of any individual implementation to only the novel portion. This may reduce codebase sizes to the point that review could become practical.

Case in Point: the original implementation of the TipTop algorithm was nearly 1500 lines of C++. My more recent Rust implementation is 253. Admittedly, this is without the weighted sampling allowed in the C++ implementation; however, that isn't hard to implement. (Please forgive my lack of comments; that was a tight paper deadline.) This gain is due to the fact that I didn't have to include a graph representation or parsing, sampling, logging (the logs are machine-readable!), or command-line parsing.

In Conclusion...

For a while, I was worried that my exploration of alternate languages would be fruitless, that I'd wasted my time and would be stuck using C++ for the remainder of grad school. Words cannot describe how happy I am that this is not the case. Despite its flaws, using Rust has been an immeasurable improvement over C++ both in terms of productivity and sanity. Further, while verifiability is very much secondary in my choice of language, I believe the tools and safety that Rust provides are clearly advantageous in this area.

Adding New Capabilities to Kiibohd

Written by J. David Smith
Published on 13 January 2016

One of the reasons I bought my ErgoDox was because I'd be able to hack on it. Initially, I stuck to changing the layout to Colemak and adding bindings for media keys. Doing this with the Kiibohd firmware is reasonably straightforward: clone the repository, change or add some .kll files (KLL itself is pretty straightforward, although the remapping rules are at times cumbersome: they follow the semantics of vim's remap rather than the more-sane-for-mapping-the-entire-keyboard noremap), and then recompile and reflash using the provided shell scripts.

This week, I decided to finally add a new capability to my board: LCD status control. One thing that has irked me about the Infinity ErgoDox is that the LCD backlights remain on even when the computer is off. As my computer is in my bedroom, this means that I have two bright nightlights unless I unplug the keyboard before going to bed.

Fortunately, the Kiibohd firmware and KLL language support adding capabilities – C functions conforming to a couple of simple rules – and exposing those capabilities for keybinding. This is how the stock Infinity ErgoDox LCD and LED control is implemented, and how I planned to implement my extension. However, the process is rather poorly documented and presented some unexpected hurdles. Ultimately, I got it working and wanted to document the process for posterity here. Once I get a better understanding of the process, I will contribute this information back to the GitHub wiki. The rest of this post will cover in detail how to add a new capability, LCDStatus(status), that controls the LCD status: LCDStatus(0/1/2) will turn the LCD off/turn it on/toggle it.

Background & Setting Up

Before attempting to add a capability, make sure you can compile the stock firmware and flash it to a keyboard successfully. The instructions on the Kiibohd repository are solid, so I won't reproduce them here.

An important note before beginning is that it is possible to connect to the keyboard via a serial port. On Linux, this is typically /dev/ttyACM0. The command screen /dev/ttyACM0 as root will allow one to connect, issue commands, and view debug messages during development.

This post is specifically concerned with implementing an LCDStatus capability. If you don't have an LCD to control the status of, then parts of this will obviously be nonsense. However, much of the material (e.g. on states and state types) may still be of use.

The Skeleton of a Capability

Capabilities in Kiibohd are simply C functions that conform to an API: void functions with three parameters – state, stateType, and args. At the absolute minimum, a capability will look like this:

void my_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
}

The combinations of state and stateType describe the keyboard state (this information is buried in a comment in Macro/PartialMap/kll.h):

stateType       state        meaning
0x00 (Normal)   0x00         key depressed
0x00 (Normal)   0x01         key pressed
0x00 (Normal)   0x02         key held
0x00 (Normal)   0x03         key released
0x01 (LED)      0x00         off
0x01 (LED)      0x01         on
0x02 (Analog)   0x00         key depressed
0x02 (Analog)   0x01         key released
0x02 (Analog)   0x10 - 0xFF  Light Press - Max Press
0x03 - 0xFE     -            Reserved
0xFF (Debug)    0xFF         Print capability signature

Every capability should implement support for the debug state. Without this, the capability will not show up in the capList debug command.

void my_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
        return;
    }
}

Within this skeleton, you can do whatever you want! The full power of C is at your disposal, commander.

Turning Out the Lights

The LCD backlights have three channels corresponding to the usual red, green, and blue. These are unintuitively named FTM0_C0V, FTM0_C1V, and FTM0_C2V. These names refer to the documentation for the LCD itself, so in that context they make sense. To turn them off, we simply zero them out:

void LCD_status_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
        return;
    }
    FTM0_C0V = 0;
    FTM0_C1V = 0;
    FTM0_C2V = 0;
}

With this addition, we have a capability that adds new functionality! I began by adding this function to Scan/STLcd/lcd_scan.c, because I didn't and still don't want to mess with adding new sources to CMake. Now we can expose this simple capability in KLL:

LCDStatus => LCD_status_capability();

It can be bound to a key just like the built-in capabilities:

U"Delete": LCDStatus();

If you were to compile and flash this firmware, then pressing Delete would now turn off the LCD instead of deleting. On the master half. We will get to communication later.

Adding Some Arguments

The next step in our quest is to add the status argument to the capability. This is pretty straightforward. First, we will update the KLL to reflect the argument we want:

LCDStatus => LCD_status_capability( status : 1 );

The status : 1 in the signature defines the name and size (in bytes) of the argument. The name isn't used for anything, but should be something reasonable for all the usual reasons.

Then, our binding becomes:

U"Delete": LCDStatus( 0 );

Processing the arguments in C is, unfortunately, a bit annoying. The third parameter to our function (*args) is an array of uint8_t. Since we only have one argument, we can just dereference it to get the value. However, there are examples of more complicated arguments in lcd_scan.c illustrating how not-nice it can be.

void LCD_status_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
    }

    uint8_t status = *args;
    if ( status == 0 ) {
        FTM0_C0V = 0;
        FTM0_C1V = 0;
        FTM0_C2V = 0;
    }
}

Figuring out how to restore the LCD to a reasonable state is less straightforward. What I chose to do for my implementation was to grab the last state stored by LCD_layerStackExact_capability and use that capability to restore it. (In practice, it doesn't matter if you even can restore it: any key that changes the color of the backlight also changes its magnitude. The default ErgoDox setup has colors for each partial map, and I'd imagine most people would put a function like this off of the main typing map because of its infrequent utility. As a result, the mere act of pressing the modifier to activate this capability will turn the backlight back on. I implemented it anyway, just in case.) layerStackExact uses two variables to track its state:

uint16_t LCD_layerStackExact[4];
uint8_t LCD_layerStackExact_size = 0;

It also defines a struct which it uses to typecast the *args parameter.

typedef struct LCD_layerStackExact_args {
	uint8_t numArgs;
	uint16_t layers[4];
} LCD_layerStackExact_args;

We can turn the LCD back on by calling the capability with the stored state. Note that I copied the array, just to be safe. I'm not sure if it is necessary but I didn't want to have to try to debug corrupted memory.

void LCD_status_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
    }

    uint8_t status = *args;
    if ( status == 0 ) {
        FTM0_C0V = 0;
        FTM0_C1V = 0;
        FTM0_C2V = 0;
    } else if ( status == 1 ) {
        LCD_layerStackExact_args stack_args;
        stack_args.numArgs = LCD_layerStackExact_size;
        memcpy(stack_args.layers, LCD_layerStackExact, sizeof(LCD_layerStackExact));
        LCD_layerStackExact_capability( state, stateType, (uint8_t*)&stack_args );
    }
}

Now binding a key to LCDStatus(1) would turn on the LCDs.

Creating Some State

Like most mostly-functional programmers, I abhor state. Don't like it. Don't want it. Don't want to deal with it. However, if we want to implement a toggle that's exactly what we'll need. We simply create a global variable (ewww, I know! But we can deal) LCD_status and set it to the appropriate values. Then toggling is as simple as making a recursive call with !LCD_status.

uint8_t LCD_status = 1; // default on
void LCD_status_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
    }

    uint8_t status = *args;
    if ( status == 0 ) {
        FTM0_C0V = 0;
        FTM0_C1V = 0;
        FTM0_C2V = 0;
        LCD_status = 0;
    } else if ( status == 1 ) {
        LCD_layerStackExact_args stack_args;
        stack_args.numArgs = LCD_layerStackExact_size;
        memcpy(stack_args.layers, LCD_layerStackExact, sizeof(LCD_layerStackExact));
        LCD_layerStackExact_capability( state, stateType, (uint8_t*)&stack_args );
        LCD_status = 1;
    } else if ( status == 2 ) {
        status = !LCD_status;
        LCD_status_capability( state, stateType, &status );
    }
}

Binding a key to LCDStatus(2) will now...do nothing (probably). Why? The problem is that the capability fires continuously while the key is held down, and the microcontroller is plenty fast enough to fire arbitrarily many times during even a quick tap. So, we will guard the toggle with an additional condition (the other two options move the keyboard to a fixed state and thus don't need to be protected):

else if ( status == 2 && stateType == 0 && state == 0x03 ) {
    // ...
}

Release (0x03) is the only state that fires only once, so we check for that. Alas, even after fixing this, we still only have one LCD bent to our will! What about the other?

Inter-Keyboard Communication

The two halves of an Infinity ErgoDox are completely independent and may each be used on their own. However, if the two halves are connected, then they can communicate by sending messages back and forth. (If both halves are separately plugged into the computer, then they can't communicate. I haven't delved into the code for this, but I assume it is probably serial, like the debug communication.)

Very Important Note: You must flash both halves of the keyboard to have matching implementations of the capability when using communication.

The code to communicate changes relatively little from case to case, but is rather long to reconstruct by hand. Therefore, I basically just copied it from LCD_layerStackExact_capability, changed the function it referred to, and called it a day. Wonderfully, that worked! Well, sort of. It turns out that failing to guard against recursion caused weird issues where it would work with the right half as master, but not the left. It took a long time to debug because the symptom seemed unrelated to the fix (guarding against the recursive case).

#if defined(ConnectEnabled_define)
  // Only deal with the interconnect if it has been compiled in
  if ( status == 0 || status == 1 ) {
     // skip in the recursive case

    if ( Connect_master )
      {
        // generatedKeymap.h
        extern const Capability CapabilitiesList[];

        // Broadcast LCD_status remote capability (0xFF is the broadcast id)
        Connect_send_RemoteCapability(
            0xFF,
            LCD_status_capability_index,
            state,
            stateType,
            CapabilitiesList[ LCD_status_capability_index ].argCount,
            &status);
      }
  }
#endif

The magic constant LCD_status_capability_index ends up available through some build magic that I haven't delved into yet.

The Final Result

Putting all of that code together, we get:

uint8_t LCD_status = 1; // default on
void LCD_status_capability(uint8_t state, uint8_t stateType, uint8_t *args) {
    if ( state == 0xFF && stateType == 0xFF ) {
        print("my_capability(arg1, arg2)");
    }

    uint8_t status = *args;
    if ( status == 0 ) {
        FTM0_C0V = 0;
        FTM0_C1V = 0;
        FTM0_C2V = 0;
        LCD_status = 0;
    } else if ( status == 1 ) {
        LCD_layerStackExact_args stack_args;
        stack_args.numArgs = LCD_layerStackExact_size;
        memcpy(stack_args.layers, LCD_layerStackExact, sizeof(LCD_layerStackExact));
        LCD_layerStackExact_capability( state, stateType, (uint8_t*)&stack_args );
        LCD_status = 1;
    } else if ( status == 2 && stateType == 0 && state == 0x03 ) {
        status = !LCD_status;
        LCD_status_capability( state, stateType, &status );
    }

#if defined(ConnectEnabled_define)
    // Only deal with the interconnect if it has been compiled in
    if ( status == 0 || status == 1 ) {
       // skip in the recursive case

      if ( Connect_master )
        {
          // generatedKeymap.h
          extern const Capability CapabilitiesList[];

          // Broadcast LCD_status remote capability (0xFF is the broadcast id)
          Connect_send_RemoteCapability(
              0xFF,
              LCD_status_capability_index,
              state,
              stateType,
              CapabilitiesList[ LCD_status_capability_index ].argCount,
              &status);
        }
    }
#endif
}

I have a working implementation of this on my fork of kiibohd. I'm looking forward to adding more capabilities to my keyboard now that I've gotten over the initial learning curve. I've already got a couple in mind. Generic Mod-key lock, anyone?

Thoughts on XCOM: Enemy Within and XCOM: Long War

Written by J. David Smith
Published on 05 January 2016

At this point, I consider XCOM: Enemy Unknown/Within to be one of my favorite games of the past few years, if not an all-time favorite. I'm not going to talk much about why XCOM is a good game; that has been covered more than adequately. I am rather disappointed that it took until 2015 for me to discover it, but I am immensely glad that I did. I have over 100 hours on Steam at this point, far surpassing any other recent single-player game (prior to XCOM, the most recent game I'd put this much time into was Dragon Age: Origins). I beat the game on Classic Ironman recently, and after a few several many attempts at Impossible Ironman, I was in the mood for something new. Still XCOM, mind you, but new. After looking a bit at the Second Wave options (a set of options that tweak the gameplay, such as reducing the accuracy of injured soldiers ("Red Fog") or gradually increasing accuracy as you approach a complete flanking ("Aiming Angles"); I am looking forward to unlocking item-loss on death ("Total Loss") – an omission that surprised me until I learned about Second Wave – when I get around to beating Impossible), I decided to take a look at that mod I kept hearing about: Long War.

Ho. Lee. Shit.

More Like Long Changelog

To say that the changes made by Long War are "many" does a great disservice to the word. The changes in Long War are legion. They are multitudes. This isn't merely a set of tweaks and additions. Rather, it is very nearly a total conversion.

Once I recovered from my shock at the number of changes, my first fleeting thought was one of concern. Was this just a kitchen-sink mod? A realization of some long-time fan's laundry list of changes to make XCOM more like its predecessor? An (in my opinion) misguided attempt to make a game about saving Earth from space aliens more realistic? Further inspection only increased these concerns. Why have Assault Rifles and Battle Rifles and Carbines? What did adding 4 more classes bring to the table? Why require 10 corpses for an autopsy instead of 1? (Not that it matters – I have so fucking many corpses and wrecks that they could ask for 50 and I could still do most of the autopsies.) These thoughts made me hold off on diving into it. I started (and failed) another Impossible Ironman campaign first; then I downloaded and installed Long War.

They strongly recommend that you start back on Normal because, as I mentioned above, the changes are significant. So I did. I learned about the new systems, and steamrolled mission after mission with the new classes. (That isn't to say they're imbalanced, just that if you play enough Impossible then Normal becomes pretty straightforward, even if it is a bit harder than Enemy Within's Normal.) One thing that struck me very early on is the quality of several of the UI/UX changes made by the Long War team. Scanning for contacts now stops right before a mission expires, giving you the chance to wait for a new tech to finish or a soldier to get their ass out of bed. When a unit (alien or human) enters Overwatch, it is now shown next to their healthbar, which means that the player is no longer out of luck if they happen to look away during the alien turn. They also added a "Bronzeman" mode that strikes a good medium between savescumming and Ironman; it behaves very much like the default in Fire Emblem: you are able to restart the mission, but not re-load in-mission saves. I would love to have these changes by themselves in the base game, and I do believe that some (like the Overwatch change) made it into XCOM 2. The changes made to the actual gameplay don't fundamentally alter the way it plays at a tactical level, although they do change the squad compositions that are effective. However, there are long-term ramifications to some of the changes that notably impacted my enjoyment of the game.

To Know Your Face

Far and away the most damaging change to the game is Fatigue. (Not the only bad change, though – I could probably write an entire post on why Steady Weapon is awful design.) In XCOM, when a soldier is injured on a mission, they must take some time off afterward to heal up. As a result, it is prudent to have at least a B-list of soldiers that you can sub in for that Major Sniper you barely saved from bleeding out. In Long War, even when a soldier is not injured, they must still take 3-5 days off (more for psionic soldiers actually wanting to use their psychic powers). This means that not only do you need a well-prepared B-list, but also a C-list. On the surface, this seems kind of cool. (I would phrase that more subtly, but I already gave my thesis away.) It means that you have to try more strategies, with more combinations of units, as they rotate through various states of wounded, gravely wounded, and fatigued. However, it also means that you need a lot more soldiers. I never had more than 20 at a time in Enemy Unknown or Enemy Within. In Long War, you start with something like 40.

This unintentionally changes something that I very much liked about XCOM (other changes, like increased squad size and muddy class identities, also play into this, but fatigue is far and away the biggest cause, so I'm focusing on that): the impractical, unsustainable, and outright damaging attachment I had to my soldiers. I don't always know their names (I'm really very bad with names), but I know their faces. I remember The Volunteer from my first successful (Ironman) campaign. I remember the struggles, the near misses. She was the sole survivor of the tutorial mission (which I'd forgotten to disable) and one of the only women I recruited in the entire campaign. Yet she turned out to be psychic, and despite numerous barely-stopped bleed-out timers (when an XCOM soldier is reduced to 0 HP, they have a chance to instead bleed out; they become effectively dead as far as your tactics are concerned, but unless you get to them with a Medkit and stabilize them within 3 turns, they become actually dead) she managed to survive the entire campaign.

After ten hours with Long War, I didn't know any of my soldiers. I lost a Corporal (rank 3 in LW) and two Lance Corporals (rank 2) in a single mission – two in a single turn – to lucky shots from Thin Men, and didn't feel the urge to restart it. It wasn't even that I had replacements for them all: I didn't have any medics at all once my Medic Corporal died. I was utterly detached from them. I didn't know any of them, not like before. This ultimately seemed to neuter the tension of each and every mission, turning a game whose bog-standard abduction missions I could play for hours into one where ten hours over two sessions felt like a slog. The moment-to-moment tension of one missed shot dooming a soldier is lost when you no longer care particularly about any of your soldiers. I uninstalled the mod after the second session – and then immediately spent several hours longer than I'd intended playing a new Classic Second Wave Ironman campaign.

This, of course, does not make Long War bad. As I mentioned above, it nears the level of being a total conversion. In fact, I think it is most aptly called exactly that. The Polygon quote on the project page is quite telling:

"Turns XCOM: Enemy Within into nothing short of a serviceable turn-based military alien invasion strategy wargaming simulator." - Polygon

I did not enjoy Long War because I was looking for more of what I liked about XCOM: Enemy Within. I wanted more of the XCOM that was almost Fire Emblem with guns and aliens, not a "military alien invasion strategy wargaming simulator". All told, I think Long War is one of the best mods I've ever seen. However, this is not the mod I was looking for.

The Actual Point

But that's not what I wanted to write about. All of this is just the backdrop. You see, XCOM 2 is coming out soon, and internet comment sections – being the cesspits that they are – are full of a specific breed of comment that gets under my skin. Not the comments recommending that fans try Long War. No, those are fine, at least in principle if not in practice. Good, even, as the mod they are pushing is in fact rather good.

My problem is the legion of comments that all run along the lines of "Just install Long War; you'll thank me later." (Paraphrased, because this scrublord didn't bother screenshotting or bookmarking the comments when he first saw them, and digging through internet comments for another couple of hours doesn't seem particularly appetizing. Nobody needs screenshots of commenters being shitlords anyway.)

All have the same underlying assumption: that people have the same tastes as the commenter. This self-projection is unfortunately endemic online, especially within the gaming community (where I've seen more "you're wrong because you like a thing that I don't" fights than anywhere else by a large margin).

Empathize, for a moment, with a mythical person who is considering picking up XCOM. Saving the Earth from aliens sounds cool, and they like strategy and tactical games, so it seems like a natural fit. Maybe this is a person who would agree with those comments; they would find Long War generally superior to the base game and might skip the sequel in favor of the Long War team's game. (From the sounds of it, the new kid on the block is just getting started. I would honestly be surprised if most Long War fans didn't pick up both XCOM 2 and the Long War team's product.) But maybe – maybe – they wouldn't. This isn't merely hypothetical: if I had installed Long War immediately, I never would have gotten the hundred-plus hours of gameplay out of XCOM that I did. I barely lasted ten hours in Long War. The moment I knew it was over for me was when I began spending more time wondering when LW was going to get interesting than thinking about optimal strategies for filling aliens with holes. Again, I'm not saying that Long War is bad, merely that it isn't what I was looking for.

The moral here is that internet commenters need to stop being shitlords and consider the fact that not all players – even only considering those that like a particular game – like games for the same reasons. So don't say "Just install Long War. You'll thank me later." Instead, consider "If you like XCOM, try the Long War mod. It's bloody fantastic." Don't imply that the intersection of people who like XCOM and people who like Long War is total. It isn't – I am proof of that – and it could drive people away from experiencing a pretty fantastic game.

Evaluating JavaScript in a Node.js REPL from an Emacs Buffer

Written by J. David Smith
Published on 01 June 2014

For my internship at IBM, we're going to be doing a lot of work on Node.js. This is awesome: Node is a great platform. However, I very quickly discovered that the state of Emacs ↔ Node.js integration is dilapidated at best (as far as I can tell, at least).

A Survey of Existing Tools

One of the first tools I came across was the swank-js / slime-js combination. However, when I (after a bit of pain) got both set up, slime promptly died when I tried to evaluate the no-op function: (function() {})().

Many pages describing how to work with Node in Emacs seem woefully out of date. I did eventually find nodejs-repl via package.el, and it worked great right out of the box. However, it was missing what I consider a killer feature: evaluating code straight from the buffer.

Buffer Evaluation: Harder than it Sounds

Most of the languages I use that have a REPL are Lisps, which makes choosing what code to run in the REPL when I mash C-x C-e pretty straightforward. The only notable exceptions are Python (which I haven't used much outside of Django since I started using Emacs) and JavaScript (which I haven't used an Emacs REPL for before). Thankfully, while the problem is actually quite difficult, a collection of functions from js2-mode, which I use for development, made it much easier.

The first thing I did was try to figure out how to evaluate things via Emacs Lisp. Thus, I began with this simple function:

(defun nodejs-repl-eval-region (start end)
  "Evaluate the region specified by `START' and `END'."
  (let ((proc (get-process nodejs-repl-process-name)))
    (comint-simple-send proc (buffer-substring-no-properties start end))))

It worked! Even better, it put the contents of the region in the REPL so that it was clear exactly what had been evaluated! Whole-buffer evaluation was similarly trivial:

(defun nodejs-repl-eval-buffer (&optional buffer)
  "Evaluate the current buffer or the one given as `BUFFER'.

`BUFFER' should be a string or buffer."
  (interactive)
  (let ((buffer (or buffer (current-buffer))))
    (with-current-buffer buffer
      (nodejs-repl-eval-region (point-min) (point-max)))))

I knew I wasn't going to be happy with just region evaluation, though, so I began hunting for a straightforward way to extract meaning from a js2-mode buffer.

js2-mode: Mooz is my Savior

Mooz has implemented JavaScript parsing in Emacs Lisp for his extension js2-mode. What this means is that I can use his tools to extract meaningful and complete segments of code from a JS document intelligently. I experimented for a while in an Emacs Lisp buffer. In short order, it became clear that the fundamental unit I'd be working with was a node. Each node is a segment of code not unlike symbols in a BNF. He's implemented many different kinds of nodes, but the ones I'm mostly interested in are statement and function nodes. My first stab at function evaluation looked like this:

(defun nodejs-repl-eval-function ()
  (interactive)
  (let ((fn (js2-mode-function-at-point (point))))
    (when fn
      (let ((beg (js2-node-abs-pos fn))
            (end (js2-node-abs-end fn)))
        (nodejs-repl-eval-region beg end)))))

This worked surprisingly well! However, it only let me evaluate functions that the point currently resided in. For that reason, I implemented a simple reverse-searching function:

(defun nodejs-repl--find-current-or-prev-node (pos &optional include-comments)
  "Locate the first node before `POS'.  Return a node or nil.

If `INCLUDE-COMMENTS' is set to t, then comments are considered
valid nodes.  This is stupid, don't do it."
  (let ((node (js2-node-at-point pos (not include-comments))))
    (if (or (null node)
            (js2-ast-root-p node))
        (unless (= 0 pos)
          (nodejs-repl--find-current-or-prev-node (1- pos) include-comments))
      node)))

This searches backwards one character at a time to find the closest node. Note that it does not find the closest function node, only the closest node. It'd be pretty straightforward to incorporate a predicate function to make it match only functions or statements or what-have-you, but I haven't felt the need for that yet.

My current implementation of function evaluation looks like this:

(defun nodejs-repl-eval-function ()
  "Evaluate the current or previous function."
  (interactive)
  (let* ((fn-above-node (lambda (node)
                         (js2-mode-function-at-point (js2-node-abs-pos node))))
        (fn (funcall fn-above-node
             (nodejs-repl--find-current-or-prev-node
              (point) (lambda (node)
                        (not (null (funcall fn-above-node node))))))))
    (unless (null fn)
      (nodejs-repl-eval-node fn))))

You Know What I Meant!

My next step was to implement statement evaluation, but I'll leave that off of here for now. If you're really interested, you can check out the full source.

The final step in my rather short adventure through buffer-evaluation-land was a *-dwim function. DWIM is Emacs shorthand for Do What I Mean; it's seen throughout the environment in function names such as comment-dwim. Of course, figuring out what the user means is not feasible – so we guess. The heuristic I used for my function was pretty simple:

  1. If the region is active, evaluate it.
  2. If the point is at the end of a line, evaluate the first statement found at or before the previous character (this usually steps behind a trailing semicolon, selecting the statement on the current line).
  3. Otherwise, evaluate the first statement found at or before the point.

This is succinctly representable using cond:

(defun nodejs-repl-eval-dwim ()
  "Heuristic evaluation of JS code in a NodeJS repl.

Evaluates the region, if active, or the first statement found at
or prior to the point.

If the point is at the end of a line, evaluation is done from one
character prior.  In many cases, this will be a semicolon and will
change what is evaluated to the statement on the current line."
  (interactive)
  (cond
   ((use-region-p) (nodejs-repl-eval-region (region-beginning) (region-end)))
   ((= (line-end-position) (point)) (nodejs-repl-eval-first-stmt (1- (point))))
   (t (nodejs-repl-eval-first-stmt (point)))))

The Beauty of the Emacs Development Process

This whole adventure took a bit less than 2 hours, all told. Keep in mind that, while I consider myself a decent Emacs user, I am by no means an ELisp hacker. Previously, the extent of my ELisp was one-off advice functions for my .emacs.d. As a competent Lisper, I've always found ELisp pretty straightforward to use, but I did not imagine that this project would end up being so simple.

The whole reason it ended up being easy is because the structure of Emacs makes it very easy to experiment with new functionality. The built-in Emacs Lisp REPL had me speeding through iterations of my evaluation functions, and the ability to jump to functions by name with a single key-chord was invaluable. This would not have been possible if I had been unable to read the context from the sources of comint-mode, nodejs-repl and js2-mode. Even if I had just been forced to grep through the codebases instead of being able to jump straight to functions, it would have taken longer and been much less enjoyable.

The beautiful part of this process is really how it enables one to stand on the shoulders of those who came before. I accomplished more than I had expected in far, far less time than I had anticipated because I was able to read and re-use the code written by my fellows and precursors. I am thoroughly happy with my results and have been using this code to speed up prototyping of Node.js code. The entire source code can be found here.

World of Warcraft's Recruit-a-Friend Reward Structure is Flawed

Written by J. David Smith
Published on 05 April 2014

What instigated this post?

Last night, an unnamed redditor asked the WoW sub-reddit what the fastest way to level is these days. Why? Because their girlfriend "has been wanting to start playing wow with me". Seems reasonable, right? S/he goes on to ask about RaF.

I immediately jumped in and tried to head off a disaster in the making. "What disaster?" one may ask. Simple: RaF dungeon spamming isn't fun. In fact, I wrote that "Personally, I wouldn't even use RaF because of how it completely screws up the structure of the early game." This set the gears in my head whizzing frantically. What changed that made a really cool system actively harm the game? And – more importantly – how can it be fixed?

What is Recruit-a-Friend?

In order to answer those questions, it is important to understand what the RaF system actually does. Blizzard's FAQ does a good job of describing the system. There are actually a lot of perks to using RaF, but there is one in particular that really hurts the game: triple XP.

For levels 1-85, while in a group and similarly leveled, the recruiter and recruit gain 3 times the normal amount of experience. This isn't simply mob-kill experience, either: quest experience is also affected. The result these days is that – if you aren't spamming dungeons to power-level – you out-level zones just as you're starting to get into their stories. To understand the impact of this effect, we need to first dig deeper into what the reward structure of WoW is.

I Saved Westfall and all I got was this stupid T-Shirt!

World of Warcraft is not unique in its structure. You help people, kill monsters and collect rewards. There are two general classes of rewards in WoW:

  1. Power-increasing rewards

    These rewards increase the player's overall power level (although perhaps not immediately). Examples of this are loot (literal character power), gold (economic power) and experience (character power – albeit slightly delayed).
  2. Emotional rewards

    These rewards tug on the player's heart-strings. Whether it's saving an adorable little orphan boy or laughing maniacally as you help Theldurin punch Deathwing in the face, these ones make you feel good (or bad) for having done whatever it was you did. Type 1 rewards are a subset of this reward class.

In my experience, the latter are much more important than the former. This is upheld by observations of the reaction to the Madness of Deathwing fight and Deathwing in general. While players got more powerful than ever before, there was something missing. Emotional reward was lacking, and it showed.

How does this relate to Recruit-a-Friend?

The RaF system increases the gain rate of a particular Type 1 reward: experience. However, it not only causes problems with the rate of gain of other Type 1 rewards, but often outright prevents the gain of Type 2 rewards!

Recently, I leveled through Darkshore. Starting at level 11, I finished the quest achievement at level 24. Had I been using RaF, I'd have only made it through the first third of the quests in that time. This would have left the story hanging and broken the illusion of world-changing impact that Blizzard has worked so hard to create.

As a result, emotional investment can become a liability preventing enjoyment rather than a boon aiding it. It's like reading the first third of every Spider-Man comic in order to 'catch up' to the current issue. Sure, you would reach your goal faster, but at the cost of enjoying the process of reading comic books. Even once you were caught up, you wouldn't understand all of the stuff going on in the current issue.

I've seen situations where one player wants to get their significant other into the game using RaF. In every case I've seen where the core benefit of RaF is used to its fullest (i.e. by dungeon spamming), the SO quits playing. Therefore, I believe that the overall benefit of RaF for the new player is non-existent, and in many cases it even damages their perception and enjoyment of the game.

Two Birds, One Stone

The solution to this problem is relatively simple. While simply removing the XP bonus would go a long way towards preventing the damage currently being done by RaF, why stop at simple prevention when it can be used to make the game genuinely more enjoyable?

Think back, ye die-hard WoW fans: what problem always crops up when questing as a group? Yes, that one. You know it well. Someone plays while the others are away, gets ahead in both experience and quests, and is then forced to either wait for the group to catch up, retread the content they just did, or leave the group behind.

With long-time players, this isn't much of a problem. We have alts, we have mains, and we can always do something else while the group is offline. For a new player, however, such options are severely lacking. PvP grants experience, dungeons grant experience – even gathering mats to level crafting grants experience these days! Imagine if the Priest class is the only one that really clicks with your friend. Are you going to ask them not to play when you aren't online? To roll an alt? A second priest?

This problem is solved relatively well by the combination of massively boosted XP and level granting: the increased XP rate encourages moving on to other quest chains with relative frequency and level granting ensures that the older player can keep up (most of the time). However, if triple XP is removed from the system, then the problem again rears its ugly head because the player no longer has such an incentive to move on in the middle of a quest chain.

Sure, the two players can remain evenly leveled, but what about quest progress? Forcing the new player to retread content is not exactly ideal, so why not allow the new player to catch the older one up not only in levels but also in quests?

What I am proposing is this:

  1. Remove the triple XP bonus entirely.
  2. Keep level granting, so that a player who falls behind can still catch up in levels.
  3. Extend granting to quests: let the new player catch their partner up on quest progress, just as level granting catches them up on levels.

This would prevent XP gain from completely overriding any other sort of reward in the game and would allow new players to continue questing with their friends without worrying about quest dependencies and level discrepancies. To my view, this would be superior to the current system – especially since the store is now the go-to way to pay for a fast 90. However, one question remains to be answered.

Why was it designed this way in the first place?

World of Warcraft is not the game that it once was. In ye olden days, when Azeroth was yet young and paladins still only had 2 buttons for the first 40 levels, there were fewer quest chains and it was common – up til Outland, at least – to complete a zone without having out-leveled it. In that era, there were far fewer tales of merit told in the quests.

Way back then – nearly a full 6 years ago – tripling the experience rate made sense. It meant that you'd have to do one zone to get through a level range instead of 2.5-3. Still, those days are gone, and now, with the world designed to take one player through a level range in one zone, it no longer makes sense.

Here's hoping that Blizzard fixes this system soon. It bothers me to think of the people potentially missing a great experience because something that should be rewarding can easily become the opposite. With all of the dramatic WoD changes incoming, this could be the perfect time to do it!