How To Set Up an Encrypted, Compressed Filesystem in Arch Linux

Written by J David Smith
Published on 15 August 2015

The best example I have of this is a large dataset I'm downloading from a REST API as we speak. The current uncompressed size is 25G, yet the amount of space used on this partition has only increased by about 5G so far. (The size is reported by du -hs, which does not report compressed size on a btrfs-compressed partition.)

This document is intended to be a guide on how to set up a disk (especially a SSD, which will best take advantage of the features) to use both encryption and compression. Please read the entire thing once at least before attempting installation. In particular, in step 4+ there are gaps in the process where 'normal' installation continues (and for which I have not duplicated the normal instructions). While none of them are irreversible, it will be easier if you understand everything before diving in.

The Goal

One frustration I've always had with FS setup guides is that they often don't start with what they intend to give you. I will not make that mistake. The ultimate result of this guide should be a fresh Arch Linux installation with:

  1. Full-disk encryption via LUKS (everything except /boot)
  2. Transparently compressed btrfs filesystems for / and /home
  3. An encrypted swap partition, with optional support for hibernation

WARNING: It is very important that you do not use a swapfile on btrfs! It will not work! You have been warned!

Note: Much of the LVM-on-LUKS material is now covered on the Arch Wiki, which I did not realize when beginning to write. The material used to be much more scattered. I pieced together much of the contents of this post from reading various blogs and the dm-crypt wiki page.

Step 0: Pre-Setup

BACK UP YOUR DATA!

Unless you are working with a brand-new drive, double-check to confirm that you have all the data you need. Unlike normal formatting, where blocks are typically touched in an ordered fashion, encrypted data will be spread across the drive. Thus, the chance to retrieve data will very quickly vanish!

With that said, grab the latest Arch CD and burn it to a disc. Boot from it. (Remember to pull up this document on a phone or other computer, or to print it off!)

Step 1: Initial Partitioning

Using your favorite partition editor (I personally am a fan of parted), create 2 partitions:

  1. /boot (see the Arch Wiki for UEFI systems)
  2. A blank partition consuming the rest of the drive (or some portion of it; your choice)

For simplicity, I will use sda1/2 to refer to these partitions. In the real world, it is best to use their UUIDs to reference them.

Step 2: Encryption

Setting up disk encryption is surprisingly easy with cryptsetup. (Again, I make no promises about the security of your data! The default cryptsetup settings are pretty solid, but not necessarily optimal!)

  1. # cryptsetup luksFormat /dev/sda2
    This command sets up encryption on /dev/sda2. It should prompt you for a passphrase. Please remember it! (You can replace the passphrase with a key on a flash drive or some other setup later; setting LUKS up to use anything other than the default passphrase setup is outside the scope of this guide.)
  2. # cryptsetup open --type luks /dev/sda2 vg
    This command sets up a mapping from /dev/mapper/vg to the (decrypted) contents of the drive.

Step 3: LVM

To create a set of LVM volumes (I use LVM here because – last I knew – swap partitions can't be on btrfs sub-volumes; since LVM is already needed, there isn't much point in adding yet another layer of indirection with btrfs sub-volumes on top of LVM volumes):

  1. # pvcreate /dev/mapper/vg
    This command creates an LVM physical volume. See the man page for more details on what that actually means.
  2. # vgcreate vg /dev/mapper/vg
    This command creates a volume group on the physical volume at /dev/mapper/vg.
  3. # lvcreate -L <N>G vg -n swap
    lvcreate creates a logical volume in a volume group. Again, see the man page for more details on the actual meaning of the terminology.
    Replace <N> by the amount of RAM you have. So if you had 4GB, it'd be -L 4G.
  4. # lvcreate -L 30G vg -n root
    This partition will be used for /. I like having a fairly large amount of space, especially as some dev kits (looking at you, Android) clock in at rather heinous sizes.
  5. # lvcreate -l +100%FREE vg -n home
    Finally, use the rest of the space for home.
  6. # mkfs.btrfs /dev/vg/root; mkfs.btrfs /dev/vg/home; mkswap /dev/vg/swap
    Create the filesystems on each of the partitions. Compression is set after creation.

Step 4: Compression

Continue with the normal installation with two exceptions:

When mounting either btrfs volume, use the -o compress=lzo option to mount. This will enable compression of newly-written data. (In fact, existing btrfs partitions can be compressed on the fly simply by setting compress=lzo or compress=zlib in /etc/fstab.)

When generating the /etc/fstab file, add the compress=lzo option to the 4th column. If you are using an SSD, adding noatime,discard,ssd is also recommended. (Note that enabling discard has security ramifications! Discard will remove any chance of claiming plausible deniability and will reveal some of the usage patterns of the disk, though it will not reveal any data. In my case, I find that tradeoff worthwhile in order to extend the life of the drive.) When labeling the drives in /etc/fstab, the command lsblk -o NAME,LABEL,UUID can be used to locate the LABELs or UUIDs of your volumes. It is strongly recommended that you use those instead of the dev-path format!
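Put together, the resulting /etc/fstab entries might look something like this (a hypothetical sketch; the UUID placeholders stand in for your own values as reported by lsblk):

```
# /etc/fstab : UUIDs below are placeholders
UUID=<root-uuid>  /      btrfs  rw,noatime,discard,ssd,compress=lzo  0 0
UUID=<home-uuid>  /home  btrfs  rw,noatime,discard,ssd,compress=lzo  0 0
UUID=<swap-uuid>  none   swap   defaults                             0 0
```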

Step 5: Bootloader

Continue with normal installation until you are setting up the boot loader. (If this is your first time setting up a boot loader on UEFI, it may seem as if the world has suddenly become a confusing and dangerous place.) I recommend using systemd-boot (formerly known as gummiboot). Any feelings about systemd aside, it is really simple and easy to use. See the Arch Wiki for more info.

Step 5.1: Configure mkinitcpio

Two hooks need to be added to mkinitcpio: encrypt and lvm2. Add them – in that order – to the HOOKS line of /etc/mkinitcpio.conf after the keyboard hook and before the filesystems hook. If you also want to set up hibernation, add the resume hook just before the filesystems hook. If you are using an alternate keymap (like colemak or dvorak), add the keymap hook immediately before the keyboard hook.
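Put together, the HOOKS line might look like this (a sketch based on the stock Arch hook list of the time; your exact line may differ):

```
# /etc/mkinitcpio.conf
HOOKS="base udev autodetect modconf block keyboard encrypt lvm2 resume filesystems fsck"
```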

The placement of the hooks is important! They are run in the order they are listed. This ordering makes sure that the keyboard is enabled before decryption is attempted – otherwise no passphrase could be entered – and that decryption occurs before filesystems are mounted.

Run mkinitcpio -p linux to rebuild the initramfs.

Step 5.2: Configure the Kernel Parameters

Any bootloader you use should provide a way to configure kernel parameters; see the relevant wiki page for details on how to do it for your specific bootloader. There are three parameters that are important:

  1. cryptdevice=/dev/sda2:vg tells the encrypt hook which device to decrypt and what to name the mapping. Append :allow-discards if you want TRIM support on an SSD.
  2. root=/dev/vg/root tells the kernel where the root filesystem lives.
  3. resume=/dev/vg/swap tells the resume hook which device holds the hibernation image (only needed if you set up hibernation).

My entire (working!) kernel parameter line is:

cryptdevice=/dev/sda2:vg:allow-discards root=/dev/vg/root quiet rw resume=/dev/vg/swap
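With systemd-boot, that parameter line goes on the options line of a loader entry. A hypothetical /boot/loader/entries/arch.conf (the filename and kernel image paths are assumptions; adjust them to your setup):

```
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options cryptdevice=/dev/sda2:vg:allow-discards root=/dev/vg/root quiet rw resume=/dev/vg/swap
```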

Step 6: Finish & Enjoy!

Everything should be in order, so finish the installation process and reboot. If you have set things up correctly, then after booting you should be greeted with a prompt for your passphrase.

That's it! Your / and /home partitions are both transparently compressed and encrypted (in that order), and your swap partition is encrypted! (Additionally: if you followed the instructions to enable hibernation, then `systemctl hibernate` should work and rebooting should prompt for your passphrase before resuming.)

On my laptop, compressing /home has gotten me 15-30% more storage (depending on what I have on home at any given time – large text files like JSON data compress better than small text files or binary data like videos). If I were using zlib instead of lzo or used the compress-force mount option, it'd be even more. A 15% storage gain may not seem like much, but that's an extra 30GB of space on my 200GB /home partition. Given that SSDs are typically smaller than their magnetic-platter siblings, every additional byte helps.

Why I Stopped Using ES6

Written by J David Smith
Published on 18 July 2015

Pushing ClojureScript or Elm didn't seem like a great way to spend my time, so I instead chose to toy with another relatively new bit of technology: EcmaScript 6. This page has a great overview of the new features coming to JavaScript with ES6, but most of them haven't actually made it in yet. I used the Babel transpiler to compile the code down from ES6 to ES5.

I was initially going to title this post "Why I Stopped Using Babel", but that would make it sound like there was some problem with Babel. I have had no issues whatsoever with Babel. The transpilation time was almost negligible (~1s for my code, combined with ~4s of browserify run time), it didn't perceivably impact performance (even when I was profiling inner loops written in ES6), and it never caused any bugs. On the whole, Babel is excellent software and if you want to use ES6 now, I highly recommend it. But there's the catch: you have to want to use ES6 now. And slowly, over the course of a couple of months, my desire to do so was sapped away (through no fault of Babel, and almost no fault of ES6).

The problems I had were mostly with integration. Two very important pieces of my workflow are Tern and Istanbul. Tern provides auto-completion and type-guessing that is integrated into Emacs. Istanbul provides code coverage reports. Neither of them supports ES6. With Istanbul, it was possible to work around this by running babel on my code and then covering the ES5 output. However, the coverage reports were then off because of the extra code that babel has to insert in order to properly simulate ES6 in ES5. Tern, on the other hand, had no such workaround. If I had chosen to use only fat arrows, it would have been workable: I discovered I could copy and paste the code for normal functions to those and it worked more or less as expected. However, everything else was a wash.

So why not ditch Tern and put up with the Istanbul workaround until it gets ES6 support? As I used ES6 over the summer, I came to realize that in 99% of my usage, it wasn't much of an improvement. let is certainly useful (and the way it always should have been), arrow functions are awesome, and for(a of as) finally gives a sane looping construct in the language. Other than that, the only feature that's really exciting is destructuring, and while it is a bit of a pain to destructure complex data by hand, it isn't something that I have to do often. Classes were not of any use to me for this project either. None of my data made sense to represent as a class. Although in theory my React components would make sense as classes, I'd rather use the old, well-documented, clear method that has support for mixins (which would have to be implemented through inheritance were I to use ES6 classes).
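The handful of features that did pull their weight fit in a few lines. A minimal sketch (illustrative code, not from my project) of let, arrow functions, for...of, and destructuring:

```javascript
// The ES6 features I actually got value from, in one place.
let xs = [1, 2, 3];

// Arrow function: concise, lexically-bound `this`.
let squares = xs.map(x => x * x);   // [1, 4, 9]

// Destructuring with a rest pattern.
let [first, ...rest] = squares;     // first = 1, rest = [4, 9]

// for...of: a sane looping construct over values, not keys.
let sum = 0;
for (const n of squares) {
  sum += n;                         // 1 + 4 + 9 = 14
}
```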

The decision ultimately came down to three things:

  1. I wasn't getting much (just let, for of, fat arrows, and destructuring) from ES6
  2. ES6 vs ES5 is just one more thing my team would have to pick up after I'm gone.
  3. ES6→ES5 transpilation is a thing that somebody would have to support after I'm gone, and there is no telling how long it will be before it is no longer needed.

In the interest of making the life of my successor a tiny bit easier to manage, I ultimately chose to ditch ES6 for ye olde ES5. I had to throw out a bunch of the ES6 prototype code anyway, so there was very little additional cost in stripping it out of the code. In the end, I believe that this was the right decision for this project. Although losing those few additional features I was using was a bit painful, gaining the proper support of my tools and losing the incidental complexity of transpilation was, I think, worth it.

I'll probably still use ES6 with Babel for small side projects. (Anything large won't be in JS, even if it compiles to it!) If you want to try out ES6, Babel is a very safe and easy way to do it. I look forward to the day that ES6 has widespread support and Babel is…well, still needed for ES7 transpilation, but that's for another day.

Postscript

I don't like the import syntax and don't even get me started on classes and inheritance in ES6.

Change Can Happen

Written by J David Smith
Published on 27 June 2015

The past two weeks have been big ones for the United States and the world at large. So much has happened. Some of it was good, some was bad, but all was important.

First, on June 17th, Dylann Roof killed 9 people in a racially motivated attack on a black church in Charleston, South Carolina. He apparently intended to start a race war, but that didn't happen. Instead, we saw people unite against a symbol of racial hatred: the Confederate flag. Today – ten days later – Brittany "Bree" Newsome climbed the flag pole at the South Carolina capitol and did what the South Carolinian politicians would not: took down the flag. It went back up soon after, but the internet exploded in support of her action. And one must not forget President Obama's incredible eulogy for those lost in the massacre.

That was not the only significant event in recent days, though. Yesterday, on the 26th of June, 2015, SCOTUS ruled that bans on gay marriage were unconstitutional. This was followed by much hate from the Republican end of the US political spectrum. Some states are even considering not issuing any marriage licenses. Justice Scalia issued an opinion on the decision (pdf link) that alternated between being entertaining, baffling, and scary. However, the general public seemed to react very positively to the news.

Yet another major event is the Supreme Court's 6-3 decision in favor of the Affordable Care Act. I heard much less about this one. It was overshadowed by other events and, honestly, I'm not even sure what it means anymore. So much of the ACA has been marred by general misunderstanding and intentional disinformation by parties with axes to grind that I don't understand the full implications of the ruling. That will change over the next few days, as I intend to catch up on it.

All of these things have potential to be major historical events in their own right. We will have to wait and see the aftermath to be sure, but there are some very important lessons that we need to take from this (in my opinion).

Change Happens – Slowly but Surely

Too often it seems that I hear people depressingly discussing the sad state of affairs in our world. Nothing seems to change. We push and pull and nothing moves. But then, how long have people been fighting for gay rights? The first documented demonstration was in Berlin in 1922 (citing Wikipedia because the primary source is a book and I can't link to that). That pins the SCOTUS decision at 93 years after the first demonstration. What's more: gay marriage is still illegal in Germany. Americans have been fighting over racial inequality since at least the Civil War in 1861 (154 years ago), and we still haven't finished dealing with it. The smallest of the three issues (universal, affordable healthcare) has been debated endlessly since at least 1912 (103 years), when it was an issue in Theodore Roosevelt's presidential campaign.

These issues are far from resolved, but all have been a part of political discussion for nearly a century – yet we are still seeing change today. It is important to keep this in mind as we push towards a better future: change is often slow, but it does occur. (Note: I am not saying that change is always or necessarily slow. I am merely trying to point out that even when change is not obvious, in the end our efforts can pay off.)

Change Takes Work

This ties in closely with the previous point. The modern LGBT rights movement in the United States can be traced back to the Stonewall Riots in 1969 (yet another wiki link, too many sources – not enough internet sources). The movement has been active and relentless for more than 40 years! Again: the war isn't won yet, but an important battle has been. 40+ years of work led to this change, and that same dedication will lead to future change.

The Work Is Not Done

The Charleston Massacre shows just how far we are from having solved racism and racial inequality. Significant effort has been expended in dealing with these problems, and much more remains to be done. This horrific event serves as a reminder that even after significant victories (like the dismantling of Jim Crow laws, or the electing of a black president), we can't allow ourselves to be, or to become, complacent.

(I am not saying that others have been, but rather that I have been. The last few months have been enlightening for me as I've seen how much change still needs to happen.)

Next Steps

One may ask how to get involved. Unfortunately, I am not the best one to answer that question. For all my words, I have been largely a bystander. A "social media activist". I have tweeted, retweeted, faved, liked, and even donated some small amounts. However, I've not done much.

Better people to ask would be those on the front lines. Shaun King has done a tremendous job of bringing awareness to racial equality issues (especially those to do with police brutality). Wikipedia has a long list of LGBT rights organizations in the United States (and other countries).

I feel weird and a bit hypocritical having done nothing and yet writing this post, but in some way this is a call for myself more than anyone else. The SCOTUS ruling(s) show us not only that change can happen, but that change does happen. The Charleston Massacre has the dual nature of showing how much hatred remains and showing how we can move forward from tragedy towards a better future. This is my moment of realization that my attitude of depressing complacence accomplishes nothing, and that by action I may help move our society to a better place.

Anarchy Online: Why?

Written by J David Smith
Published on 23 May 2015

img

I started playing AO a bit more than a decade ago, right when they began allowing players to play for free. Free players (colloquially known as 'froobs') have access only to the base game and the Notum Wars boosters, not any of the (4 at present) expansion packs. I played on and off as a froob for much of that period, never reaching higher than level 80 (of 200).

So why do I keep coming back? More than that: why the hell did I pick up the full expansion set this last time around? It was only $20, but still: Why? I am beginning to understand, I think. The game is one giant puzzle.

I was playing my new Fixer, running around in the Shadowlands, trying to figure out where to go next to keep leveling. I googled it, found some info, and set about trying to act on it. And failed over and over again. Dangerous enemies were between me and my goal. As of writing this, I have yet to figure out a way to slip past them.

It isn't that these enemies are over-leveled for me either: they are on level, and I can fight one and sometimes even two at a time without dying. However, every entry point seems to set me against situations where I fight a minimum of two and often three of these creatures.

There are many possible ways I could deal with this. Maybe I need to temporarily blow some IP (for the uninitiated: IP increase skills) in Concealment and sneak past them. Maybe I need to go hunt for better nanos and the requisite buffs to equip and cast them. Maybe I need a better gun (or two). I don't know.

As someone who loves puzzles and is absolutely unconcerned with reaching the level cap in a timely manner, I enjoy this. The struggle just to succeed. I have fond memories of pugging ToTW on my Agent (Emallson – my namesake), pushing all the way to the legionnaires for efficient XP or the final boss encounter for the wonderful loot (though I can't remember these days what he drops). Getting there as a solo player without any consistent help was hard. For about a month I was stuck on level 41, continuously dying before dinging and feeding the XP into my bonus pool (Aside: dying loses XP, which goes into a bonus pool that gives you 1.5x XP until you've regained all of it. I really like this system).

Again: it was a puzzle. How do I survive? What can I change? Where do I go? Who do I work with? It was fun. It is fun. This is why I still play this ugly, unwieldy game. Come to think of it: its unwieldiness actually feeds into that. It gives you most of the information you could reasonably ask for, but it's scattered around. Figuring out which nanos I can reasonably buff into requires finding not only what nanos I can get (in the shop) but also what buffs I can get cast on me (by an MP, most often). Figuring out what weapons I can pull from missions without spending too much time on the search has no good answer because of the QL system. And so on.

There are a lot of things that I like about this game. There are enough of them that I feel I can look past the ugliness and unwieldiness to enjoy it. It's fun to explore this world. And that's what I want from a game: fun.

2014 in Review

Written by J David Smith
Published on 12 January 2015

Interning at IBM

When I applied for internships in December of 2013, I wasn't sure what would happen. I applied to big names – Google, Microsoft, IBM, and others as I did the year prior. In 2012-2013, I got no responses. In 2013-2014, I got many. My applications to both Google and IBM were accepted, Riot Games asked for an interview (which I unfortunately had to decline because I'd already accepted IBM's offer), and Microsoft ignored my existence (maybe because my resumé is slathered in Linux tooling and has not a whiff of Microsoft on it).

I struggled for weeks with the decision between Google and IBM. Working at Google is a dream job, but there was a catch: the project I would be working on there was boring. Meanwhile, the project I was offered at IBM was really cool and exciting. At the time, it involved significant open-source contributions. Although it changed later, the change helped refine the project goals and clarify what my team would be doing.

In the end, I chose IBM. I was both looking forward to and dreading starting there at the end of May. What if I had chosen incorrectly? Once we got started, however, all my doubt vanished. The project turned out to be just as exciting as it had sounded. Even better: I had the pleasure of working with a phenomenal group of people. On the IBM side, we had a fantastic manager ([Ross Grady](https://twitter.com/rossgrady)) and great support from the group we were working with.

On the intern side, things couldn't have been better. My team was phenomenal: John and Walker were (and are) great technically, and all four of us (me, John, Walker, and Chris) worked together without even a hint of an issue throughout the Summer. What's more, I was surprised at how welcome I felt in the intern group. I've never been very comfortable socially, and yet by the end of the Summer there is but one that I'd not call a friend.

The biggest benefit of the internship for me was not the technical knowledge I gained, the skills I developed, or the money I made. It was the opportunity to work with these people. Prior to this, I had never had the chance to work with other programmers. I'd worked in a research lab, but that has a very different focus. Seeing how capable my fellow interns were and realizing that I was actually capable of keeping up with them was a tremendous confidence boost for me.

I have no regrets about my decision to work at IBM this past Summer. I came out of it knowing more, having more friends and contacts, and with several offers for positions at IBM. I ended up declining all of them to pursue a PhD, but set up an internship with one of the security software teams for Summer 2015.

The Interview

In the middle of the Summer, I got a wholly unexpected phone call: a Google recruiter contacted me about interviewing for a full-time position. At the time, my plans for the future were undecided but leaning heavily towards the pursuit of a PhD. I told him that I would be willing to talk more after the Summer ended, when I had more time.

When I followed up with him in August/September, things moved rapidly. I was able to skip the phone interviews because I'd done well enough on the ones for the internship to receive an offer. I got to fly to California and do the interviews in person! Working full-time at Google requires passing a high bar, so being interviewed indicates that I may be close to it.

In the end, I did not receive an offer. However, I was thrilled at the thought that I might be capable of reaching and surpassing the skill level needed for entry. This also forced me to mentally work out how to deal with serious rejection. I have been generally successful throughout my life, and hadn't had any rejection on this level before. I am glad that it came at a time when I had the opportunity to stop and think about it, rather than a super-busy season.

The Fulbright Program

I also began working on an application to the Fulbright U.S. Student Program in the summer. This program – if I were accepted – would let me study at a school almost anywhere in the world. The grant covers one year, but I would be able to build a case for financial aid and a visa to continue on, should I so desire.

The application for this is for the most part not too bad. However, the two essays that go along with it (Personal Statement & Statement of Purpose) were especially difficult. I had never written anything like them before and was ill-prepared to do so. The advisor at UK was incredibly helpful in this, and I believe that I ended up with a competitive application. Regardless, I spent a solid month and a half thinking about nothing else. This prepared me well to write the statements for grad school applications, but was a significant time sink.

The worst part about this application is that I won't know the result until March of this year, while the deadline was September of last year. The long waiting period is killer, and is a problem I am facing in other areas as well.

Graduate School Applications

This is where I made my biggest mistake of the year: I did not work on grad school applications on Thanksgiving break. I took the week off: I slept, I played video games, I wrote code. I did not apply to grad school. Because of this, I was ill-prepared to meet the popular 15 December deadline. I was more prepared to meet the 1 January deadline that others have, but between the insanity of finals week (15-20 Dec.) and Christmas, ended up being largely tardy with that as well. (Also, far fewer schools have the later deadline)

I learned in 2012/2013 not to wait so long. I made a point of doing internship applications in '13 on Thanksgiving break so as to not miss deadlines. I learned the lesson, and then in arrogance forgot it. I applied to four schools: MIT, Texas A&M, UFlorida and UKansas. I have already been accepted into UKansas (0.0), but we'll see what happens.

I probably won't hear back from the other three schools until mid-March. I will have little enough time to make a decision, and will have to start planning for the Fall immediately. What really gets me is simply the waiting period. I do not know what will happen. I cannot realistically make any plans for or assumptions about after the summer until March. It sucks. I don't like it.

Goals for 2014

I didn't really set goals for 2014. One goal I did stumble upon, through meditation on Tom Shear's (Assemblage 23) Otherness, is a long-term one: be a better person. I started trying to write down a concrete list for this while writing this blog post, but I will need to think about it more. I realize how incredibly wishy-washy 'be a better person' is, and I need to nail it down so I know what I'm going for. Details will come in a blog post sometime in the next week.

Looking Forward: Goals for 2015

I am not a fan of New Years resolutions, and thus have none. However, over the course of last semester I became aware of several deficiencies in my overall behavior. In particular: my aversion to lists and my inconsistency.

Lists are helpful tools, yet I often do not use them. I saw how my dad became dependent on his lists to remember things, and I suppose I overreacted. I started keeping lists of assignments and due dates this semester, and it helped reduce the number of times that I missed an assignment due to forgetfulness.

This is one method of moving towards my present goal: becoming more consistent. Self-discipline is not one of my strong points, but I have been working on improving. The impact of this will be better control over what I buy, what I eat, and how I spend my time. It meshes well with my goal of 'be a better person' (lol), as control will allow me to be who I want to be.

I have a long way to go.

Evaluating JavaScript in a Node.js REPL from an Emacs Buffer

Written by J David Smith
Published on 1 June 2014

For my internship at IBM, we're going to be doing a lot of work on Node.js. This is awesome: Node is a great platform. However, I very quickly discovered that the state of Emacs ↔ Node.js integration is dilapidated at best (as far as I can tell, at least).

A Survey of Existing Tools

One of the first tools I came across was the swank-js / slime-js combination. However, when I (after a bit of pain) got both set up, slime promptly died when I tried to evaluate the no-op function: `(function() {})()`.

Many pages describing how to work with Node in Emacs seem woefully out of date. However, I did eventually find nodejs-repl via package.el. This worked great right out of the box! However, it was missing what I consider a killer feature: evaluating code straight from the buffer.

Buffer Evaluation: Harder than it Sounds

Most of the languages I use that have a REPL are Lisps, which makes choosing what code to run in the REPL when I mash C-x C-e pretty straightforward. The only notable exceptions are Python (which I haven't used much outside of Django since I started using Emacs) and JavaScript (which I haven't used an Emacs REPL for before). Thankfully, while the problem is actually quite difficult, a collection of functions from js2-mode, which I use for development, made it much easier.

The first thing I did was try to figure out how to evaluate things via Emacs Lisp. Thus, I began with this simple function:

(defun nodejs-repl-eval-region (start end)
  "Evaluate the region specified by `START' and `END'."
  (let ((proc (get-process nodejs-repl-process-name)))
    (comint-simple-send proc (buffer-substring-no-properties start end))))

It worked! Even better, it put the contents of the region in the REPL so that it was clear exactly what had been evaluated! Whole-buffer evaluation was similarly trivial:

(defun nodejs-repl-eval-buffer (&optional buffer)
  "Evaluate the current buffer or the one given as `BUFFER'.
`BUFFER' should be a string or buffer."
  (interactive)
  (let ((buffer (or buffer (current-buffer))))
    (with-current-buffer buffer
      (nodejs-repl-eval-region (point-min) (point-max)))))

I knew I wasn't going to be happy with just region evaluation, though, so I began hunting for a straightforward way to extract meaning from a js2-mode buffer.

js2-mode: Mooz is my Savior

Mooz has implemented JavaScript parsing in Emacs Lisp for his extension js2-mode. What this means is that I can use his tools to extract meaningful and complete segments of code from a JS document intelligently. I experimented for a while in an Emacs Lisp buffer. In short order, it became clear that the fundamental unit I'd be working with was a node. Each node is a segment of code not unlike symbols in a BNF. He's implemented many different kinds of nodes, but the ones I'm mostly interested in are statement and function nodes. My first stab at function evaluation looked like this:

(defun nodejs-repl-eval-function ()
  (interactive)
  (let ((fn (js2-mode-function-at-point (point))))
    (when fn
      (let ((beg (js2-node-abs-pos fn))
            (end (js2-node-abs-end fn)))
        (nodejs-repl-eval-region beg end)))))

This worked surprisingly well! However, it only let me evaluate functions that the point currently resided in. For that reason, I implemented a simple reverse-searching function:

(defun nodejs-repl--find-current-or-prev-node (pos &optional include-comments)
  "Locate the first node before `POS'.  Return a node or nil.
If `INCLUDE-COMMENTS' is set to t, then comments are considered
valid nodes.  This is stupid, don't do it."
  (let ((node (js2-node-at-point pos (not include-comments))))
    (if (or (null node)
            (js2-ast-root-p node))
        (unless (= 0 pos)
          (nodejs-repl--find-current-or-prev-node (1- pos) include-comments))
      node)))

This searches backwards one character at a time to find the closest node. Note that it does not find the closest function node, only the closest node. It'd be pretty straightforward to incorporate a predicate function to make it match only functions or statements or what-have-you, but I haven't felt the need for that yet.

My current implementation of function evaluation looks like this:

(defun nodejs-repl-eval-function ()
  "Evaluate the current or previous function."
  (interactive)
  (let* ((fn-above-node (lambda (node)
                          (js2-mode-function-at-point (js2-node-abs-pos node))))
         (fn (funcall fn-above-node
                      (nodejs-repl--find-current-or-prev-node
                       (point) (lambda (node)
                                 (not (null (funcall fn-above-node node))))))))
    (unless (null fn)
      (nodejs-repl-eval-node fn))))
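
The function above hands off to `nodejs-repl-eval-node`, which isn't shown in this post (the exact definition is in the full source linked below). Its name suggests it simply passes a node's source span to the region evaluator; a minimal sketch consistent with the helpers above would be:

```elisp
(defun nodejs-repl-eval-node (node)
  "Evaluate NODE by sending its source text to the REPL.
Sketch only: the real definition lives in the full source."
  (nodejs-repl-eval-region (js2-node-abs-pos node)
                           (js2-node-abs-end node)))
```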

You Know What I Meant!

My next step was to implement statement evaluation, but I'll leave that out here for now. If you're really interested, you can check out the full source.

The final step in my rather short adventure through buffer-evaluation-land was a *-dwim function. DWIM is Emacs shorthand for Do What I Mean. It's seen throughout the environment in function names such as comment-dwim. Of course, figuring out what the user means is not feasible – so we guess. The heuristic I used for my function was pretty simple:

1. If the region is active, evaluate it.
2. If the point is at the end of a line, evaluate the first statement found at or before the previous character. In many cases that character is a semicolon, so this evaluates the statement on the current line.
3. Otherwise, evaluate the first statement found at or before the point.

This is succinctly representable using cond:

(defun nodejs-repl-eval-dwim ()
  "Heuristic evaluation of JS code in a NodeJS repl.
Evaluates the region, if active, or the first statement found at
or prior to the point.
If the point is at the end of a line, evaluation is done from one
character prior.  In many cases, this will be a semicolon and will
change what is evaluated to the statement on the current line."
  (interactive)
  (cond
   ((use-region-p) (nodejs-repl-eval-region (region-beginning) (region-end)))
   ((= (line-end-position) (point)) (nodejs-repl-eval-first-stmt (1- (point))))
   (t (nodejs-repl-eval-first-stmt (point)))))
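
To keep these commands within reach, they can be bound in js2-mode buffers. This binding scheme is my own suggestion rather than part of the original code, mirroring the Lisp convention of C-x C-e for expression evaluation:

```elisp
;; Hypothetical bindings -- not from the original post.
(add-hook 'js2-mode-hook
          (lambda ()
            ;; Evaluate region/statement heuristically, Lisp-style.
            (local-set-key (kbd "C-x C-e") #'nodejs-repl-eval-dwim)
            ;; Evaluate the whole buffer.
            (local-set-key (kbd "C-c C-b") #'nodejs-repl-eval-buffer)))
```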

The Beauty of the Emacs Development Process

This whole adventure took a bit less than 2 hours, all told. Keep in mind that, while I consider myself a decent Emacs user, I am by no means an ELisp hacker. Previously, the extent of my ELisp experience had been one-off advice functions for my .emacs.d. Being a competent Lisper, I've always found ELisp pretty straightforward, but I did not imagine that this project would end up being so simple.

The whole reason it ended up being easy is because the structure of Emacs makes it very easy to experiment with new functionality. The built-in Emacs Lisp REPL had me speeding through iterations of my evaluation functions, and the ability to jump to functions by name with a single key-chord was invaluable. This would not have been possible if I had been unable to read the context from the sources of comint-mode, nodejs-repl and js2-mode. Even if I had just been forced to grep through the codebases instead of being able to jump straight to functions, it would have taken longer and been much less enjoyable.

The beautiful part of this process is really how it enables one to stand on the shoulders of those who came before. I accomplished more than I had expected in far, far less time than I had anticipated because I was able to read and re-use the code written by my fellows and precursors. I am thoroughly happy with my results and have been using this code to speed up prototyping of Node.js code. The entire source code can be found here.

A Good, Stiff Kick

Written by J David Smith
Published on 1 May 2014

This semester may be the first in which I get a grade lower than an A in an in-major class (read: CS, MA). I am taking the graduate-level Numerical Analysis course with Dr. Wasilkowski this semester. Dr. Wasilkowski is a good teacher – I actually went out of my way to make sure I took this class with him because of that and because it is his research area.

I've not done poorly by any means. My grade on the first exam was 18.75 / 20. I consistently earned good grades on the homework. However, I was barely keeping my head above water. Having counted on my good luck and general intellect to get me through without much effort, I found myself wholly unprepared for the failure of both.

The Exam

The second mid-term exam had 4 problems plus an extra. We could choose any 3 of the normal problems and solve the extra for bonus points. The exam was scored on a scale of 1-20. I solved the first two problems easily. And then I bombed the third. I did not do the extra.

My mistake on the third problem was not due to lack of knowledge, but a simple misunderstanding of the problem on my part. The problem wasn't particularly opaque either – everyone I spoke to had solved it with the correct method. Everyone but me. I did not have the padding in my grade to take such a hit. As it stands now, I am 3.2% below the requirement for an A.

The Final

Dr. Wasilkowski gives his students the option to not take the final. If you are happy with your grade prior to the final, you can take it as-is and skip the final. If you are not, you can take the final to try to improve it. There is one catch: if you take the final and do poorly, you can lower your grade.

In order to raise my grade up to an A, I have to earn 38.25 / 40 points on the final. My reaction upon seeing that went something like this:

"I must've done something wrong"
"Well damn, that's high"
"Is that even possible?"
"Oops."

Then I looked to see what the minimum I need to keep a B is: 28.25 / 40. I can do that, it's only a minimum of ~70%. Actually, I am quite confident that I could earn that without studying for the exam at all. But that isn't what I want.

A Bit of Context

This is not the first class I've taken with Wasilkowski. Previously, I had taken the CS Discrete Math course under him. (This was how I knew prior to registration that he was a good professor). In that class, he gave out quite a lot of extra credit. One person even managed to earn 140% as their overall grade – though he would not say who. I did quite well on the exams, and was easily able to qualify for skipping the final with an A.

The Kick

Today, I went to his office hours for advice. Up to this point I had been leaning towards taking the exam, but I wanted to know what he thought about taking the exam vs not. After listening to my explanation, he told me that he couldn't give me advice – it had to be my own decision. Then he said something like this:

You know, I was really disappointed with your performance this semester. You have a lot of potential, you were one of the best students in my other class, but I didn't see the effort this semester.

Boom. I have a lot of respect for Dr. Wasilkowski and his opinion, so I take what he says seriously. And he's right, ya ken? I haven't put the effort in this semester. I haven't been sufficiently familiar with the material, I've spent far more time on reddit (in and out of class) than in previous semesters, and I have relied far too much on luck.

My Semester in Review

Throughout this semester, I have been frustrated by my performance. I screwed up the first homework, but have made up for it. During the first exam, I wrote more guesses than answers. Still, I got a very good grade. Yet it always feels weird to be the hat that Indy grabs from under the door: scraping through no worse for the wear, but not by one's own doing. I mean, technically it was my own doing, but I have put forth very little effort in this class and most others this semester.

The only extra credit I've earned has been from turning in well-formatted & printed rather than handwritten homework. I did not even attempt most of the extra credit problems; minimum effort was all I gave. That's a big part of my present problem.

Two contributing factors are general tiredness and a simple experiment that I took far too long to give up. Tiredness is easy to understand, as I have a lot of stuff to do and just enough time to do it. However, my little experiment ended up hurting more than I had anticipated: I used my phone (a phablet) as a notebook. Digital distractions abound. One moment I'm taking notes – then suddenly class is over, I don't remember anything from that lecture and my notes are horrifyingly incomplete. Oops. Ultimately, these are both excuses, and the fault still lies with me.

My Resolve

As I left his office, I turned and told him that I was going to take the exam. I have resolved not only to take the exam, but to ace it. Will I fail? Probably – I am prone to silly little errors – but I will try. Even if I do fail, I am no worse off.

I am thankful for teachers like Dr. Wasilkowski. He is an excellent teacher, to be sure. Energetic, interesting, funny (he tells the best jokes that I've ever heard from a teacher) while still covering the material clearly. It is easier to pay attention in his classes than in any other I've been in. Clearly, he also isn't afraid to teach outside of the classroom – even when it involves a stern rebuke. More than his in-class capabilities, I am thankful for that. Sometimes a stiff kick in the gut is good to bring me to my senses. And by sometimes I mean often. And by often I mean pretty much always. Without the pretty much. So just always? Yea, always.

UPDATE: I got an A! ^.^

Turning on Your High-Beams: Using Beamer with Org-mode

Written by J David Smith
Published on 18 April 2014

What is Beamer?

Beamer's official site sums up the advantages and disadvantages of using Beamer fairly well, so I'll let it speak for itself:

Beamer is a LaTeX class for creating presentations that are held using a projector, but it can also be used to create transparency slides. Preparing presentations with Beamer is different from preparing them with WYSIWYG programs like OpenOffice.org's Impress, Apple's Keynote, or KOffice's KPresenter. A Beamer presentation is created like any other LaTeX document. … The obvious disadvantage of this approach is that you have to know LaTeX in order to use Beamer. The advantage is that if you know LaTeX, you can use your knowledge of LaTeX also when creating a presentation, not only when writing papers.

If you are familiar with the advantages and disadvantages of Microsoft Word versus LaTeX, then you are familiar with them for PowerPoint versus Beamer. Beamer is not well-suited to every presentation. However, it is very powerful for math-heavy or code-oriented presentations because of how the equations and source can be embedded into LaTeX.

What is Org-mode?

According to the official site, Org-mode is an Emacs major mode for "keeping notes, maintaining TODO lists, and doing project planning with a fast and effective plain-text system." It has a ton of features, but the ones I use most are:

- Org-mode is a successor to things like `outline-mode`, which (as the name suggests) was/is used for outlining. Org-mode extends it in many, many ways. I won't list them all. You can find a feature list [here](http://orgmode.org/features.html).
- Being an Emacs major mode, Org-mode also packs quite the editing punch. It comes loaded with utility functions, syntax highlighting, outline folding, and a bevy of other features. For example, I can re-order headlines (or slides) using `M-S-<up>/<down>` (that's `Meta-Shift-<up>` or `Meta-Shift-<down>` for those unused to Emacs-style key-chords). If I want to focus on a particular section, `C-x n s` will narrow the buffer to only that section (`C-x n w` will widen the buffer back to the entire file).

  ![img](//atlanis.net/media/blog/org-mode-screen.jpg)
- LaTeX is a wonderful, wonderful thing. Unfortunately, it is also *incredibly verbose!* As a markup language, this level of explicitness is necessary to remove ambiguity and give fine-grained control over output. However, in my every-day use-case I really just want to write up my homework assignments with pretty-printed math without having to worry about all of that. Org-mode fills this gap by providing a much less verbose markup that has well-defined semantics. It also allows embedded LaTeX code, so if Org-mode is lacking support in some area, you're not out of luck.
- While HTML is a bit less "wonderful" than LaTeX, it is still quite important. Org-mode documents can be exported to HTML in much the same way as LaTeX is. In fact, [this blog post](https://github.com/emallson/atlanis.net-blog/blob/master/resources/posts/org-beamer.org) was written as an Org document and then exported into HTML! You can read more about how this is done in a [previous post](http://atlanis.net/blog/posts/new-site-stasis.html).

Of course, there are many other things it can do. I myself have only barely scratched the surface of its capabilities. However, these functions are sufficient for our usage: writing Beamer presentations in Org-mode.

Why Org-mode on top of Beamer?

The primary reason I use Org-mode over Beamer rather than just Beamer is reduced verbosity. For example, here is a comparison of a simple presentation and the resulting LaTeX output:

Org-mode

#+TITLE: A Presentation
#+AUTHOR: John Doe
#+DATE: 18 April 2014
#+OPTIONS: toc:nil num:nil
#+STARTUP: beamer
* What is Beamer?
  - LaTeX styles for presentations
  - Exports to PowerPoint-style presentation format
* What is Org-mode?
  - Emacs major-mode for organized thought
  - Spiritual successor to =outline-mode=
  - Supports wide variety of markup
    - /italic/, *bold*, =code=, ~verbatim~, etc.
    - Inline LaTeX (including math!)
  - Can export to LaTeX, HTML, and other formats
* How Do I Work This Thing?
  - Edit text
  - Export to PDF
    - Intermediate LaTeX file is created and saved
    - This allows all the fine-grained tweaking of LaTeX -- optionally
  - Unfortunately, the difficult bit remains: giving the presentation

LaTeX with Beamer

(**Note:** *indentation was added after generation by me to make the LaTeX more readable*)

% Created 2014-04-18 Fri 21:26
\documentclass[presentation]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\usepackage{minted}
\usepackage{microtype}
\usetheme{default}
\author{John Doe}
\date{18 April 2014}
\title{A Presentation}
\hypersetup{
  pdfkeywords={},
  pdfsubject={},
  pdfcreator={Emacs 24.3.1 (Org mode 8.2.5h)}}
\begin{document}
\maketitle
\begin{frame}[label=sec-1]{What is Beamer?}
  \begin{itemize}
  \item \LaTeX{} styles for presentations
  \item Exports to PowerPoint-style presentation format
  \end{itemize}
\end{frame}
\begin{frame}[fragile,label=sec-2]{What is Org-mode?}
  \begin{itemize}
  \item Emacs major-mode for organized thought
  \item Spiritual successor to \texttt{outline-mode}
  \item Supports wide variety of markup
    \begin{itemize}
    \item \emph{italic}, \alert{bold}, \texttt{code}, \verb~verbatim~, etc.
    \item Inline \LaTeX{} (including math!)
    \end{itemize}
  \item Can export to \LaTeX{}, HTML, and other formats
  \end{itemize}
\end{frame}
\begin{frame}[label=sec-3]{How Do I Work This Thing?}
  \begin{itemize}
  \item Edit text
  \item Export to PDF
    \begin{itemize}
    \item Intermediate \LaTeX{} file is created and saved
    \item This allows all the fine-grained tweaking of \LaTeX{} -- optionally
    \end{itemize}
  \item Unfortunately, the difficult bit remains: giving the presentation
  \end{itemize}
\end{frame}
% Emacs 24.3.1 (Org mode 8.2.5h)
\end{document}

Dunno about you, dear reader, but to me the Org-mode file looks significantly easier to read/write/maintain than the LaTeX one. Org-mode used in this context is fundamentally a quality-of-life change from Beamer. However, it is a powerful tool in its own right.

Using Org-mode with Emacs

Now then, with the reasoning for why you'd use Org-mode out of the way I can move onto how to actually use the damn thing! If you're already comfortable with Emacs, it is very, very simple. If you aren't, then you will have a bit more difficulty due to the learning curve imposed by Emacs.

Start by opening a new file for the presentation (that's C-x C-f for the uninitiated). Provided you chose a file-name ending in .org, org-mode will already be on. You will also need to turn on org-beamer-mode using M-x org-beamer-mode RET. An attribute can be added to the file which will inform Org to start Beamer mode every time this file is opened:

#+STARTUP: beamer

Though no further attributes need to be set, there are a few others that are useful in most cases:

- `#+TITLE:` sets the title of your presentation. If it is not set, it defaults to the name of the file.
- `#+AUTHOR:` sets the author's name. If it is not set, it defaults to your username.
- `#+DATE:` sets the date used by the title slide. If unset, it defaults to the export date.
- `#+OPTIONS:` sets export options. Some useful ones are `toc` and `num`, which specify whether to create a Table of Contents and whether to number sections. Boolean options use lisp-y style. For example: `toc:nil` disables the ToC, while `toc:t` enables it (analogous to `nil` and `t` in both Common and Emacs Lisp). These options override the user's Emacs configuration. Both `toc` and `num` default to `t`.

After that, start writing! Beginning a line with *'s creates a new heading. In a LaTeX export, these are sections. Increasing the number of stars creates sub-sections. For Beamer, the top-most headings are slides (unless #+OPTIONS has H:x specifying another level x to start at) and the sub-headings are treated as sub-sections of individual slides.
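
As an illustration (my own sketch, not from the original post), setting `H:2` in the export options makes top-level headings into Beamer sections while the second-level headings become the frames:

```org
#+OPTIONS: H:2
# With H:2, frames start at the second heading level.
* A Section
** A Slide
   - A bullet on this slide
** Another Slide
   - More content
```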

Once you've written for a while, you probably want to see what your work looks like! Use C-c C-e to display all export options. From there, l P will export a PDF version of the presentation. You can view the output either by opening it manually or using the C-c C-e l O command (which creates and opens the file). Many PDF viewers automatically reload when you regenerate the PDF, which is often preferable to continuously opening and closing the viewer.

More content can be written, and presentation regeneration is just a key chord away.

So…What's My Setup?

This is all well and good, but John asked about my setup, not how to use Org mode! My workflow can be summarized like this:

1. Open the presentation's .org file in Emacs.
2. Write or revise slides as Org headings.
3. Export to PDF with C-c C-e l P.
4. Check the output in a PDF viewer that auto-reloads, then return to step 2.

Very simple and straightforward, just the way I like it. While it is not quite as simple as a WYSIWYG editor like PowerPoint (which lacks the export step), it integrates much better into my usual environment.

But Can I Use Org-mode for Presentations Without Emacs?

In theory, it is possible to use Org-mode outside of Emacs. Org documents are plain text and pretty straightforward to parse. The format has parsers written in other languages (Common Lisp, PHP and Python are all listed on the community tools page) and there are vim plugins for Org files (VimOrganizer and vim-orgmode).

In practice, it isn't really a good idea. The parsers are geared towards reading org documents into data structures for manipulation, and the vim plugins implement only a subset of Org-mode's functionality. If you want to use Org mode it is best to use Emacs along with it, even if you just edit in vim and fall back to Emacs for exporting.

If you're a vim user who is on the road to recovery (read: interested in trying Emacs), you should check out bling's Emacs->Vim Survival Guide and [Emacs bootstrapping](https://bling.github.io/blog/2013/09/09/vim-in-emacs-bootstrap/) posts.

Example Time!

Of course, all of this is moot unless you can actually generate decent-looking presentations with it, right? Since I've done some presentations recently, I'll pull a few from my stock.

- I wrote code and slides for last week's BoF presentation. The PDF output of the slides can be downloaded [here](https://atlanis.net/media/blog/org-beamer-example-output/bof.pdf). This presentation demonstrates both using LaTeX commands to override default behavior and displaying inline code with syntax highlighting. Note: the code in this presentation could have been included using `#+INCLUDE`, however it was longer than the slide width and I wasn't sure of a good way to do wrapping without either modifying the code directly or copying it into the org file. I opted for the latter to preserve proper formatting in the code itself.
- I also wrote slides (and code, though it is not included in the presentation) for a Keeping Current presentation. The PDF output of the slides can be downloaded [here](https://atlanis.net/media/blog/org-beamer-example-output/keeping-current.pdf). This Org file demonstrates inline LaTeX (optional `#+BEGIN/END_LaTeX` commands can be used to make it explicit) as well as preventing certain parts of the document from being exported (note that the *Abstract* subtree is not exported because it has the `:noexport:` tag). This feature can be used for storing speech notes with the slides without having them exported alongside them.
- As part of my upcoming summer internship, I was asked to put together a slide giving a brief biography. This was done [as an Org document](https://atlanis.net/media/blog/ibm_bio_slide.org). The PDF output can be viewed [here](https://atlanis.net/media/blog/org-beamer-example-output/ibm_bio_slide.pdf). This example demonstrates using columns and embedding images.

You might note that none of these presentations are particularly pretty. However, this is because in each case I had very little time available when they were written, and so I used the default Beamer template. You can see many better-looking examples by searching for beamer presentation on Google Images.

So Should I Use Org-mode?

Yes! At least give it a try! Org-mode has come bundled with Emacs since version 22, but you can also optionally install the latest version via `M-x package-install RET org RET`.

While I have no doubts that many people will continue to prefer PowerPoint for their presentations, using Beamer with Org-mode is an effective and in many cases superior alternative – especially in Math- and Code-heavy fields like Computer Science.

Tutoring Nightmares: Meet E

Written by J David Smith
Published on 11 April 2014

A Bit of Background

I've been tutoring since I started college – before that if you count helping classmates with homework. My freshman year of college, I tutored Algebra I for a small group of high school students. I enjoyed it, and it felt like I was helping them understand the material (it can be hard to tell sometimes).

Unfortunately, due to schedule conflicts I had to stop doing that when my sophomore year started. At the beginning of this semester, I got the opportunity to begin tutoring again through the Tau Beta Pi honor society, which I was initiated into last fall. I've enjoyed it greatly: for many students it is easy to tell if I've been helpful (if they show up more than once), which has helped me become a more effective tutor. Not to mention that hanging out with the other tutors is a great time in and of itself.

Since I mostly tutor CS (a popular major here, at least for freshman and first-semester sophomore students), I spend a lot of my time trying to explain the concepts behind C++ syntax (example: x->doSomething() vs x.doSomething() vs X::doSomething()). There is one project in particular that I've spent a lot of time helping people with: building a CLI schedule application.

The requirements for the application are pretty simple: read data from a CSV-ish file, print it out in various ways (day schedule and week schedule, mostly), take input for new events, and write it all back to a file. By this point – 2 semesters into the curriculum – students have been exposed to all of these things at least once. It's just a problem of putting it all together (and of understanding pointers). The professor for the course has even provided a helper class that reads individual lines into vectors of strings, which is the hard part of reading the file. While most students just need help filling in gaps in their understanding, occasionally they lack basic understanding about the language itself (which is a very bad sign this late in the semester) or seem unwilling or incapable of programming on their own. Occasionally, they're also assholes about it. E is one such student.

Meet E

E came for help for the first time on Tuesday. He wasn't the first there and I was busy helping another person at that time. Despite that, he proceeded to repeatedly ask me to come and help him. By the time I finally got to him, I knew I'd be in for a long haul.

This rather large programming assignment is due today. On Tuesday, he'd barely touched it. To make it worse, he exhibited little understanding of what he was doing. I helped him finish implementing one of the required classes (there are 4 very similar classes) and told him to work on the others himself while I helped other students. He gave me an odd look, packed up his laptop and left.

I should mention (it will be important later) that E is neither American nor a native English speaker. However, he has a firm enough grasp of the language to understand what I'm saying and to respond, so I am rather confident that he is able to understand enough of what he might read on the internet to finish much of the project himself.

E: Redux

I am in the habit of giving my email to students when they ask, because I can often quickly identify the source of their problem and direct them towards the solution. Having given my email address to E, it was not surprising that I got an email from him yesterday morning. The question it contained, however, was simply "when will you be tutoring today?". I responded, expecting to find him in RGAN commons (where we tutor) when I arrived. Instead, he showed up about 30 minutes later and again badgered me to come and help him while I was busy with another person who was working on a different assignment. Other students were hanging out, though they didn't need help at the moment.

E walked up to badger me again right as a member of the group I was with told a joke – everybody laughed. Everybody except E, of course, who hadn't heard the joke. He threw me an incredulous look and stated "You guys are making fun of me" quietly. Before I could respond, he stalked back to his table. I shrugged it off, somewhat thankful because he had finally stopped badgering me.

Coming up to me again about 15 minutes later, he announced that he really needed help before he had to leave in 30 minutes. Annoyed, I told my current tutoree that I'd be back once I'd helped him. He had plenty to work on anyway.

I was appalled to find that E had barely touched his code since Tuesday. One more class was implemented – mostly with copypasta from what I'd helped him with – and a legion of compiler errors (which Visual Studio had dutifully highlighted in red) were preventing him from progressing. The worst part, however, was what he said when I sat down to help him: "Fix it."

Fix it? Really?

I did a double-take when I heard that. Fix it? I'm a tutor! I'm supposed to help him understand what to do, not be an interactive debugger! Things went downhill from there. I explained to him for the third time that you need the variable types before their names in function declarations – apparently neither time I'd explained that on Tuesday had stuck and he hadn't bothered Googling the error messages. About half of the errors were in that vein.

In the process of going through God-knows-what, he made this comment to me: 'Whether I pass or not depends on the quality of the TAs and tutors, you know?'. (Note: I don't remember his exact wording, but the gist of it is the same.) I very nearly told him that he was on his own right then. The final straw came about five minutes later, when he pulled out his phone and started texting in the middle of my explanation of what the address-of (&) operator does to pointers.

Naturally, when he started texting I stopped talking and finished my memory diagram. "Continue," he told me as he motioned me towards the computer. Not the paper, the computer. And then he was texting away again. I stood and told him that I'd be back in a moment, then grabbed one of the other tutors and dragged him out to the hall. After I explained the situation to him, he advised me to give E something to work on and explain to him that I needed to help other students. So I did. Again, E immediately got up and left.

This is not how to get someone to help you

Today I got this gem of an email:

(screenshot of the email)

When I got this email, I was completely floored. I'd dedicated about 1.5 hours to helping this guy over the course of a couple of days. My normal working time for those 2 days is 3 hours, so I spent roughly *half of my time working with E!* And yet he has the gall to call me a racist for supposedly helping others more?

He caught me as I was about to head to a CS gathering and asked if I could help him. My negative response resulted in a demand to know why I had helped other people on this project more. Indeed, several other people had come for help with this project, and all of them had most of it done or were stuck on some conceptual part (such as opening a file) and were able to complete the rest on their own. The only person I spent more time with is someone who paid me to come in on the weekend and privately tutor him. Ultimately, I told him two things: (1) I didn't help others more, and (2) I couldn't help him. Neither claim was accepted.

In closing…

This situation is still developing. The project was due earlier today, and he said he was going to ask his professor for an extension. I doubt that he'll get it. Even still, I am concerned about his potential reaction to this and how it'll impact things for me – in particular tutoring. One thing I'm pretty confident about, though: I'm not going to give out my email to students as easily anymore.

World of Warcraft's Recruit-a-Friend Reward Structure is Flawed

Written by J David Smith
Published on 5 April 2014

What instigated this post?

Last night, an unnamed redditor asked the WoW sub-reddit what the fastest way to level these days is. Why? Because their girlfriend "has been wanting to start playing wow with me". Seems reasonable, right? S/he goes on to ask about RaF.

I immediately jump in and try to head off a disaster in the making. "What disaster?" one may ask. Simple: RaF dungeon spamming isn't fun. In fact, I wrote that "Personally, I wouldn't even use RaF because of how it completely screws up the structure of the early game." This set the gears in my head to whizzing frantically. What changed that made a really cool system actively harm the game? And – more importantly – how can it be fixed?

What is Recruit-a-Friend?

In order to answer those questions, it is important to understand what the RaF system actually does. Blizzard's FAQ does a good job of describing the system. There are actually a lot of perks to using RaF, but there is one in particular that really hurts the game: triple XP.

For levels 1 - 85, while in a group and similarly leveled, the recruiter and recruitee gain 3 times the normal amount of experience. This isn't simply mob kill experience either: quest experience is also affected. The result these days is that – if you aren't spamming dungeons to power-level – you out-level zones just as you're starting to get their stories. To understand the impact of this effect, we need to first dig deeper into what the reward structure for WoW is.

I Saved Westfall and all I got was this stupid T-Shirt!

World of Warcraft is not unique in its structure. You help people, kill monsters and collect rewards. There are two general classes of rewards in WoW:

  1. Power-increasing rewards

    These rewards increase the player's overall power level (although perhaps not immediately). Examples of this are loot (literal character power), gold (economic power) and experience (character power – albeit slightly delayed).
  2. Emotional rewards

    These rewards tug on the player's heart-strings. Whether it's saving an adorable little orphan boy or laughing maniacally as you help Theldurin punch Deathwing in the face, these ones make you feel good (or bad) for having done whatever it was you did. Type 1 rewards are a subset of this reward class.

In my experience, the latter are much more important than the former. This is borne out by player reactions to the Madness of Deathwing fight and to Deathwing in general. Players were more powerful than ever before, yet something was missing: the emotional reward was lacking, and it showed.

How does this relate to Recruit-a-Friend?

The RaF system increases the gain rate of a particular Type 1 reward: experience. However, it not only causes problems with the rate of gain of other Type 1 rewards, but often outright prevents the gain of Type 2 rewards!

Recently, I leveled through Darkshore. Starting at level 11, I finished the zone's quest achievement at level 24. Had I been using RaF, I'd have made it through only the first third of the quests in that time. This would have left the story hanging and broken the illusion of world-changing impact that Blizzard has worked so hard to create.

As a result, emotional investment can become a liability preventing enjoyment rather than a boon aiding it. It's like reading the first third of every Spider-Man comic in order to 'catch up' to the current issue. Sure, you would reach your goal faster, but at the cost of enjoying the process of reading comic books. And even once you were caught up, you wouldn't understand everything going on in the current issue.

I've seen situations where one player wants to get their significant other into the game using RaF. In every case I've seen where the core benefit of RaF is used to its fullest (i.e. by dungeon spamming), the SO quits playing. Therefore, I believe that the overall benefit of RaF for the new player is non-existent; in many cases it even damages their perception and enjoyment of the game.

Two Birds, One Stone

The solution to this problem is relatively simple. While simply removing the XP bonus would go a long way towards preventing the damage currently being done by RaF, why stop at simple prevention when it can be used to make the game genuinely more enjoyable?

Think back, ye die-hard WoW fans: what problem always crops up when questing as a group? Yes, that one. You know it well. Someone plays while the others are away, gets ahead in both experience and quests, and is then forced to either wait for the group to catch up, retread the content they just did, or leave the group behind.

With long-time players, this isn't much of a problem. We have alts, we have mains, and we can always do something else while the group is offline. For a new player, however, such options are severely lacking. PvP grants experience, dungeons grant experience, even gathering mats to level crafting grants experience these days! Imagine that the Priest class is the only one that really clicks with your friend. Are you going to ask them to not play when you aren't online? To roll an alt? A second priest?

This problem is solved relatively well by the combination of massively boosted XP and level granting: the increased XP rate encourages moving on to other quest chains with relative frequency and level granting ensures that the older player can keep up (most of the time). However, if triple XP is removed from the system, then the problem again rears its ugly head because the player no longer has such an incentive to move on in the middle of a quest chain.

Sure, the two players can remain evenly leveled, but what about quest progress? Forcing the new player to retread content is not exactly ideal, so why not allow the new player to catch the older one up not only in levels but also in quests?

What I am proposing is this: remove the triple XP bonus from Recruit-a-Friend, keep the level-granting mechanic, and extend granting to cover quest progress as well, so that the recruit can catch their friend up in completed quests just as they can in levels.

This would prevent XP gain from completely overriding every other kind of reward in the game, and it would allow new players to continue questing with their friends without worrying about quest dependencies or level discrepancies. In my view, this would be superior to the current system – especially since the in-game store is now the go-to way to pay for a fast 90. However, one question remains to be answered.

Why was it designed this way in the first place?

World of Warcraft is not the game that it once was. In ye olden days, when Azeroth was yet young and paladins still had only two buttons for the first 40 levels, there were fewer quest chains, and it was common – up until Outland, at least – to complete a zone without having out-leveled it. In that era, there were far fewer tales of merit told in the quests.

Way back then – nearly a full 6 years ago – tripling the experience rate made sense. It meant you'd have to do one zone to get through a level range instead of 2.5-3. But those days are gone, and now, with the world designed to take one player through a level range in one zone, it no longer makes sense.

Here's hoping that Blizzard fixes this system soon. It bothers me to think of the people potentially missing a great experience because something that should be rewarding can easily become the opposite. With all of the dramatic WoD changes incoming, this could be the perfect time to do it!