New Blog (Again)

Written by J David Smith
Published on 8 September 2015


My old blog engine worked well for the most part. The major wrench in the works was the dependency on Emacs as an exporter. It prevented me from doing fancier things with the content, because templating by string concatenation is a pain in the ass. It also would routinely break with updates to org-mode. (To be frank, my most significant gripe with Emacs these days is how difficult it is to maintain a statically versioned config. Once installed, things don't update; but whenever I re-run the setup – which happens more often than I want to admit when I'm hopping back and forth between machines – things will inexplicably break because a MELPA package updates.)

I really liked the Clojure piece: that bit was very pleasant to work with. I didn't take the time to understand how some of the pieces *cough*optimus*cough* worked, but it still did exactly what I wanted.

Ditching org-mode

I ultimately ditched org-mode entirely, which was rather disappointing. The problem was that nothing supported it, and I got sick of rolling my own workarounds when Markdown covers 99% of my use case and the edge cases are covered by inline HTML. (Yes, I know ox-md exists and will let me export to Markdown, but there isn't much point in exporting from org when it is giving me relatively little. The amount of fiddling required for the more advanced org features to render the way I want them to is too much for me.)

I really like org-mode, so this was a tough sell for me. I even went through and created an org AST parser in Clojure (using the output of org-element as input) using Enlive to transform it to HTML, but it was finicky as hell and I knew that I would not want to update it when the output of org-element changes next.

Really, I could have plugged in markdown instead of org as the parser for my existing blog and gotten away with it. But no. That adventure is over for now; I have other adventures that are consuming what was formerly fiddling-with-blog-engine time.

Tufte CSS

I fell in love with Tufte CSS as soon as I saw it. I don't know if it is actually a great choice for my blog, but I'm gonna give it a shot! The highlights of it are that it has excellent font design, is incredibly simple, and has this lovely concept of margin notes. Margin notes are really simple in concept but I've never seen them on a blog before. I am rather fond of asides and frequently littered my posts with parentheticals containing them. I believe that margin notes are better suited for this.

No More Comments

I never really had issues with my Disqus comments, but I also never had much use for them either. Nobody commented. They provided no analytics and I doubt that I'd have used them anyway. If people want to comment on a blog post, they can email me, or tweet at @emallson.

Why Hexo?

I could describe some of the things I like about it, but honestly: it was the first batteries-included static blog engine for Node that I came across. It is doing everything I want for right now, so I'm unlikely to change it for the moment.

In Conclusion...

This is one part of my effort to update my site as a whole. Updating the style of my blog is an important piece. I have updated my main page as well, and am debating whether I should stick with Bootstrap or go with Tufte. I feel like I could accomplish a lot with that margin to give more info and character to specific events, but we will see. We will see.

How To Set Up an Encrypted, Compressed Filesystem in Arch Linux

Written by J David Smith
Published on 15 August 2015

The best example I have of this is a large dataset I'm downloading from a REST API as we speak. The current uncompressed size is 25G, yet the amount of space used on this partition has only increased by about 5G so far. (The size is reported by du -hs, which does not report compressed size on a btrfs-compressed partition.)

This document is intended to be a guide on how to set up a disk (especially an SSD, which will best take advantage of these features) to use both encryption and compression. Please read the entire guide at least once before attempting installation. In particular, from step 4 onward there are gaps in the process where 'normal' installation continues (and for which I have not duplicated the normal instructions). While none of the steps are irreversible, everything will be easier if you understand the whole process before diving in.

The Goal

One frustration I've always had with FS setup guides is that they often don't start with what they intend to give you. I will not make that mistake. The ultimate result of this guide should be a fresh Arch Linux installation with:

  1. Full-disk encryption via LUKS (everything except /boot)
  2. LVM volumes for /, /home, and swap inside the encrypted container
  3. Transparent btrfs compression on / and /home
  4. Encrypted swap, with optional hibernation support

WARNING: It is very important that you do not use a swapfile on btrfs! It will not work! You have been warned!

Note: Much of the LVM-on-LUKS material is now covered on the Arch Wiki, which I did not realize when beginning to write. The material used to be much more scattered. I pieced together much of the contents of this post from reading various blogs and the dm-crypt wiki page.

Step 0: Pre-Setup


Unless you are working with a brand-new drive, double-check that you have all the data you need backed up. Unlike normal formatting, where blocks are typically touched in an ordered fashion, encrypted data will be spread across the drive. The chance to retrieve old data will very quickly vanish!

With that said, grab the latest Arch CD and burn it to a disc, then boot from it. (Remember to pull up this document on a phone or another computer, or to print it off!)

Step 1: Initial Partitioning

Using your favorite partition editor (I personally am a fan of parted), create 2 partitions:

  1. /boot (See this page for UEFI systems)
  2. A blank partition consuming the rest of the drive (or some portion of it – your choice)

For simplicity, I will use sda1/2 to refer to these partitions. In the real world, it is best to use their UUIDs to reference them.

Step 2: Encryption

Setting up disk encryption is surprisingly easy with cryptsetup. (Again, I make no promises about the security of your data! The default cryptsetup settings are pretty solid, but not necessarily optimal!)

  1. # cryptsetup luksFormat /dev/sda2
    This command sets up encryption on /dev/sda2. It should prompt you for a passphrase – please remember it! (You can replace the passphrase with a key on a flash drive or some other setup later. Setting LUKS up to use anything other than the default passphrase setup is outside the scope of this guide.)
  2. # cryptsetup open --type luks /dev/sda2 vg
    This command sets up a mapping from /dev/mapper/vg to the (decrypted) contents of the drive.

Step 3: LVM

To create a set of LVM volumes: (I use LVM here because – last I knew – swap partitions can't be on btrfs sub-volumes. Since LVM is already needed, there isn't much point in adding yet another layer of indirection with btrfs sub-volumes on top of LVM volumes.)

  1. # pvcreate /dev/mapper/vg
    This command creates an LVM physical volume. See the man page for more details on what that actually means.
  2. # vgcreate vg /dev/mapper/vg
    This command creates a volume group on the physical volume at /dev/mapper/vg.
  3. # lvcreate -L <N>G vg -n swap
    lvcreate creates a logical volume in a volume group. Again, see the man page for more details on the actual meaning of the terminology.
    Replace <N> by the amount of RAM you have. So if you had 4GB, it'd be -L 4G.
  4. # lvcreate -L 30G vg -n root
    This partition will be used for /. I like having a fairly large amount of space, especially as some dev kits (looking at you, Android) clock in at rather heinous sizes.
  5. # lvcreate -l +100%FREE vg -n home
    Finally, use the rest of the space for home.
  6. # mkfs.btrfs /dev/vg/root; mkfs.btrfs /dev/vg/home; mkswap /dev/vg/swap
    Create the filesystems on each of the partitions. Compression is set after creation.
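Before moving on, it is worth sanity-checking the volume layout. (This check is my suggestion, not part of the original steps.)

```
# verify that the volume group and its logical volumes exist
vgdisplay vg
lvs vg
# confirm the stack: sda2 -> LUKS mapping -> LVM volumes
lsblk /dev/sda2
```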

Step 4: Compression

Continue with the normal installation with two exceptions:

When mounting either btrfs volume, pass the -o compress=lzo option to mount. This will enable compression of newly-written data. (In fact, existing btrfs partitions can be compressed on the fly simply by setting compress=lzo or compress=zlib in /etc/fstab.)
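For example, when mounting the freshly-created volumes to begin installation, the mounts might look like this (a sketch – adjust mount points to taste):

```
# mount the root and home volumes with lzo compression enabled
mount -o compress=lzo /dev/vg/root /mnt
mkdir -p /mnt/home
mount -o compress=lzo /dev/vg/home /mnt/home
swapon /dev/vg/swap
```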

When generating the /etc/fstab file, add the compress=lzo option to the fourth column. If you are using an SSD, adding noatime,discard,ssd is also recommended. (Note that enabling discard has security ramifications! Discard will remove any chance of claiming plausible deniability and will reveal some of the usage patterns of the disk – though it will not reveal any data. In my case, I find this tradeoff worthwhile in order to extend the life of the drive.) When labeling the drives in /etc/fstab, the command lsblk -o NAME,LABEL,UUID can be used to locate the LABELs or UUIDs of your volumes. *It is strongly recommended that you use those instead of the dev-path format!*
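A sketch of what the resulting fstab entries might look like (the UUIDs here are placeholders – substitute the ones lsblk reports for your volumes):

```
# /etc/fstab (example; UUIDs are placeholders)
UUID=<root-uuid>  /      btrfs  rw,noatime,discard,ssd,compress=lzo  0 0
UUID=<home-uuid>  /home  btrfs  rw,noatime,discard,ssd,compress=lzo  0 0
UUID=<swap-uuid>  none   swap   defaults                             0 0
```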

Step 5: Bootloader

Continue with normal installation until you are setting up the boot loader. (If this is your first time setting up a boot loader on UEFI, it may seem as if the world has suddenly become a confusing and dangerous place.) I recommend using systemd-boot (formerly known as gummiboot). Any feelings about systemd aside, it is really simple and easy to use. See the Arch Wiki for more info.

Step 5.1: Configure mkinitcpio

Two hooks need to be added to mkinitcpio: encrypt and lvm2. Add them – in that order – to the HOOKS line of /etc/mkinitcpio.conf after the keyboard hook and before the filesystems hook. If you also want to set up hibernation, add the resume hook just before the filesystems hook. If you are using an alternate keymap (like colemak or dvorak), add the keymap hook immediately before the keyboard hook.

The placement of the hooks is important! They are run in the order they are listed. This ordering makes sure that the keyboard is enabled before decryption is attempted – otherwise no passphrase could be entered – and that decryption occurs before filesystems are mounted.
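With those additions, the HOOKS line should look something like this (a sketch based on the default Arch hooks – your line may include others):

```
# /etc/mkinitcpio.conf -- hook order matters!
HOOKS="base udev autodetect modconf block keyboard encrypt lvm2 resume filesystems fsck"
```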

Run mkinitcpio -p linux to rebuild the initramfs.

Step 5.2: Configure the Kernel Parameters

Any bootloader you use should provide a way to configure kernel parameters; see the relevant wiki page for details on how to do it for your specific bootloader. There are three parameters that are important:

  1. cryptdevice=/dev/sda2:vg – tells the encrypt hook which device to decrypt and what to name the mapping (append :allow-discards to pass TRIM through on an SSD)
  2. root=/dev/vg/root – the logical volume to mount as /
  3. resume=/dev/vg/swap – the swap volume to resume from when hibernating

My entire (working!) kernel parameter line is:

cryptdevice=/dev/sda2:vg:allow-discards root=/dev/vg/root quiet rw resume=/dev/vg/swap

Step 6: Finish & Enjoy!

Everything should be in order, so finish the installation process and reboot. If you have set things up correctly, then after booting you should be greeted with a prompt for your passphrase.

That's it! Your / and /home partitions are both transparently compressed and encrypted (in that order), and your swap partition is encrypted! (Additionally: if you followed the instructions to enable hibernation, then `systemctl hibernate` should work and rebooting should prompt for your passphrase before resuming.)

On my laptop, compressing /home has gotten me 15-30% more storage (depending on what I have on home at any given time – large text files like JSON data compress better than small text files or binary data like videos). If I were using zlib instead of lzo or used the compress-force mount option, it'd be even more. A 15% storage gain may not seem like much, but that's an extra 30GB of space on my 200GB /home partition. Given that SSDs are typically smaller than their magnetic-platter siblings, every additional byte helps.

Why I Stopped Using ES6

Written by J David Smith
Published on 18 July 2015

Pushing ClojureScript or Elm didn't seem like a great way to spend my time, so I instead chose to toy with another relatively new bit of technology: EcmaScript 6. This page has a great overview of the new features coming to JavaScript with ES6, but most of them haven't actually made it in yet. I used the Babel transpiler to compile the code down from ES6 to ES5.

I was initially going to title this post "Why I Stopped Using Babel", but that would make it sound like there was some problem with Babel. I have had no issues whatsoever with Babel. The transpilation time was almost negligible (~1s for my code, combined with ~4s of browserify run time), it didn't perceivably impact performance (even when I was profiling inner loops written in ES6), and it never caused any bugs. On the whole, Babel is excellent software and if you want to use ES6 now, I highly recommend it. But there's the catch: you have to want to use ES6 now. And slowly, over the course of a couple of months, my desire to do so was sapped away (through no fault of Babel, and almost no fault of ES6).

The problems I had were mostly with integration. Two very important pieces of my workflow are Tern and Istanbul. Tern provides auto completion and type-guessing that is integrated into Emacs. Istanbul provides code coverage reports. Neither of them supports ES6. With Istanbul, it was possible to work around this by running babel on my code and then covering the ES5 output. However, the coverage reports were then off because of the extra code that babel has to insert in order to properly simulate ES6 in ES5. Tern, on the other hand, offered no workaround. Fat arrows alone would have been workable – I discovered I could copy and paste Tern's handling of normal functions over to them and it worked more or less as expected – but everything else was a wash.

So why not ditch Tern and put up with the Istanbul workaround until it gets ES6 support? As I used ES6 over the summer, I came to realize that in 99% of my usage, it wasn't much of an improvement. let is certainly useful (and the way it always should have been), arrow functions are awesome, and for(a of as) finally gives a sane looping construct in the language. Other than that, the only feature that's really exciting is destructuring, and while it is a bit of a pain to destructure complex data by hand, it isn't something that I have to do often. Classes were not of any use to me for this project either. None of my data made sense to represent as a class. Although in theory my React components would make sense as classes, I'd rather use the old, well-documented, clear method that has support for mixins (which would have to be implemented through inheritance were I to use ES6 classes).
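For illustration, the handful of features I actually got mileage out of – let, arrow functions, for...of, and destructuring – amount to code like this (a contrived sketch, not code from the project):

```javascript
// let: block-scoped bindings, the way it always should have been
// arrow functions: concise, with lexically-bound `this`
const square = (x) => x * x;

// for...of: finally, a sane looping construct over iterables
function sumOfSquares(xs) {
  let total = 0;
  for (const x of xs) {
    total += square(x);
  }
  return total;
}

// destructuring: unpack complex data without writing it out by hand
const { label, scores } = { label: 'demo', scores: [1, 2, 3] };
const [first, ...rest] = scores;
```

Useful, but as noted above: a modest improvement, not a transformative one.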

The decision ultimately came down to three things:

  1. I wasn't getting much (just let, for of, fat arrows, and destructuring) from ES6
  2. ES6 vs ES5 is just one more thing my team would have to pick up after I'm gone.
  3. ES6→ES5 transpilation is a thing that somebody would have to support after I'm gone, and there is no telling how long it will be before it is no longer needed.

In the interest of making the life of my successor a tiny bit easier to manage, I ultimately chose to ditch ES6 for ye olde ES5. I had to throw out a bunch of the ES6 prototype code anyway, so there was very little additional cost in stripping it out of the code. Ultimately, I believe that this was the right decision for this project. Although losing those few additional features I was using was a bit painful, gaining the proper support of my tools and losing the incidental complexity of transpilation was worth it, I think.

I'll probably still use ES6 with Babel for small side projects. (Anything large won't be in JS, even if it compiles to it!) If you want to try out ES6, Babel is a very safe and easy way to do it. I look forward to the day that ES6 has widespread support and Babel is…well, still needed for ES7 transpilation, but that's for another day.


(Aside: I don't like the import syntax, and don't even get me started on classes and inheritance in ES6.)

Change Can Happen

Written by J David Smith
Published on 27 June 2015

The past two weeks have been big ones for the United States and the world at large. So much has happened. Some of it was good, some was bad, but all was important.

First, on June 17th, Dylann Roof killed 9 people in a racially motivated attack on a black church in Charleston, South Carolina. He apparently intended to start a race war, but that didn't happen. Instead, we saw people unite against a symbol of racial hatred: the Confederate flag. Today – ten days later – Brittany "Bree" Newsome climbed the flag pole at the South Carolina capitol and did what the South Carolinian politicians would not: took down the flag. It went back up soon after, but the internet exploded in support of her action. And one must not forget President Obama's incredible eulogy for those lost in the massacre.

That was not the only significant event in recent days, though. Yesterday, on the 26th of June, 2015, SCOTUS ruled that bans on gay marriage were unconstitutional. This was followed by much hate from the Republican end of the US political spectrum. Some states are even considering not issuing any marriage licenses. Justice Scalia issued an opinion on the decision (pdf link) that alternated between being entertaining, baffling, and scary. However, the general public seemed to react very positively to the news.

Yet another major event is the Supreme Court's 6-3 decision in favor of the Affordable Care Act. I heard much less about this one. It was overshadowed by other events and, honestly, I'm not even sure what it means anymore. So much of the ACA has been marred by general misunderstanding and intentional disinformation by parties with axes to grind that I don't understand the full implications of the ruling. That will change over the next few days, as I intend to catch up on it.

All of these things have potential to be major historical events in their own right. We will have to wait and see the aftermath to be sure, but there are some very important lessons that we need to take from this (in my opinion).

Change Happens – Slowly but Surely

Too often it seems that I hear people depressingly discussing the sad state of affair in our world. Nothing seems to change. We push and pull and nothing moves. But then, how long have people been fighting for gay rights? The first documented demonstration was in Berlin in 1922 (citing Wikipedia because the primary source is a book and I can't link to that). That pins the SCOTUS decision at 93 years after the first demonstration. What's more: gay marriage is still illegal in Germany. Americans have been fighting over racial inequality since at least the Civil War in 1861 (154 years ago), and we still haven't finished dealing with it. The smallest of the three issues (universal, affordable healthcare) has been debated endlessly since at least 1912 (103 years), when it was an issue in Theodore Roosevelt's presidential campaign.

These issues are far from resolved, but all have been part of political discussion for nearly a century or more – yet we are still seeing change in them today. It is important to keep this in mind as we push towards a better future: change is often slow, but it does occur. (Note: I am not saying that change is always slow or necessarily slow. I am merely trying to point out that even when change is not obvious, in the end our efforts can pay off.)

Change Takes Work

This ties in closely with the previous point. The modern LGBT rights movement in the United States can be traced back to the Stonewall Riots in 1969 (yet another wiki link, too many sources – not enough internet sources). The movement has been active and relentless for more than 40 years! Again: the war isn't won yet, but an important battle has been. 40+ years of work led to this change, and that same dedication will lead to future change.

The Work Is Not Done

The Charleston Massacre shows just how far we are from having solved racism and racial inequality. Significant effort has been expended in dealing with these problems, and much more remains to be done. This horrific event serves as a reminder that even after significant victories (like the dismantling of Jim Crow laws, or the electing of a black president), we cannot allow ourselves to become complacent.

(I am not saying that others have been, but rather that I have been. The last few months have been enlightening for me as I've seen how much change still needs to happen.)

Next Steps

One may ask how to get involved. Unfortunately, I am not the best one to answer that question. For all my words, I have been largely a bystander. A "social media activist". I have tweeted, retweeted, faved, liked, and even donated some small amounts. However, I've not done much.

Better people to ask would be those on the front lines. Shaun King has done a tremendous job of bringing awareness to racial equality issues (especially those to do with police brutality). Wikipedia has a long list of LGBT rights organizations in the United States (and other countries).

I feel weird and a bit hypocritical having done nothing and yet writing this post, but in some way this is a call for myself more than anyone else. The SCOTUS ruling(s) show us not only that change can happen, but that change does happen. The Charleston Massacre has the dual nature of showing how much hatred remains and showing how we can move forward from tragedy towards a better future. This is my moment of realization that my attitude of depressing complacence accomplishes nothing, and that by action I may help move our society to a better place.

Anarchy Online: Why?

Written by J David Smith
Published on 23 May 2015


I started playing AO a bit more than a decade ago, right when they began allowing players to play for free. Free players (colloquially known as 'froobs') have access only to the base game and the Notum Wars boosters, not any of the (4 at present) expansion packs. I played on and off as a froob for much of that period, never reaching higher than level 80 (of 200).

So why do I keep coming back? More than that: why the hell did I pick up the full expansion set this last time around? It was only $20, but still: Why? I am beginning to understand, I think. The game is one giant puzzle.

I was playing my new Fixer, running around in the Shadowlands, trying to figure out where to go next to keep leveling. I googled it, found some info, and set about trying to act on it. And failed over and over again. Dangerous enemies were between me and my goal. As of writing this, I have yet to figure out a way to slip past them.

It isn't that these enemies are over-leveled for me either: they are on level, and I can fight one and sometimes even two at a time without dying. However, every entry point seems to set me against situations where I fight minimum two and often three of these creatures.

There are many possible ways I could deal with this. Maybe I need to temporarily blow some IP (for the uninitiated: IP increase skills) in Concealment and sneak past them. Maybe I need to go hunt for better nanos and the requisite buffs to equip and cast them. Maybe I need a better gun (or two). I don't know.

As someone who loves puzzles and is absolutely unconcerned with reaching the level cap in a timely manner, I enjoy this. The struggle just to succeed. I have fond memories of pugging ToTW on my Agent (Emallson – my namesake), pushing all the way to the legionnaires for efficient XP or the final boss encounter for the wonderful loot (though I can't remember these days what he drops). Getting there as a solo player without any consistent help was hard. For about a month I was stuck on level 41, continuously dying before dinging and feeding the XP into my bonus pool (Aside: dying loses XP, which goes into a bonus pool that gives you 1.5x XP until you've regained all of it. I really like this system).

Again: it was a puzzle. How do I survive? What can I change? Where do I go? Who do I work with? It was fun. It is fun. This is why I still play this ugly, unwieldy game. Come to think of it: its unwieldiness actually feeds into that. The game gives you most of the information you could reasonably ask for, but it's scattered around. Figuring out which nanos I can reasonably buff into requires finding not only what nanos I can get (in the shop) but also what buffs I can have cast on me (most often by an MP). Figuring out what weapons I can pull from missions without spending too long searching has no good answer at all, because of the QL system. And so on.

There are a lot of things that I like about this game. There are enough of them that I feel I can look past the ugliness and unwieldiness to enjoy it. It's fun to explore this world. And that's what I want from a game: fun.

2014 in Review

Written by J David Smith
Published on 12 January 2015

Interning at IBM

When I applied for internships in December of 2013, I wasn't sure what would happen. I applied to big names – Google, Microsoft, IBM, and others as I did the year prior. In 2012-2013, I got no responses. In 2013-2014, I got many. My applications to both Google and IBM were accepted, Riot Games asked for an interview (which I unfortunately had to decline because I'd already accepted IBM's offer), and Microsoft ignored my existence (maybe because my resumé is slathered in Linux tooling and has not a whiff of Microsoft on it).

I struggled for weeks with the decision between Google and IBM. Working at Google is a dream job, but there was a catch: the project I would be working on there was boring. Meanwhile, the project I was offered at IBM was really cool and exciting. At the time, it involved significant open-source contributions. Although it changed later, the change helped refine the project goals and clarify what my team would be doing.

In the end, I chose IBM. I was both looking forward to and dreading starting there at the end of May. What if I had chosen incorrectly? Once we got started, however, all my doubt vanished. The project turned out to be just as exciting as it had sounded. Even better: I had the pleasure of working with a phenomenal group of people. On the IBM side, we had a fantastic manager (Ross Grady) and great support from the group we were working with.

On the intern side, things couldn't have been better. My team was phenomenal: John and Walker were (and are) great technically, and all four of us (me, John, Walker, and Chris) worked together without even a hint of an issue throughout the Summer. What's more, I was surprised at how welcome I felt in the intern group. I've never been very comfortable socially, and yet by the end of the Summer there was but one person that I'd not call a friend.

The biggest benefit of the internship for me was not the technical knowledge I gained, the skills I developed, or the money I made. It was the opportunity to work with these people. Prior to this, I had never had the chance to work with other programmers. I'd worked in a research lab, but that has a very different focus. Seeing how capable my fellow interns were and realizing that I was actually capable of keeping up with them was a tremendous confidence boost for me.

I have no regrets about my decision to work at IBM this past Summer. I came out of it knowing more, having more friends and contacts, and with several offers for positions at IBM. I ended up declining all of them to pursue a PhD, but set up an internship with one of the security software teams for Summer 2015.

The Interview

In the middle of the Summer, I got a wholly unexpected phone call: a Google recruiter contacted me about interviewing for a full-time position. At the time, my plans for the future were undecided but leaning heavily towards the pursuit of a PhD. I told him that I would be willing to talk more after the Summer ended, when I had more time.

When I followed up with him in August/September, things moved rapidly. I was able to skip the phone interviews because I'd done well enough on the ones for the internship to receive an offer. I got to fly to California and do the interviews in person! Working full-time at Google requires passing a high bar, so being interviewed at all indicates that I may be close to it.

In the end, I did not receive an offer. However, I was thrilled at the thought that I might be capable of reaching and surpassing the skill level needed for entry. This also forced me to mentally work out how to deal with serious rejection. I have been generally successful throughout my life, and hadn't had any rejection on this level before. I am glad that it came at a time when I had the opportunity to stop and think about it, rather than during a super-busy season.

The Fulbright Program

I also began working on an application to the Fulbright U.S. Student Program in the summer. This program – if I were accepted – would let me study at a school almost anywhere in the world. The grant covers one year, but I would be able to build a case for financial aid and a visa to continue on, should I desire.

The application for this is for the most part not too bad. However, the two essays that go along with it (Personal Statement & Statement of Purpose) were especially difficult. I had never written anything like them before and was ill-prepared to do so. The advisor at UK was incredibly helpful in this, and I believe that I ended up with a competitive application. Regardless, I spent a solid month and a half thinking about nothing else. This prepared me well to write the statements for grad school applications, but was a significant time sink.

The worst part about this application is that I won't know the result until March of this year, while the deadline was September of last year. The long waiting period is killer, and is a problem I am facing in other areas as well.

Graduate School Applications

This is where I made my biggest mistake of the year: I did not work on grad school applications on Thanksgiving break. I took the week off: I slept, I played video games, I wrote code. I did not apply to grad school. Because of this, I was ill-prepared to meet the popular 15 December deadline. I was more prepared to meet the 1 January deadline that others have, but between the insanity of finals week (15-20 Dec.) and Christmas, ended up being largely tardy with that as well. (Also, far fewer schools have the later deadline)

I learned in 2012/2013 not to wait so long. I made a point of doing internship applications in '13 on Thanksgiving break so as to not miss deadlines. I learned the lesson, and then in arrogance forgot it. I applied to four schools: MIT, Texas A&M, UFlorida and UKansas. I have already been accepted into UKansas (0.0), but we'll see what happens.

I probably won't hear back from the other three schools until mid-March. I will have little enough time to make a decision, and will have to start planning for the Fall immediately. What really gets me is simply the waiting period. I do not know what will happen. I cannot realistically make any plans for or assumptions about after the summer until March. It sucks. I don't like it.

Goals for 2014

I didn't really set goals for 2014. The one goal I did have, I stumbled upon through meditation on Tom Shear's (Assemblage 23) Otherness. It is a long-term goal: be a better person. I started trying to write down a concrete list of what this means while writing this blog post, but I will need to think about it more. I realize how incredibly wishy-washy 'be a better person' is, and need to nail it down so I know what I'm going for. Details will be in a blog post sometime in the next week.

Looking Forward: Goals for 2015

I am not a fan of New Years resolutions, and thus have none. However, over the course of last semester I became aware of several deficiencies in my overall behavior. In particular: my aversion to lists and my inconsistency.

Lists are helpful tools, yet I often do not use them. I saw how my dad became dependent on his lists to remember things and suppose I overreacted. I started keeping lists of assignments and due dates during this semester, and it helped reduce the number of times that I missed an assignment due to forgetfulness.

This is one method of moving towards my present goal: becoming more consistent. Self-discipline is not one of my strong points, but I have been working on improving. The impact of this will be better control over what I buy, what I eat, and how I spend my time. It meshes well with my goal of 'be a better person' (lol), as control will allow me to be who I want to be.

I have a long way to go.

Evaluating JavaScript in a Node.js REPL from an Emacs Buffer

Written by J David Smith
Published on 1 June 2014

For my internship at IBM, we're going to be doing a lot of work on Node.js. This is awesome: Node is a great platform. However, I very quickly discovered that the state of Emacs ↔ Node.js integration is dilapidated at best (as far as I can tell, at least).

A Survey of Existing Tools

One of the first tools I came across was the swank-js / slime-js combination. However, when I (after a bit of pain) got both setup, slime promptly died when I tried to evaluate the no-op function: `(function() {})()`.

Many pages describing how to work with Node in Emacs seem woefully out of date. I did eventually find nodejs-repl via package.el, though, and it worked great right out of the box! However, it was missing what I consider a killer feature: evaluating code straight from the buffer.

Buffer Evaluation: Harder than it Sounds

Most of the languages I use that have a REPL are Lisps, which makes choosing what code to run in the REPL when I mash C-x C-e pretty straightforward. The only notable exceptions are Python (which I haven't used much outside of Django since I started using Emacs) and JavaScript (which I haven't used an Emacs REPL for before). Thankfully, while the problem is actually quite difficult, a collection of functions from js2-mode, which I use for development, made it much easier.

The first thing I did was try to figure out how to evaluate things via Emacs Lisp. Thus, I began with this simple function:

(defun nodejs-repl-eval-region (start end)
  "Evaluate the region specified by `START' and `END'."
  (interactive "r")
  (let ((proc (get-process nodejs-repl-process-name)))
    (comint-simple-send proc (buffer-substring-no-properties start end))))

It worked! Even better, it put the contents of the region in the REPL so that it was clear exactly what had been evaluated! Whole-buffer evaluation was similarly trivial:

(defun nodejs-repl-eval-buffer (&optional buffer)
  "Evaluate the current buffer or the one given as `BUFFER'.
`BUFFER' should be a string or buffer."
  (interactive)
  (let ((buffer (or buffer (current-buffer))))
    (with-current-buffer buffer
      (nodejs-repl-eval-region (point-min) (point-max)))))

I knew I wasn't going to be happy with just region evaluation, though, so I began hunting for a straightforward way to extract meaning from a js2-mode buffer.

js2-mode: Mooz is my Savior

Mooz has implemented JavaScript parsing in Emacs Lisp for his extension js2-mode. What this means is that I can use his tools to extract meaningful and complete segments of code from a JS document intelligently. I experimented for a while in an Emacs Lisp buffer. In short order, it became clear that the fundamental unit I'd be working with was a node. Each node is a segment of code not unlike symbols in a BNF. He's implemented many different kinds of nodes, but the ones I'm mostly interested in are statement and function nodes. My first stab at function evaluation looked like this:

(defun nodejs-repl-eval-function ()
  (let ((fn (js2-mode-function-at-point (point))))
    (when fn
      (let ((beg (js2-node-abs-pos fn))
            (end (js2-node-abs-end fn)))
        (nodejs-repl-eval-region beg end)))))

This worked surprisingly well! However, it only let me evaluate functions that the point currently resided in. For that reason, I implemented a simple reverse-searching function:

(defun nodejs-repl--find-current-or-prev-node (pos &optional include-comments)
  "Locate the first node before `POS'.  Return a node or nil.
If `INCLUDE-COMMENTS' is set to t, then comments are considered
valid nodes.  This is stupid, don't do it."
  (let ((node (js2-node-at-point pos (not include-comments))))
    (if (or (null node)
            (js2-ast-root-p node))
        (unless (= 0 pos)
          (nodejs-repl--find-current-or-prev-node (1- pos) include-comments))
      node)))

This searches backwards one character at a time to find the closest node. Note that it does not find the closest function node, only the closest node. It'd be pretty straightforward to incorporate a predicate function to make it match only functions or statements or what-have-you, but I haven't felt the need for that yet.
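For illustration, a predicate-taking variant might look like the following sketch. This is my own invention to show the idea, not code from the article's source; the name `nodejs-repl--find-prev-matching-node` and the `match-p` parameter are hypothetical:

```elisp
;; Hypothetical sketch: walk backwards from POS until a node
;; satisfying MATCH-P is found.  Not part of the original source.
(defun nodejs-repl--find-prev-matching-node (pos match-p)
  "Locate the first node at or before `POS' satisfying `MATCH-P'.
Return the node, or nil if none is found."
  (let ((node (js2-node-at-point pos t)))
    (if (and node
             (not (js2-ast-root-p node))
             (funcall match-p node))
        node
      (unless (= 0 pos)
        (nodejs-repl--find-prev-matching-node (1- pos) match-p)))))
```

Called with `#'js2-function-node-p`, for example, it would skip past comments and plain statements until it reached a function node.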

My current implementation of function evaluation looks like this:

(defun nodejs-repl-eval-function ()
  "Evaluate the current or previous function."
  (interactive)
  (let* ((fn-above-node (lambda (node)
                          (js2-mode-function-at-point (js2-node-abs-pos node))))
         (fn (funcall fn-above-node
                      (nodejs-repl--find-current-or-prev-node
                       (point) (lambda (node)
                                 (not (null (funcall fn-above-node node))))))))
    (unless (null fn)
      (nodejs-repl-eval-node fn))))

You Know What I Meant!

My next step was to implement statement evaluation, but I'll leave that off of here for now. If you're really interested, you can check out the full source.
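As a rough idea of the shape statement evaluation takes, here is a simplified sketch. This is an approximation for illustration only, not the actual implementation from the full source:

```elisp
;; Simplified sketch of statement evaluation.  The real
;; implementation in the linked source handles more edge cases.
(defun nodejs-repl-eval-first-stmt (pos)
  "Evaluate the first statement found at or before `POS'."
  (interactive "d")
  (let ((node (nodejs-repl--find-current-or-prev-node pos)))
    (when node
      ;; Climb to the enclosing statement-level node, then send
      ;; that whole span of the buffer to the REPL.
      (let ((stmt (js2-node-parent-stmt node)))
        (when stmt
          (nodejs-repl-eval-region (js2-node-abs-pos stmt)
                                   (js2-node-abs-end stmt)))))))
```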

The final step in my rather short adventure through buffer-evaluation-land was a *-dwim function. DWIM is Emacs shorthand for Do What I Mean. It's seen throughout the environment in function names such as comment-dwim. Of course, figuring out what the user means is not feasible – so we guess. The heuristic I used for my function was pretty simple:

  1. If the region is active, evaluate it.
  2. If the point is at the end of a line, evaluate the first statement at or before the previous character (which is often a semicolon, so this picks up the statement on the current line).
  3. Otherwise, evaluate the first statement at or before the point.

This is succinctly representable using cond:

(defun nodejs-repl-eval-dwim ()
  "Heuristic evaluation of JS code in a NodeJS repl.
Evaluates the region, if active, or the first statement found at
or prior to the point.
If the point is at the end of a line, evaluation is done from one
character prior.  In many cases, this will be a semicolon and will
change what is evaluated to the statement on the current line."
  (interactive)
  (cond
   ((use-region-p) (nodejs-repl-eval-region (region-beginning) (region-end)))
   ((= (line-end-position) (point)) (nodejs-repl-eval-first-stmt (1- (point))))
   (t (nodejs-repl-eval-first-stmt (point)))))
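To make the command feel native, it can be bound in js2-mode buffers. The key choice below is just an example of mine, not something the article prescribes:

```elisp
;; Example binding: mirror the Lisp convention of C-x C-e for
;; "evaluate what I mean" in JavaScript buffers.
(with-eval-after-load 'js2-mode
  (define-key js2-mode-map (kbd "C-x C-e") #'nodejs-repl-eval-dwim))
```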

The Beauty of the Emacs Development Process

This whole adventure took a bit less than 2 hours, all told. Keep in mind that, while I consider myself a decent Emacs user, I am by no means an ELisp hacker. Previously, the extent of my ELisp experience was one-off advice functions for my .emacs.d. Being a competent Lisper, using ELisp has always been pretty straightforward, but I did not imagine that this project would end up being so simple.

The whole reason it ended up being easy is because the structure of Emacs makes it very easy to experiment with new functionality. The built-in Emacs Lisp REPL had me speeding through iterations of my evaluation functions, and the ability to jump to functions by name with a single key-chord was invaluable. This would not have been possible if I had been unable to read the context from the sources of comint-mode, nodejs-repl and js2-mode. Even if I had just been forced to grep through the codebases instead of being able to jump straight to functions, it would have taken longer and been much less enjoyable.

The beautiful part of this process is really how it enables one to stand on the shoulders of those who came before. I accomplished more than I had expected in far, far less time than I had anticipated because I was able to read and re-use the code written by my fellows and precursors. I am thoroughly happy with my results and have been using this code to speed up prototyping of Node.js code. The entire source code can be found here.

A Good, Stiff Kick

Written by J David Smith
Published on 1 May 2014

This semester may be the first semester that I get a grade less than an A in any in-major class (read: CS, MA). I am taking the graduate-level Numerical Analysis course with Dr. Wasilkowski this semester. Dr. Wasilkowski is a good teacher – I actually went out of my way to make sure I took this class with him because of that and because it is his research area.

I've not done poorly by any means. My grade on the first exam was 18.75 / 20. I consistently earned good grades on the homework. However, I was barely keeping my head above water. Having counted on my good luck and general intellect to get me through without much effort, I found myself wholly unprepared for the failure of both.

The Exam

The second mid-term exam had 4 problems plus an extra. We could choose any 3 of the normal problems and solve the extra for bonus points. The exam was scored on a scale of 1-20. I solved the first two problems easily. And then I bombed the third. I did not do the extra.

My mistake on the third problem was not due to lack of knowledge, but a simple misunderstanding of the problem on my part. The problem wasn't particularly opaque either – everyone I spoke to had solved it with the correct method. Everyone but me. I did not have the padding in my grade to take such a hit. As it stands now, I am 3.2% below the requirement for an A.

The Final

Dr. Wasilkowski gives his students the option to not take the final. If you are happy with your grade prior to the final, you can take it as-is and skip the final. If you are not, you can take the final to try to improve it. There is one catch: if you take the final and do poorly, you can lower your grade.

In order to raise my grade up to an A, I have to earn 38.25 / 40 points on the final. My reaction upon seeing that went something like this:

"I must've done something wrong"
"Well damn, that's high"
"Is that even possible?"

Then I looked to see what the minimum I need to keep a B is: 28.25 / 40. I can do that, it's only a minimum of ~70%. Actually, I am quite confident that I could earn that without studying for the exam at all. But that isn't what I want.

A Bit of Context

This is not the first class I've taken with Wasilkowski. Previously, I had taken the CS Discrete Math course under him. (This was how I knew prior to registration that he was a good professor). In that class, he gave out quite a lot of extra credit. One person even managed to earn 140% as their overall grade – though he would not say who. I did quite well on the exams, and was easily able to qualify for skipping the final with an A.

The Kick

Today, I went to his office hours for advice. Up to this point I had been leaning towards taking the exam, but I wanted to know what he thought about taking the exam vs not. After listening to my explanation, he told me that he couldn't give me advice – it had to be my own decision. Then he said something like this:

You know, I was really disappointed with your performance this semester. You have a lot of potential, you were one of the best students in my other class, but I didn't see the effort this semester.

Boom. I have a lot of respect for Dr. Wasilkowski and his opinion, so I take what he says seriously. And he's right, ya ken? I haven't put the effort in this semester. I haven't been sufficiently familiar with the material, I've spent far more time on reddit (in and out of class) than in previous semesters, and I have relied far too much on luck.

My Semester in Review

Throughout this semester, I have been frustrated by my performance. I screwed up the first homework, but have made up for it. During the first exam, I wrote more guesses than answers. Still, I got a very good grade. Yet it always feels weird to be the hat that Indy grabs from under the door: scraping through no worse for the wear, but not by one's own doing. I mean, technically it was my own doing, but I have put forth very little effort in this class and most others this semester.

The only extra credit I've earned has been from turning in well-formatted & printed rather than handwritten homework. I did not even attempt most of the extra credit problems; minimum effort was all I gave. That's a big part of my present problem.

Two contributing factors are general tiredness and a simple experiment that I took far too long to give up. Tiredness is easy to understand, as I have a lot of stuff to do and just enough time to do it. However, my little experiment ended up hurting more than I had anticipated: I used my phone (a phablet) as a notebook. Digital distractions abound. One moment I'm taking notes – then suddenly class is over, I don't remember anything from that lecture and my notes are horrifyingly incomplete. Oops. Ultimately, these are both excuses, and the fault still lies with me.

My Resolve

As I left his office, I turned and told him that I was going to take the exam. I have resolved to both take the exam, but also to ace it. Will I fail? Probably – I am prone to silly little errors – but I will try. Even if I do fail, I am no worse off.

I am thankful for teachers like Dr. Wasilkowski. He is an excellent teacher, to be sure. Energetic, interesting, funny (he tells the best jokes that I've ever heard from a teacher) while still covering the material clearly. It is easier to pay attention in his classes than in any other I've been in. Clearly, he also isn't afraid to teach outside of the classroom – even when it involves a stern rebuke. More than his in-class capabilities, I am thankful for that. Sometimes a stiff kick in the gut is good to bring me to my senses. And by sometimes I mean often. And by often I mean pretty much always. Without the pretty much. So just always? Yea, always.

UPDATE: I got an A! ^.^

World of Warcraft's Recruit-a-Friend Reward Structure is Flawed

Written by J David Smith
Published on 5 April 2014

What instigated this post?

Last night, an unnamed redditor asked the WoW sub-reddit what the fastest way to level these days is. Why? Because their girlfriend "has been wanting to start playing wow with me". Seems reasonable, right? S/he goes on to ask about RaF.

I immediately jump in and try to head off a disaster in the making. "What disaster?" one may ask. Simple: RaF dungeon spamming isn't fun. In fact, I wrote that "Personally, I wouldn't even use RaF because of how it completely screws up the structure of the early game." This set the gears in my head to whizzing frantically. What changed that made a really cool system actively harm the game? And – more importantly – how can it be fixed?

What is Recruit-a-Friend?

In order to answer those questions, it is important to understand what the RaF system actually does. Blizzard's FAQ does a good job of describing the system. There are actually a lot of perks to using RaF, but there is one in particular that really hurts the game: triple XP.

For levels 1 - 85, while in a group and similarly leveled, the recruiter and recruitee gain 3 times the normal amount of experience. This isn't simply mob kill experience either: quest experience is also affected. The result these days is that – if you aren't spamming dungeons to power-level – you out-level zones just as you're starting to get their stories. To understand the impact of this effect, we need to first dig deeper into what the reward structure for WoW is.

I Saved Westfall and all I got was this stupid T-Shirt!

World of Warcraft is not unique in its structure. You help people, kill monsters and collect rewards. There are two general classes of rewards in WoW:

  1. Power-increasing rewards

    These rewards increase the player's overall power level (although perhaps not immediately). Examples of this are loot (literal character power), gold (economic power) and experience (character power – albeit slightly delayed).
  2. Emotional rewards

    These rewards tug on the player's heart-strings. Whether it's saving an adorable little orphan boy or laughing maniacally as you help Theldurin punch Deathwing in the face, these ones make you feel good (or bad) for having done whatever it was you did. Type 1 rewards are a subset of this reward class.

In my experience, the latter are much more important than the former. This is upheld by observations of the reaction to the Madness of Deathwing fight and Deathwing in general. While players got more powerful than ever before, there was something missing. Emotional reward was lacking, and it showed.

How does this relate to Recruit-a-Friend?

The RaF system increases the gain rate of a particular Type 1 reward: experience. However, it not only causes problems with the rate of gain of other Type 1 rewards, but often outright prevents the gain of Type 2 rewards!

Recently, I leveled through Darkshore. Starting at level 11, I finished the quest achievement at level 24. Had I been using RaF, I'd have only made it through the first 1/3rd of the quests in that time. This would have left the story hanging and broken the illusion of world-changing impact that Blizzard has worked so hard to create.

As a result, emotional investment can become a liability preventing enjoyment rather than a boon aiding it. It's like reading the first third of every Spider-Man comic in order to 'catch up' to the current issue. Sure, you would reach your goal faster, but at the cost of enjoying the process of reading comic books. Even once you were caught up, you wouldn't understand all of the stuff going on in the current issue.

I've seen situations where one player wants to get their significant other into the game using RaF. In every case I've seen where the core benefit of RaF is used to its fullest (ie by dungeon spamming), the SO quits playing. Therefore, I believe that the overall benefit of RaF for the new player is non-existent and in many cases it even causes damage to their perception and enjoyment of the game.

Two Birds, One Stone

The solution to this problem is relatively simple. While simply removing the XP bonus would go a long way towards preventing the damage currently being done by RaF, why stop at simple prevention when it can be used to make the game genuinely more enjoyable?

Think back, ye die-hard WoW fans: what problem always crops up when questing as a group? Yes, that one. You know it well. Someone plays while the others are away, gets ahead in both experience and quests, and is then forced to either wait for the group to catch up, retread the content they just did, or leave the group behind.

With long-time players, this isn't much of a problem. We have alts, we have mains, and we can always do something else while the group is offline. For a new player, however, such options are severely lacking. PvP grants experience, dungeons grant experience, even gathering mats to level crafting grants experience these days! Imagine if the Priest class is the only one that really clicks with your friend. Are you going to ask them to not play when you aren't online? To roll an alt? A second priest?

This problem is solved relatively well by the combination of massively boosted XP and level granting: the increased XP rate encourages moving on to other quest chains with relative frequency and level granting ensures that the older player can keep up (most of the time). However, if triple XP is removed from the system, then the problem again rears its ugly head because the player no longer has such an incentive to move on in the middle of a quest chain.

Sure, the two players can remain evenly leveled, but what about quest progress? Forcing the new player to retread content is not exactly ideal, so why not allow the new player to catch the older one up not only in levels but also in quests?

What I am proposing is this: remove the triple XP bonus, keep level granting, and extend the granting mechanic to cover quest progress, so that the new player can catch the older player up on quests as well as levels.

This would prevent XP gain from completely overriding any other sort of reward in the game and would allow new players to continue questing with their friends without worrying about quest dependencies and level discrepancies. To my view, this would be superior to the current system – especially since the store is now the go-to way to pay for a fast 90. However, one question remains to be answered.

Why was it designed this way in the first place?

World of Warcraft is not the game that it once was. In ye olden days, when Azeroth was yet young and paladins still only had 2 buttons for the first 40 levels, there were fewer quest chains and it was common – up til Outland, at least – to complete a zone without having out-leveled it. In that era, there were far fewer tales of merit told in the quests.

Way back then – near a full 6 years ago – tripling the experience rate made sense. It meant that you'd have to do one zone to get through a level range instead of 2.5-3. Still, those days are gone and now, with the world designed to take one player through a level range in one zone, it no longer makes sense.

Here's hoping that Blizzard fixes this system soon. It bothers me to think of the people potentially missing a great experience because something that should be rewarding can easily become the opposite. With all of the dramatic WoD changes incoming, this could be the perfect time to do it!

New Site (Built with Stasis)

Written by J David Smith
Published on 3 March 2014

First off: why not Wordpress?

Nothing against Automattic, but after having run several WP blogs I sympathize with this guy:

Wondering how I managed to end up building a Wordpress site today. For those of you that do this regularly, you have my deepest sympathies.

— Daniel Grant (@danieljohngrant) February 25, 2014

I don't want to run another WP blog and I don't want to have to hack any more PHP. The solution?

Static Site Generation

Static Site Generation is a pretty simple concept. You have some templates and some content, you want to put the content in the templates, and only want to do so once. The site content is transformed into HTML once by the site owner (aka me) and then served without any extra work by the server.

This has some big advantages. First, it makes a very fast website, as the bottleneck is not HTML generation time but simple transmission time. Second, it is extremely secure, because malicious content serving is impossible short of someone gaining root access on my server (or someone hijacking Disqus; I trust Disqus' security people to do better than I could – it is their job, after all).

Even better, because the code doesn't need to interact with the server, I am not restricted to things that play nicely with the server (which Clojure actually does through Java, but that's not a place I'd like to go right now).

I only had one big requirement – that I be able to write my posts in Org format – but I also wanted something that I could hack on. Clojure is the language I'm most interested in right now, so I started looking in that direction. I toyed around with several options – even going so far as to fork nakkaya's static – but eventually settled on magnar's stasis.

The biggest problem I had with static was how it dealt with posts. This snippet says it best:

(defn list-files [d]
  (let [d (File. (dir-path d))]
    (if (.isDirectory d)
      (FileUtils/listFiles d (into-array ["markdown"
                                          "html"]) true)
      [])))

(defn create-latest-posts
  "Create and write latest post pages."
  []
  (let [posts-per-page (:posts-per-page (config))
        posts (partition posts-per-page
                         (reverse (list-files :posts)))
        pages (partition 2 (interleave (reverse posts) (range)))
        [_ max-index] (last pages)]
    ;; ... renders and writes each page of posts ...
    ))

As you can see, the posts list is created by using partition on what amounts to a directory listing. While this isn't a huge problem, my blog posts aren't organized that way and I didn't want to change that. Having dates in the file name looks ugly to me – never mind the fact that it duplicates the #+DATE headers that are in all of my posts.

This is where stasis comes in. It's a no-batteries-included framework, which means basically all it does is apply the templates to my sources. This leaves designing the templates, template framework and sources to me. I used the whattheemacsd source as my stasis-basis and built from there.
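To give a sense of how little stasis hands you out of the box, a site boils down to a map from URL paths to page content. This is a minimal sketch of the idea, not this site's actual source; the namespace and the `get-pages` / `export` names are my own placeholders:

```clojure
;; Minimal stasis sketch: a "site" is just a map of paths to
;; content (strings, or functions of the incoming request).
(ns blog.core
  (:require [stasis.core :as stasis]))

(defn get-pages []
  {"/index.html" (fn [request] "<h1>Hello, stasis!</h1>")})

;; During development, serve the pages as a ring handler:
(def app (stasis/serve-pages get-pages))

;; For deployment, export everything to static files on disk:
(defn export []
  (stasis/export-pages (get-pages) "dist"))
```

Everything else – templating, front-matter parsing, post ordering – is left to you, which is exactly the trade-off described above.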

The biggest thing I had to do was implement conversion of Org files into HTML. While not the fastest option (in terms of running time), I opted to simply leave that to emacs by calling it in batch mode. The #+STUFF headers are trivial to parse using regexp, so pulling in my #+DATE's was a non-issue.

Ultimately, I'm pretty happy with how things turned out. This is the first post I've written using the new system and it's worked great!

What next?

There are a couple of features that I want to build, starting with category and tag views. After that, I may look at implementing an elisp command to replace my current deployment method (a shell script) so that I can deploy directly from the editor.

Technology & Style Credits

The full source code is available on github.