New project: A Visual Studio Code extension for time zone data files

(A boring title, I know, but it’s stuffed full of juicy SEO keywords. Mmmmm.)

As mentioned in my previous project-related post:

From now on, I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

As many people know by now, I’m a time zone nerd. I’ve given talks at conferences and meetups about problems and solutions when dealing with time zones. I sit on the time zone database mailing list and read about last-minute changes to a country’s time zone with a mixture of fascination, horror, and (mostly) amusement.

The IANA time zone database (also known as the “Olson database”) source files are incredibly rich with information. About two thirds of the lines in each file are comments about the data and its history. The project is part database, part history lesson, part social commentary.

A couple of years ago I wrote a Sublime Text package that added syntax highlighting specifically for the formatting of the time zone database files (available via Package Control). While it’s great to read the comments surrounding the data, sometimes you want the comments to be styled like comments and fade away a little, to let the lines of data stand out.
It seems to have been well-received, and Matt Johnson of Microsoft (a frequent contributor to the mailing list) suggested an improvement to the text formatting.

A few months ago, Matt contacted me asking if I was interested in porting the package over to an extension for Visual Studio Code (hereafter referred to as “VS Code”, because I’m lazy and it’s easier to type). I regularly use VS Code for coding, so I figured it was a good way for me to explore the extension ecosystem. I’ll describe how I converted it, and some of the mistakes and improvements I made along the way, in case it’s helpful for someone else.

If you want to cut to the chase and use the finished extension, it’s available as zoneinfo in the VS Code marketplace.

Version 1 — Conversion and syntax highlighting

The initial conversion was remarkably easy. As highlighted in the documentation, the VS Code team provides an extension generator. The generator has a feature that will take a TextMate language definition file (used by Sublime Text) and convert it into a VS Code extension. Using the generator in this way meant that most of the hard work was done in one go.

npm install -g yo generator-code
yo code

A screenshot of using the "yo code" command line generator tool

I provided the name vscode-zoneinfo and the location of the .tmLanguage file from the Sublime Text package, and used the default values for the rest of the options (which were mostly derived from the contents of the .tmLanguage file). At the end of it I had a zoneinfo directory containing most of what I needed.

I had to tweak some of the generated data to properly match the Sublime Text package. For example, the zoneinfo.tmLanguage file defined specific file names (due to the naming convention of the tzdb source files), but the generator converted those to file extensions, which I had to change back.
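For reference, the relevant part of the language contribution in package.json ends up looking roughly like this. The file name list below is an illustrative subset, not the extension’s full configuration:

{
  // Other package.json stuff goes here

  "contributes": {
    "languages": [
      {
        "id": "zoneinfo",
        // An illustrative subset of the tzdb source file names
        "filenames": ["africa", "antarctica", "asia", "australasia", "europe", "northamerica", "southamerica"]
      }
    ]
  }
}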

The Sublime Text package also contains a default settings file to use the correct indentation, which was not detected by the VS Code extension generator.

{
    "detect_indentation": false,
    "translate_tabs_to_spaces": false,
    "tab_size": 8
}

Adding this into the VS Code extension was as easy as adding the same JSON properties to a specific section of the package.json file:

{
  // Other package.json stuff goes here

  "contributes": {
    "configurationDefaults": {
      "[zoneinfo]": {
        "editor.detectIndentation": false,
        "editor.insertSpaces": false,
        "editor.tabSize": 8
      }
    }
  }
}

Testing that the changes worked was easy thanks to the extension generator’s default configuration. It provides a “Launch Extension” command that starts up a new window of VS Code with your extension installed.

After that, it was a matter of the usual tuning and polishing of a new project before publishing it: a proper description and keywords in package.json, taking screenshots, writing a README and changelog, adding a licence file, and going through the rigmarole of signing up to yet another publishing platform.

Version 2 — Taking advantage of a new editor

With the initial request taken care of (thanks for the review, Matt), I decided to take advantage of the extra features VS Code provided. Sublime Text is a text editor, while VS Code is an IDE, so there are many more things that can be done.

This also provided a good excuse for me to try using TypeScript. VS Code is written using TypeScript and has first-class support for it, while the extension generator can also handily create all the files and configuration needed to get started.

The first step was to turn my simple config-based extension into something that could hook into the VS Code APIs and lifecycle. I re-ran the extension generator in a separate directory to create a default TypeScript extension, then copied over the files I didn’t already have in my project.

Rather than trying to do everything at once, I broke down the tasks into manageable chunks. I knew that I’d probably have to rewrite a fair amount of code by the end of it, but it helped me stay on track (and not get overwhelmed to the point of giving up). I’ll just highlight the main points here, rather than going into full super-technical details of each step (because that’s what a commit history is for).

Get something working

“Make it work, make it right, make it fast.”

The first step when venturing into new territory like this (new APIs and new syntax in TypeScript) is to follow the documentation and get the smallest, simplest thing working. Quickly getting to a state of “yep, that works” is the best motivating factor. If it takes a long time to get anything working, that doesn’t bode well for the rest of the project.

With that in mind, I whipped up the quickest working implementation I could. A command to “Go to Symbol in File” would parse the currently-focused file, extract the names of any Link, Rule, or Zone declarations, and return them as a list of symbol objects.

But first, the documentation says there are two different models for creating a language extension:

  • In-process — everything is handled within the extension process.
  • Client/server — the extension process just sends commands to a different process, and receives data back.

Uh oh, an architectural decision to make already! Luckily, in this case, the decision turned out to be an easy one. The client/server model is best for programming languages that have their own runtime: the JS-run extension calls out to a server written in another language (Go, Ruby, Python, etc.). My extension, on the other hand, is for a “language” that is just specially-structured data with a whole lot of comments. In this way it’s similar to writing an extension for Markdown or SQL files.

So I followed the documentation examples, hooked up the simple text parser and… huzzah it worked!
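To give a flavour of that first version, here’s a stripped-down sketch of a document symbol provider for tzdb files. It shows the general shape only; the real parsing is more involved (especially for Link lines):

import * as vscode from 'vscode';

// A simplified sketch of the idea, not the extension's actual code.
class ZoneinfoDocumentSymbolProvider implements vscode.DocumentSymbolProvider {
  provideDocumentSymbols(document: vscode.TextDocument): vscode.SymbolInformation[] {
    const symbols: vscode.SymbolInformation[] = [];
    for (let i = 0; i < document.lineCount; i++) {
      const text = document.lineAt(i).text;
      // Data lines start with "Link", "Rule" or "Zone", followed by whitespace and a name
      const match = /^(Link|Rule|Zone)\s+(\S+)/.exec(text);
      if (match) {
        const location = new vscode.Location(document.uri, new vscode.Position(i, 0));
        symbols.push(new vscode.SymbolInformation(match[2], vscode.SymbolKind.Field, match[1], location));
      }
    }
    return symbols;
  }
}

export function activate(context: vscode.ExtensionContext) {
  // Register the provider for the "zoneinfo" language defined in package.json
  context.subscriptions.push(
    vscode.languages.registerDocumentSymbolProvider(
      { language: 'zoneinfo' },
      new ZoneinfoDocumentSymbolProvider()
    )
  );
}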

A screenshot of the "Go to Symbol in File" feature

Restructure

“Now that I can get symbols for one file, it should be easy enough to get them for all the tz source files in the workspace,” thought Past Gil. Of course, with the benefit of Current Gil’s hindsight, Past Gil was naïve and should have foreseen that there would be problems. Silly Past Gil.

Overall, the documentation for writing VS Code extensions is very good, and told me almost everything I needed to know. This particular task, however, was the one area where I felt lost. The API docs didn’t quite explain the difference between a few seemingly-similar APIs, and one method didn’t work the way I expected it to based on its name. I tried looking for an example of what I was trying to do in the vscode-extension-samples repository, but all the language extension examples were for the client/server model, not the single-process model I was using.

I saved some notes about “things that could be improved in the docs” and persevered. Between confusion over which APIs to use and uncertainty about how I wanted this feature to work, I ended up changing the file structure a few times.

Some basic diagrams describing multiple attempts at structuring file includes. It took 4 attempts to get the right structure.

Eventually I got it working, and I was now able to quickly search for a zone definition across the whole tz directory. Double huzzah!

A screenshot of the "Go to Symbol in Workspace" feature

Shiny and chrome

With the big chunk of confusion out of the way, the rest was pretty smooth. I had to make the text file parsing smarter to find all references from one symbol to another, which then allowed for more features:

  • Click-through navigation for going to the definition of a Zone or Rule (there’s a sketch of this just after the list). This also brought in an inline preview when hovering, absolutely free of charge (i.e. I didn’t have to do anything, it Just Worked™).
  • Find all references to a Zone or Rule, which is really just the reverse operation of “go to definition”.
  • Smarter text matching when searching for symbols in a workspace.
  • Hooking into document events to re-parse a changed document. This ensures that symbol references are correct even for unsaved changes.
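To show the shape of that “go to definition” part, here’s another deliberately simplified sketch. The symbol cache is a stand-in for the extension’s own parsed data, not its real structure:

import * as vscode from 'vscode';

// Simplified sketch: "go to definition" backed by a cache of parsed symbols.
// The cache itself (name → symbols) would be filled in by the file parser elsewhere.
class ZoneinfoDefinitionProvider implements vscode.DefinitionProvider {
  constructor(private symbolCache: Map<string, vscode.SymbolInformation[]>) {}

  provideDefinition(document: vscode.TextDocument, position: vscode.Position): vscode.Location[] {
    // Grab the symbol name under the cursor (zone names can contain "/", "_", "+" and "-")
    const wordRange = document.getWordRangeAtPosition(position, /[\w/+-]+/);
    if (!wordRange) {
      return [];
    }
    const name = document.getText(wordRange);
    const matches = this.symbolCache.get(name) || [];
    return matches.map((symbol) => symbol.location);
  }
}

// Registration returns a Disposable, which would be pushed onto context.subscriptions in activate()
const definitionProvider = vscode.languages.registerDefinitionProvider(
  { language: 'zoneinfo' },
  new ZoneinfoDefinitionProvider(new Map()) // empty cache here, purely for illustration
);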

Async I can

While looking for examples of other extensions, I realised that TypeScript would allow me to use the async/await syntax without adding any extra build processes. Since most of the VS Code APIs already use promises, it was fairly straightforward to convert to use async/await. The only real trouble I had was working out exactly where to put the async and await keywords when using Promise.all() and Array.prototype.map(). Once that was sorted, the code certainly did look neater and easier to follow. I’ll definitely be trying to use async/await more in future where I can.
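For anyone else puzzling over where the keywords go, this is the pattern that eventually clicked for me. Note that parseDocument() is a hypothetical stand-in for the extension’s own parser:

import * as vscode from 'vscode';

// parseDocument() is a hypothetical stand-in for the extension's own file parser.
declare function parseDocument(document: vscode.TextDocument): Promise<vscode.SymbolInformation[]>;

// Parse a set of tzdb files in parallel and collect all of their symbols.
async function parseAllFiles(uris: vscode.Uri[]): Promise<vscode.SymbolInformation[]> {
  // The callback given to map() is async, so map() produces an array of promises...
  const symbolsPerFile = await Promise.all(
    uris.map(async (uri) => {
      const document = await vscode.workspace.openTextDocument(uri);
      return parseDocument(document);
    })
  );
  // ...and awaiting Promise.all() hands back the resolved arrays, ready to merge.
  return ([] as vscode.SymbolInformation[]).concat(...symbolsPerFile);
}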

After those improvements, it was time for another round of cleanup: remove all the debug logging, update the README file, and add more screenshots (often the most time-consuming part of publishing a project). And lo, version 2 was done.

Campsite rule

One thing I try to abide by when working with any external system (browser, IDE, someone else’s open source code, etc.) is the Campsite Rule: “Leave the campsite cleaner than you found it.”

As mentioned earlier, while I was building the extension I made a note of anything I thought could be improved about the experience. Once the extension was finished, I started raising issues and pull requests in VS Code-related repositories. I doubt that they’ll all get fixed, because there are priorities to be maintained on a large project, but at least I can say I did my bit to try to make it better for the next person.


In the end, I’m not too fussed about the future popularity of this extension. It was designed for a niche audience (i.e. people who read and navigate the source files of the IANA time zone database), and will get limited use. But it was a great learning opportunity for me, and I had fun making it. When my free time for coding is restricted to my morning train trip only, that’s mostly what counts.

New projects: Emoji Censor and Numberwang

I’ve generally been rather lax when it comes to writing about side projects I’ve done. I’ll finish the project, mention it on Twitter a couple of times, then… well, that’s about it. Not a particularly effective way to promote ideas.

Time to do something about it. From now on, I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

Take me down to Distraction City, where the… oooh shiny!

I have a history of talking with people at tech user groups and meetups and coming up with ridiculous ideas for side projects. Mostly the ideas go no further than that because, well, they’re ridiculous (and taking a joke too far often ruins the humour). Sometimes, though, the idea is just too tempting to leave alone.

The first of these ideas came from a SydCSS meetup.

For the rest of the evening, a friend and I would whisper “that’s Numberwang!” whenever a bunch of numbers appeared on a presentation slide. Yes, we’re far too easily amused.

Wait, what the hell is Numberwang?

Numberwang started as a series of sketches on the TV show That Mitchell and Webb Look. The basic premise is a number-focused game show that makes no sense to viewers, but everyone on the show knows exactly what’s happening. I highly recommend watching the full collection of sketches.

The next morning on the train I whipped up a super-quick prototype of a browser extension that would randomly exclaim “That’s Numberwang!” when you typed a number on a web page. I let it linger for a few months, then another conversation spurred me to write it properly. Here’s a video of it in action:

It works in Chrome and Firefox, but I’m not planning on publishing it to the Chrome Web Store or Mozilla Add-ons any time soon, because jokes have limits. I was running it in my day-to-day browser for a while to see if it became really annoying. In the end, it felt like a form of Russian Roulette (albeit one with far less severe consequences). Every time the notification popped up, I’d wonder if the next one would be the event that triggered a full-page rotating animated GIF while I was using my laptop on a crowded train.

Still, the value of even silly and frivolous side projects is in learning new things. In this particular case I learned:

  • How to use browser notifications.
  • How much compatibility there is these days between Chrome and Firefox extension APIs, thanks to the WebExtensions initiative.
  • How to efficiently parse and cache all numbers found in an input, so that only changed numbers would trigger the notification (hooray for ES6 Maps and their ability to store DOM elements as keys). There’s a rough sketch of the idea just after this list.
  • How many people in my immediate social network do or do not know about Numberwang.
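As a sketch of that caching idea (the shape of it, not the extension’s actual code):

// Cache of numbers already seen, keyed by the DOM element they were typed into.
// (A rough sketch of the idea only, not the actual extension code.)
const seenNumbers: Map<Element, Set<string>> = new Map();

function getNewNumbers(field: HTMLInputElement | HTMLTextAreaElement): string[] {
  const found = field.value.match(/\d+/g) || [];
  const seen = seenNumbers.get(field) || new Set<string>();
  seenNumbers.set(field, seen);
  // Only numbers we haven't seen in this field before are candidates for a notification
  const fresh = found.filter((num) => !seen.has(num));
  fresh.forEach((num) => seen.add(num));
  return fresh;
}

document.addEventListener('input', (event) => {
  const target = event.target;
  if (target instanceof HTMLInputElement || target instanceof HTMLTextAreaElement) {
    const newNumbers = getNewNumbers(target);
    if (newNumbers.length > 0) {
      // maybeExclaimNumberwang(newNumbers); // hypothetical "That's Numberwang!" trigger
    }
  }
});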

Completely worth it.

My ❤️/👿 relationship with emoji

Those who know me well (or even for an hour) know that I’m a grumpy old man on the inside. Emoji characters are just one of many topics I could rant about for a while, but that’s a post for another day. So when Ben Buchanan tweeted this…

…I knew he was on to a winning idea.

I had a tremendous amount of fun working with the Web Speech API and Web Audio API to produce a prototype, then whipped up a quick site to allow anyone to play with it.
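As a rough illustration of those two halves (speech synthesis for the words, a generated tone for the redactions), with placeholder frequency and duration values rather than whatever the site actually uses:

// Speak the non-emoji words using the Web Speech API.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Play a censorship-style bleep using the Web Audio API.
// The 1000 Hz / 0.3 s values are placeholders, not necessarily what Emoji Censor uses.
function bleep(durationSeconds = 0.3): void {
  const ctx = new AudioContext();
  const oscillator = ctx.createOscillator();
  oscillator.frequency.value = 1000; // a classic TV censor bleep is roughly a 1 kHz tone
  oscillator.connect(ctx.destination);
  oscillator.start();
  oscillator.stop(ctx.currentTime + durationSeconds);
}

// e.g. speak('This is') followed by bleep() in place of an emoji character.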

The result is Emoji Censor, which will also redact (black out) the emoji characters visually, to match the audio censorship.

Screenshot of Emoji Censor site

Once you get into the mindset of every emoji being mentally swapped for a censorship bleep, social media sites become much funnier to read. (Especially those incredibly condescending “clappy” tweets THAT 👏 LOOK 👏 LIKE 👏 THIS.)

I then integrated Monica Dinculescu’s emoji-translate library for extra bleeping fun. This way you can convert a bunch of English words to their approximate emoji equivalents, which also get censored.

Although this was yet another distraction from what I was meant to be doing, I still managed to learn:

  • How to use speech synthesis in browsers.
  • The history and commonly-used audio frequency of censorship bleeps.
  • What defines an “emoji character” (hint: like everything to do with languages, it’s complicated).
  • Even on silly side projects, you can still end up making valuable contributions to open source libraries.

Follow-up

After this blog post was first published, I got to present a 5-minute lightning talk at the SydJS meetup about building Emoji Censor. It was even recorded, which is unusual for my 5-minute rants:

After that talk, I had enough people ask me about making an Emoji Censor browser extension that I figured it was worth doing. So, live redacting of emoji characters as you browse the web is now available as an extension for Chrome and Firefox. Enjoy!

Embrace the ridiculous

During these projects, I was ruminating on a chat I had a while ago with Tim Holman. It was innocuous enough that I doubt he remembers it, but it stuck with me nonetheless. I used to feel a bit guilty about making silly projects like these—like somehow I was wasting time that should be used for “proper” projects. Tim indirectly taught me to embrace the whimsical distractions that inevitably pop up (as you’d expect from the author of elevator.js, console.frog, and BSOD.js).

There’s still enormous value in making things that have no real practical use. Not only do you find yourself learning new and unexpected things, but you end up feeling more refreshed and willing to go back to “serious” ideas. A palate cleanser for the mind, in effect. Inspired by Tim, I followed through on these two ideas. I’ve also made all my frivolous projects the first things listed on my coding portfolio site at gilmoreorless.github.io.

To finish off this post, here’s one final thought that only occurred to me while writing it. Emoji Censor changes the length of the audio bleep for combination sequences. For example, the sequence of {U+1F469 WOMAN} {U+200D ZERO WIDTH JOINER} {U+1F467 GIRL} {U+200D ZERO WIDTH JOINER} {U+1F466 BOY} (or 👩{ZWJ}👧{ZWJ}👦) is designed to be rendered as a single glyph: 👩‍👧‍👦 (whether it displays as 1 or 3 glyphs depends on your device). Emoji Censor will play this sequence as a bleep about 3 times longer than normal. Theoretically, you could craft a string of emoji that produces either short or long bleeps and creates a hidden message in Morse code. I’ll leave that as an exercise for the reader.

Redirecting GitHub Pages after renaming a repository

One of the golden rules I try to follow in web development is “don’t break URLs”. That is, even if pages get moved around, or a site is completely revamped, the old URLs should still redirect somewhere. It’s a terrible experience to bookmark a page or follow a link from another site, only to find that the page has completely vanished. Let’s try not to make the problem of link rot even worse.

The best way to redirect old pages is to get the web server to send a 301 Moved Permanently HTTP header. This has the benefit that search engines will see the 301 header and automatically update their caches to reference the new URL. Where this can fall down is when using static site hosting such as GitHub Pages or Surge.sh. By design, you’re not allowed to send any web server directives such as custom HTTP headers, so the 301 redirect option is out.

So what do you do when you want to move some URLs around when using GitHub Pages? Here are the options I found.

Moving a page within the same repository

Let’s say you have a repository with a gh-pages branch (or equivalent publishing branch) and you want to move/rename a page within that branch. For example, going from /oldurl.html to /newurl.html. There are two options for keeping the old URL alive as a redirect, as helpfully described by GitHub’s documentation.

Option 1: Running with Jekyll

If you’re using Jekyll for static site publishing, GitHub allows you to install the jekyll-redirect-from plugin. In your new page’s metadata, you specify which URLs it will redirect from. This will generate an HTML page at the old URL that simply contains a redirect <meta> element, pointing to the new URL. I haven’t used Jekyll much, so I can’t tell you much more than the documentation.

Option 2: Hyding from Jekyll

For everything else, you can manually create an HTML page that does the equivalent of the Jekyll plugin.

  1. First, move the file to the new location in git and commit that, so that the git history of the file remains intact: git mv oldurl.html newurl.html; git commit
  2. Create a new HTML file at the old location, containing the important meta redirect definition: <meta http-equiv="refresh" content="0; url=/newurl.html">
  3. Commit the new file, push both commits, and you now have an old-to-new redirect in place.

Waaaaaait a second, redirect meta tags ⁉️

Errr, yep. Even though the W3C recommends against using redirect meta tags, they’re pretty much the only option for this scenario. This is one of the big downsides of static site hosting. However, today’s browsers optimise those redirects pretty well. For accessibility purposes (and as a back-up if the browser fails to auto-redirect), it’s still best to include some text explaining the redirect, with a link to the new page. This is the boilerplate I’ve used previously:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="refresh" content="0; url=https://newurl/">
    <link rel="canonical" href="https://newurl/">
    <title>This page has moved</title>
</head>
<body>
    <p>This page has moved. Redirecting you to <a href="https://newurl/">https://newurl/</a>&hellip;</p>
</body>
</html>

I’ve also got a canonical link in there. Even though search engines can usually understand the refresh meta instruction, there’s no guarantee that they’ll update their caches to point to the new URL. Adding a canonical link definition gives an extra hint to update to the new page.

Renaming an entire repository

GitHub allows you to rename a git repository and will automatically redirect references from the old name to the new one. This works for links to the web UI, API requests, git command line operations… but not GitHub Pages. Which means that if you have your documentation publicly published at https://your-username.github.io/old-repo-name/ and you rename the repository, none of the publicly cached references to your docs will redirect to the equivalent page at https://your-username.github.io/new-repo-name.

Note: If you’re using a custom domain name (e.g. my-superb-docs.com), this should be handled without your intervention. The public URL doesn’t change at all, just the source location of the files. For everything else, however, there’s a workaround available.

GitHub allows you to create public websites in two ways. One has already been featured above, which is having a specific part of your repo (either a branch or directory) hooked up to the publishing feature. This gives you a URL of https://your-username.github.io/repo-name/ – where the repo name will always be a subdirectory of the top-level domain.

The other way is using a repository called your-username.github.io. Anything that’s pushed to the master branch in this repo will be visible at (surprise, surprise) https://your-username.github.io/, including directories. So you can publish a subdirectory to a top-level repo, or publish to the root of a different repo, and they’ll both be visible as subdirectories of the public domain.

But what about name conflicts?

  • Pushing a directory called some-project to the master branch of the your-username.github.io repo will give you a URL of https://your-username.github.io/some-project/
  • Pushing the root files to the gh-pages branch of the some-project repository will also give you a URL of https://your-username.github.io/some-project/

Through some quick testing, I found that the gh-pages branch (or equivalent) of the project repository will always override the subdirectory of the parent repo (your-username.github.io). This is what we can use to our advantage, by treating the parent repo as fallback content when renaming the project repo.

Step by step, mile by mile

To give a solid example, this is the scenario I encountered recently. I had a presentation slide deck that I’d done for a local meetup (SydCSS), with the repo name sydcss-preso-gradient-circus. Then I gave that presentation for a mini-conference (Decompress) – I wanted to publish both sets of slides (as the content differed) at a more generic URL, while still having the old URL redirect to the new one. So https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ would redirect to https://gilmoreorless.github.io/preso-gradient-circus/sydcss/. These are the steps I took:

  1. Created sydcss-preso-gradient-circus/index.html within the gilmoreorless.github.io repo. It contained the meta redirect example shown earlier, pointing to the new URL of the SydCSS version of the presentation.
  2. Copied all the files in the gh-pages branch of the sydcss-preso-gradient-circus repo to a sydcss subdirectory within the same repo. This meant the root page / and the subdirectory /sydcss/ both had the same content.
  3. Published the Decompress version of the presentation to the decompress subdirectory of the same repo.
  4. Renamed the repo in the GitHub web interface, from sydcss-preso-gradient-circus to preso-gradient-circus. At this point, any requests to https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ would hit the fallback page created in step 1. That would then redirect them to the new URL of https://gilmoreorless.github.io/preso-gradient-circus/sydcss/.
  5. With the redirect in place, I changed the index.html of the preso-gradient-circus repo to just be a simple list of links to the different versions of the presentation.

In summary, the situation right now is:

  • https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ hits the fallback page in the gilmoreorless.github.io repo, which redirects to https://gilmoreorless.github.io/preso-gradient-circus/sydcss/.
  • https://gilmoreorless.github.io/preso-gradient-circus/ is a simple index page linking to the different versions of the presentation.
  • The SydCSS and Decompress versions of the slides live in the sydcss and decompress subdirectories of the preso-gradient-circus repo.

It seems like a lot of effort when it’s written down like that, but the whole process only took a few minutes. It took me far longer to write this post than to set up the redirects (including the testing of fallback content). I think it was worth it; even though it was just a slide deck, I care about keeping URLs working wherever possible.

Also, this post is mainly serving as documentation for Future Me in case I ever need to do it again. Anyone else getting a benefit from it is merely a bonus.

Software development amnesia

There needs to be a name for the software development version of the Gell-Mann Amnesia effect. The full version is worth reading, but I’ll just repeat the critical parts here (emphasis mine):

You open the newspaper to an article on some subject you know well. […] You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward–reversing cause and effect. […] In any case, you read with exasperation or amusement the multiple errors in a story–and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

I’d like to propose a corollary to the Gell-Mann Amnesia effect, targeted specifically at software developers:

As an experienced software developer, you know how much work goes into delivering a new feature for a reasonably-sized product. The myriad priorities that are juggled to determine the ever-shifting sands of the roadmap. You’ve been frustrated at the angry customers demanding that their personal top priority be attended to first, under the highly mistaken assumption that their use case is everyone’s use case.

You know the weeks or months of discussion and planning that happens before any code is written. The sheer number of people who will be involved in trying to get it right. The UX research, the design iterations, the architectural concerns. You know the amount of testing that needs to happen for edge cases on different platforms, or with different configuration setups. You know the rigour with which someone in the QA team will pick up on potential problems. Depending on the size of the company, there might even be an internal roll-out first, to pick up any stray bugs. Then, and only then, can it finally go through to a public release.

You breathe a sigh of relief that the feature you’ve worked so hard for is out the door. Switching to some other piece of software you use frequently, you see an update notification. The release notes mention some small, trivial new feature that has no value for you. You exclaim, “Ugh, how can they possibly be focusing on such unimportant details when they still haven’t fixed the thing that annoys me the most?” — as if somehow this other company’s planning and implementation process is any different from yours.

You switch away from your product, and forget what you know.

Should I continue canianimate.com?

Or, shouldicontinuecanianimate.

This is an open question to the front-end development community. I have a website and want to make it better. But I need to know if it’s actually valuable before putting in more work.

Back to…

A couple of years ago, as part of building some animation prototypes, I ended up creating a small library that collates which CSS properties can be animated, and how they animate. At the suggestion of Sean Curtis, I turned that data into a site called canianimate.com. It’s inspired by caniuse.com but only for CSS animations and transitions.

I launched a basic version of the site and had grander plans for it. Ultimately I wanted to add a bunch of interactive graphs that showed better details of how different types of properties were interpolated and transitioned.

But a little while after that, I attended a conference where the theme of several talks was performant animations. All the advice was to animate only the opacity and transform properties, which matched the trend of advice coming from the wider dev community — especially those working for browser vendors.

While it was definitely good advice (and still is today), it effectively took the wind out of my sails. What was the point of putting more effort into explaining the minutiae of all the different properties’ animation rules, when the general advice is to only use two of them?

…the Future

So here we are, 2 years later. The domain is coming up for renewal and I need to decide if it’s a project worth keeping alive. But this is not a choice I can make with only a single data point.

This is where I ask for your opinion.

  1. Is canianimate.com a useful reference?
  2. If not, could it be? What would make it better?

Let me know in the comments, or ping me on Twitter – @iamnotyourbroom.

P.S. To get an indication of some of the plans I had for the site, have a look through the issue backlog. I’ve also included some of my draft sketches below.

A rough sketch of how to show the interpolation of simple number values.

A rough sketch of how to show the interpolation of colour properties.

Using Make to generate Chrome extension icons

File this under “you don’t have to use JS for everything”.

Problem

I’ve built a few Chrome browser extensions, and there’s a part of making them that I generally find rather tedious: icons. To account for all the different places where icons are used, as well as multiple screen resolutions, 6 differently-sized icon files are required. Sometimes I’ve manually exported a few of these from Photoshop for one-off projects, but my most recent extension required a few updates to the icon. This was the tipping point for adding automation.

In this post I’ll detail how I used only a few lines of a Makefile to handle all icon size conversions. Yep, Make. No JavaScript-based build system with a multitude of npm modules and seconds of overhead for every run. Sometimes the old ways are the best.

Solution

Taking advantage of some special features of Make, this is what I came up with:

iconsrc := src/icon-512.png
icondir := chrome-extension/icons
iconsizes := {16,19,38,48,128,256}
iconfiles := $(shell echo $(icondir)/icon-$(iconsizes).png)

$(icondir)/icon-%.png:
    @mkdir -p $(@D)
    convert $(iconsrc) -resize $* $@

icons: $(iconfiles)

.PHONY: icons

That’s the whole thing. Now running make icons will generate 6 icons in under a second (on my machine at least), using ImageMagick’s convert command for the actual file generation:

$ time make icons
convert src/icon-512.png -resize 16 chrome-extension/icons/icon-16.png
convert src/icon-512.png -resize 19 chrome-extension/icons/icon-19.png
convert src/icon-512.png -resize 38 chrome-extension/icons/icon-38.png
convert src/icon-512.png -resize 48 chrome-extension/icons/icon-48.png
convert src/icon-512.png -resize 128 chrome-extension/icons/icon-128.png
convert src/icon-512.png -resize 256 chrome-extension/icons/icon-256.png

real    0m0.608s
user    0m0.157s
sys     0m0.065s

There are a few bits of potentially unfamiliar syntax in there, so I’ll break it down into chunks.

iconsizes := {16,19,38,48,128,256}
iconfiles := $(shell echo $(icondir)/icon-$(iconsizes).png)

These lines set up variables for the desired icon sizes and file names. The iconsizes variable is just a literal brace-expansion pattern listing the pixel widths; it’s the $(shell echo ...) call that expands the pattern, which is how the iconfiles variable ends up as a list of icon files (chrome-extension/icons/icon-16.png chrome-extension/icons/icon-19.png ...).

$(icondir)/icon-%.png:

This is a pattern rule that can build any of the desired icon files. The % symbol is a wildcard that matches any series of characters. The key thing is that the matched text can then be referenced in the rule’s commands.

@mkdir -p $(@D)

Ensure that the icons directory exists before doing anything with it. Prefixing a command with @ is a way of silencing the command, so that it doesn’t appear in the output log. Make has several special variables that are available within each ruleset, based on the defined file(s). One of these is $(@D), which equates to “the directory part of the target filepath”. The target is the file being built, so in total this line just means “create the directory where the file to be built will be put, but only if it doesn’t already exist”.

convert $(iconsrc) -resize $* $@

The part where the actual resizing happens. I’ll break this down further:

  • convert $(iconsrc) — The convert command is provided by ImageMagick, and can perform a whole bunch of transformation operations on a source image. $(iconsrc) is just referring to a variable set up at the top of the Makefile, which is the single source image I made at a large size (512px wide).
  • -resize $* — Tells convert to resize the source image to a certain pixel width… but what width? This is where the $* syntax helps. It’s another special variable built in to Make, which refers to the part of the file matched by the % symbol in the target declaration. For example, if the file name is icon-38.png, the $* variable contains just 38.
  • $@ — Yet another Make special variable, this time referencing the whole target filepath. In other words, this matches the exact file being built.

Putting it all together, running make chrome-extension/icons/icon-38.png will end up calling:

convert src/icon-512.png -resize 38 chrome-extension/icons/icon-38.png

So far the file generation commands have assumed a single file being built, but I have 6 different icon sizes, and I want to build them all with a single command. That’s where this magic line comes in:

icons: $(iconfiles)

It may not look like much, but this is the part that makes the automation worthwhile. This is an empty ruleset (i.e. it has no commands to run on its own) but has a dependency on $(iconfiles). As described above, $(iconfiles) is actually a list of all the desired icon filenames. Therefore running make icons will ensure that each of the icon files is built in turn by the $(icondir)/icon-%.png ruleset above it.

.PHONY: icons

One final touch — because Make is designed to work with actual files, and there is no file named icons, it’s best to declare icons as a prerequisite of the special .PHONY target. This tells Make that it shouldn’t look for any file named icons.

In summary

The beauty of Make (that is not often replicated by build tools in other languages) is that it will only build the files that need to be built. If the source file hasn’t changed since the last run, and the generated files are all found, Make will exit early saying it has nothing to do. If one of the icon files gets deleted, running make icons will re-generate only that file. But if the source image changes, make icons will generate 6 new icons for me.

This may not be the “best” solution, but it’s working well for me so far.

Credits: A lot of my knowledge of how to do this came from reading Isaac Schlueter’s Makefile tutorial.

Strip analytics URL parameters before bookmarking

Have you ever found yourself bookmarking a website URL that contains a heap of tracking parameters in the URL? Generally they’re Google Analytics campaign referral data (e.g. http://example.com/?utm_source=SomeThing&utm_medium=email).

I use browser bookmarks a lot, mostly as a collection of references around particular topics. The primary source for most of my web development-related bookmarks is a variety of regular email newsletters.

What many of these have in common is the addition of Google Analytics tracking parameters to the URL. This is great for the authors of the content, as they have the option to see where spikes in traffic come from. When I’m saving a link for later, though, I don’t want to keep any of the extra URL cruft. It would also give false tracking results if I opened a link with that URL data a year or two later.

So I wrote a quick snippet to strip any Google Analytics tracking parameters from a URL. I can run this just before bookmarking the link:

/**
 * Strip out Google Analytics query parameters from the current URL.
 * Makes bookmarking canonical links easier.
 */
(function () {
    var curSearch = location.search;
    if (!curSearch) {
        return;
    }
    var curParams = curSearch.slice(1).split('&');
    console.log('Stripping query parameters:', curParams);
    var newParams = curParams.filter(function (param) {
        return param.substr(0, 4) !== 'utm_';
    });
    if (newParams.length === curParams.length) {
        return;
    }
    var newSearch = newParams.join('&');
    if (newSearch) {
        newSearch = '?' + newSearch;
    }
    var newUrl = location.href.replace(curSearch, newSearch);
    history.replaceState({}, document.title, newUrl);
})();

I have saved this snippet in my browser’s devtools. See github.com/bgrins/devtools-snippets for more details of how these work.

However, some people prefer to use bookmarklets, without fiddling around in browser devtools. I’ve converted the above snippet into a bookmarklet as well. If you’re using a desktop browser, you can drag the following link to your bookmarks area to save it:

Strip GA URL params

Animated demonstration of using the bookmarklet link

LOSS: Lazy Open-Source Software

I work on open-source projects as a hobby. Writing code used to be my primary job but it hasn’t been for a few years now.

I make my hobby projects’ source code open by default. Mainly because I think that possibly someone will benefit from having it available. Primarily I write code for myself, but if someone else happens to find it beneficial, cool! I know I’ve benefited a lot from the open source projects of others in the past, so why not return the favour?

But here’s the thing. Because I’ve relegated coding to just a hobby, I’m actually terrified of something I’ve written becoming very popular. I don’t like the thought of one of my projects hitting 10000+ GitHub stars (or however else you want to measure popularity) — then getting a heap of issues and pull requests as a result. In fact, that thought scares the shit out of me. I don’t want that responsibility.

I have so many project ideas that arrive in my head far faster than I can build them. I write them all down, and watch the list grow ever larger. At some point I had to learn to accept that a huge majority of them would never even be started, let alone finished. I had to prioritise them. Over time, I realised that certain types of ideas keep getting relegated to the bottom of the list. Not the ones that would require the most work to create, but the ones that would have the largest amount of ongoing maintenance.

Thus I realised that my model for coding is not FOSS, it’s LOSS — Lazy Open-Source Software. I’m happy to write something new and try out different ideas, but coming back to maintain old projects is painful. Some people say that’s just ignoring problems, but I don’t particularly care.

My coding time is pretty much just restricted to train rides to/from the office where I work, and maybe a bit of time of an evening. Therefore I try to maximise the enjoyment I get out of that time. I fuck around with some code for a bit and make something that I like. But most of the time I like to focus on other things at home — family, non-coding hobbies, and just generally switching off.

What I’m trying to say is… actually, I have no idea what I’m trying to say. I just thought of the LOSS acronym and ran with it. I guess the underlying point I’m making is, if you’re relying on me to fix up some old open source code, I’m sorry. It’s on the list somewhere.

Accessible presentations

Recent events have had me thinking a lot about accessible presentations at conferences and user groups. Myriad factors have kicked me into considering the topic seriously. Accessibility comes in many forms, and doesn’t just apply to people with disabilities. The core consideration is: Are you excluding anyone with the way you present your content?

To start with, I work with Sean Curtis, who is Atlassian’s resident accessibility expert. Sean has, among other things, been trying to educate everyone about better colour contrast in presentation slides, for both internal and external presentations.

Then Sean and I attended A11y Camp — a one-day event in Sydney dedicated to accessibility on the web. This event guaranteed there would be people with vision impairments attending (and presenting). Therefore all presenters had been asked to audibly describe any slides they were relying on for visual impact. The key point was to not ever rely on the visuals alone. I remember one presenter really forcing himself to read out quotes and describe images, which he said was so different from his normal presentation style, but it greatly helped the audience.

Recently I had the pleasure of presenting at JSConf EU, which uses live transcription services during all the presentations. As preparation for this I was asked to send through as much material for my talk as possible ahead of time. This would assist with building transcription dictionaries for the specific technical terms used at a developer-focused conference. This got me thinking about the accessibility of my talk for more reasons than just vision impairment.

I’ve also been reading a lot lately about vestibular disorders, which can result in certain types of visual motion making people physically ill. All of these factors combined to make me think hard about how I was going to present my talk. Some of the following tips are things I’ve always done, but others I had to put more conscious effort into.

Things I try to do for a presentation

1. Ensure text and images are readable

Large text, large images, high contrast between text and background colours. This is good practice anyway, regardless of accessibility considerations. Make sure someone sitting at the back of the room can still see what’s on your slides — we’re not all young programmers with perfect eyesight. This also applies to picking colours that don’t look the same for someone with colour blindness.

2. Describe any purely visual elements

How to get this wrong: show something on your slides, then point at it and say “and this is the result” with no further explanation. While it may seem funny or obvious to a majority of your audience, you still shouldn’t assume that everyone has understood the message. Thus, if the crux of your talk or the punchline of a joke involves pointing at an animated GIF with no description, that’s excluding anyone in the audience who can’t see it or who takes longer to process visual information (due to cognitive disabilities).

I like to think of it in the same way as putting an alt attribute on an image. On a web page, if the image is purely decorative and adds nothing to the content, it’s ok to have an empty attribute (alt=""), but in all other cases the image should have a description. It’s the same with a presentation. If the image is purely decorative and adds nothing to the content, it’s ok to not mention it, but in all other cases the image should be audibly described. This doesn’t have to be a clunkily-worded halting of your verbal flow in order to say “and on screen right now is an image of a squirrel”. If done right, the description of what’s on screen can be woven into the flow and narrative of your speech without ever feeling unnatural.

3. Only share slides online if they make sense on their own

Something I’ve quoted before, but is worth repeating here:

The gist is: our slides are not the talks, our slides aid the talks we are giving. They are a visual catalyst for the things we talk about. When you see something and you hear about it at the same time it is more likely to stick. It is as simple as that.

[…] If I look at the PDF of a talk a few weeks later and only see pretty images without remembering what they meant at that time I get confused and frustrated.

— Christian Heilmann, On controversial slides, talk distribution and lack of context

Your talk is about you talking. A talk should start from the words you say, with slides being a progressive enhancement. And because they are an enhancement, they don’t always make sense on their own, devoid of context. I only put slides online if I think there’s enough information in the slides alone to get a point across. For me, this only works for in-depth technical topics such as easing or gradients (in fact, I still added an initial explanation slide to the gradients presentation specifically for people reading the slides online later).

One technique I’ve seen that I really like is putting the slides online in the form of an article with slides and text side-by-side. Each slide is presented next to the words that were spoken for that section. It’s a great adaptation of a talk from an in-person presentation to an on-screen essay, changing the format to best suit the medium in which it’s published. The two people I’ve seen consistently do this well are Maciej Cegłowski and Bret Victor.

4. Limit animations

There are many people who don’t like lots of animations whizzing around on a screen. Slides with lots of fancy animation effects can make people with vestibular disorders feel sick. Personally, I don’t want to make my audience feel ill just because of how I’ve chosen to present something. And regardless of vestibular disorders, I often find the super-amazing 3D transform effects of slide transitions quite distracting. They’re cool/cute the first couple of times, but over the course of a whole talk they just get in the way and take attention away from what the presenter is saying.

5. Don’t just take my word for it

There is also a good list of tips for accessible presentations published by the W3C.

How did I do?

For my JSConf EU talk, my slides had almost no animations. I made sure to be conscious of describing what was on screen (although I know I failed at a couple of points). But I know I won’t always get these tips 100% right. I’m not perfect and I know I’ll make mistakes, but I’ll keep trying to improve.

Also, these tips aren’t necessarily going to work for every presentation. For example, a talk about colour blending is inherently visual and would be very hard to describe to someone who doesn’t see colours. But it’s worth trying to be as inclusive as possible.

Stop “winning” everything

It’s time for a Grumpy Old Man rant – the kind that I occasionally unleash at the poor people I work with, but that I rarely put the effort into typing for public consumption.

“You’ve won the internet.”

“This is it. I’ve found the best gif. EVER!”

“This wins the prize for best sign.”

“World’s best pull request description.”

Why must everything be constantly turned into a competition? Why can’t we just enjoy something on its own merits without frothing with hyperbole about its place in the annals of human history? By constantly over-inflating the importance of every tiny insignificant thing, we produce a false economy. When everyone and everything is special, nothing is.

Not everything is special. Not everything is the best. And that’s perfectly fine – in fact it’s desired. Life is not purely black and white, it is full of nuances and shades of grey (far more than 50). Without lows as a counter-balance, the highs have no reference and no meaning. But instead of embracing that, we seem to be desperately polarising and categorising all that we encounter.

We are creating a false dichotomy. It has become more and more common to treat things we see on the internet as a binary state: “It must be best in class, even if we have to invent a new class for it to be best of” versus “It’s not even worth my time to look at it”. It’s like George W. Bush declaring “You’re either with us or against us.”

By all means, enjoy things that are amusing. But seeing someone write something clever doesn’t mean they’ve “won the internet.” (Quite frankly, the internet’s been won so many times that it’s surely just a faded hand-me-down prize by now.) Enjoy it for what it is – an isolated moment in time, unencumbered by the need to measure and compare it.

Step away from the white spotlight and the black abyss, and come join me over here in the grey areas. (Just don’t get me started on the hyperbole of sports commentators.)