New projects: Emoji Censor and Numberwang

I’ve generally been rather lax when it comes to writing about side projects I’ve done. I’ll finish the project, mention it on Twitter a couple of times, then… well, that’s about it. Not a particularly effective way to promote ideas.

Time to do something about it. From now on, I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

Take me down to Distraction City, where the… oooh shiny!

I have a history of talking with people at tech user groups and meetups and coming up with ridiculous ideas for side projects. Mostly the ideas go no further than that because, well, they’re ridiculous (and taking a joke too far often ruins the humour). Sometimes, though, the idea is just too tempting to leave alone.

The first of these ideas came from a SydCSS meetup:

For the rest of the evening, a friend and I would whisper “that’s Numberwang!” whenever a bunch of numbers appeared on a presentation slide. Yes, we’re far too easily amused.

Wait, what the hell is Numberwang?

Numberwang started as a series of sketches on the TV show That Mitchell and Webb Look. The basic premise is a number-focused game show that makes no sense to viewers, but everyone on the show knows exactly what’s happening. I highly recommend watching the full collection of sketches.

The next morning on the train I whipped up a super-quick prototype of a browser extension that would randomly exclaim “That’s Numberwang!” when you typed a number on a web page. I let it linger for a few months, then another conversation spurred me to write it properly. Here’s a video of it in action:

It works in Chrome and Firefox, but I’m not planning on publishing it to the Chrome Web Store or Mozilla Add-ons any time soon, because jokes have limits. I was running it in my day-to-day browser for a while to see if it became really annoying. In the end, it felt like a form of Russian Roulette (albeit one with far less severe consequences). Every time the notification popped up, I’d wonder if the next one would be the event that triggered a full-page rotating animated GIF while I was using my laptop on a crowded train.

Still, the value of even silly and frivolous side projects is in learning new things. In this particular case I learned:

  • How to use browser notifications.
  • How much compatibility there is these days between Chrome and Firefox extension APIs, thanks to the WebExtensions initiative.
  • How to efficiently parse and cache all numbers found in an input, so that only changed numbers would trigger the notification (hooray for ES6 Maps and their ability to store DOM elements as keys — there’s a rough sketch after this list).
  • How many people in my immediate social network do or do not know about Numberwang.
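
That Map-based caching boils down to something like this (a rough illustration of the idea, not the extension’s actual code):

const seenNumbers = new Map(); // key: DOM element, value: Set of number strings seen so far

function newNumbersIn(element) {
    const text = element.value || element.textContent || '';
    const current = new Set(text.match(/\d+/g) || []);
    const previous = seenNumbers.get(element) || new Set();
    const fresh = [...current].filter((n) => !previous.has(n));
    seenNumbers.set(element, current);
    return fresh; // only newly-typed numbers are candidates for “That’s Numberwang!”
}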

Completely worth it.

My ❤️/👿 relationship with emoji

Those who know me well (or even for an hour) know that I’m a grumpy old man on the inside. Emoji characters are just one of many topics I could rant about for a while, but that’s a post for another day. So when Ben Buchanan tweeted this…

…I knew he was on to a winning idea.

I had a tremendous amount of fun working with the Web Speech API and Web Audio API to produce a prototype, then whipped up a quick site to allow anyone to play with it.

The result is Emoji Censor, which will also redact (black out) the emoji characters visually, to match the audio censorship.

Screenshot of Emoji Censor site

Once you get into the mindset of every emoji being mentally swapped for a censorship bleep, social media sites become much funnier to read. (Especially those incredibly condescending “clappy” tweets THAT 👏 LOOK 👏 LIKE 👏 THIS.)

As a bonus, I then integrated Monica Dinculescu’s emoji-translate library for extra bleeping fun. This way you can convert a bunch of English words to their approximate emoji equivalents, which also get censored.

Although this was yet another distraction from what I was meant to be doing, I still managed to learn:

  • How to use speech synthesis in browsers.
  • The history and commonly-used audio frequency of censorship bleeps.
  • What defines an “emoji character” (hint: like everything to do with languages, it’s complicated).
  • Even on silly side projects, you can still end up making valuable contributions to open source libraries.

Follow-up

After this blog post was first published, I got to present a 5-minute lightning talk at the SydJS meetup about building Emoji Censor. It was even recorded, which is unusual for my 5-minute rants:

After that talk, I had enough people ask me about making an Emoji Censor browser extension that I figured it was worth doing. So, live redacting of emoji characters as you browse the web is now available as an extension for Chrome and Firefox. Enjoy!

Embrace the ridiculous

During these projects, I was ruminating on a chat I had a while ago with Tim Holman. It was innocuous enough that I doubt he remembers it, but it stuck with me nonetheless. I used to feel a bit guilty about making silly projects like these—like somehow I was wasting time that should be used for “proper” projects. Tim indirectly taught me to embrace the whimsical distractions that inevitably pop up (as you’d expect from the author of elevator.js, console.frog, and BSOD.js).

There’s still enormous value in making things that have no real practical use. Not only do you find yourself learning new and unexpected things, but you end up feeling more refreshed and willing to go back to “serious” ideas. A palate cleanser for the mind, in effect. Inspired by Tim, I followed through on these two ideas. I’ve also made all my frivolous projects the first things listed on my coding portfolio site at gilmoreorless.github.io.

To finish off this post, here’s one final thought that only occurred to me while writing it. Emoji Censor changes the length of the audio bleep for combination sequences. For example, the sequence of {U+1F469 WOMAN} {U+200D ZERO-WIDTH JOINER} {U+1F467 GIRL} {U+200D ZERO-WIDTH JOINER} {U+1F466 BOY} (or 👩{ZWJ}👧{ZWJ}👦) is designed to be rendered as one single glyph: 👩‍👧‍👦 (whether it displays as 1 or 3 glyphs depends on your device). Emoji Censor will play this sequence as a bleep about 3 times longer than normal. Theoretically you could craft a sequence of emoji that produces either short or long bleeps and create a hidden message in Morse code. I’ll leave that as an exercise for the reader.
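
For what it’s worth, the length calculation can be thought of as counting the parts joined together by ZWJ characters — a minimal sketch (not necessarily how Emoji Censor actually implements it):

const ZWJ = '\u200D'; // ZERO-WIDTH JOINER

function bleepUnits(sequence) {
    return sequence.split(ZWJ).length;
}

bleepUnits('👩‍👧‍👦'); // → 3, so the bleep plays roughly 3 times longer than for a single emoji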

Redirecting GitHub Pages after renaming a repository

One of the golden rules I try to follow in web development is “don’t break URLs”. That is, even if pages get moved around, or a site is completely revamped, the old URLs should still redirect somewhere. It’s a terrible experience to bookmark a page or follow a link from another site, only to find that the page has completely vanished. Let’s try not to make the problem of link rot even worse.

The best way to redirect old pages is to get the web server to send a 301 Moved Permanently HTTP header. This has the benefit that search engines will see the 301 header and automatically update their caches to reference the new URL. Where this can fall down is when using static site hosting such as GitHub Pages or Surge.sh. By design, you’re not allowed to send any web server directives such as custom HTTP headers, so the 301 redirect option is out.

So what do you do when you want to move some URLs around when using GitHub Pages? Here are the options I found.

Moving a page within the same repository

Let’s say you have a repository with a gh-pages branch (or equivalent publishing branch) and you want to move/rename a page within that branch. For example, going from /oldurl.html to /newurl.html. There are two options for keeping the old URL alive as a redirect, as helpfully described by GitHub’s documentation.

Option 1: Running with Jekyll

If you’re using Jekyll for static site publishing, GitHub allows you to install the jekyll-redirect-from plugin. In your new page’s metadata, you specify which URLs it will redirect from. This will generate an HTML page at the old URL that simply contains a redirect <meta> element, pointing to the new URL. I haven’t used Jekyll much, so I can’t tell you much more than the documentation.
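
As I understand the plugin, usage is roughly this: the front matter of the page at the new URL lists the old URLs it replaces, and Jekyll generates the redirect stubs for you.

---
title: My page
permalink: /newurl.html
redirect_from:
  - /oldurl.html
---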

Option 2: Hyding from Jekyll

For everything else, you can manually create an HTML page that does the equivalent of the Jekyll plugin.

  1. First, move the file to the new location in git and commit that, so that the git history of the file remains intact: git mv oldurl.html newurl.html; git commit
  2. Create a new HTML file at the old location, containing the important meta redirect definition: <meta http-equiv="refresh" content="0; url=/newurl.html">
  3. Commit the new file, push both commits, and you now have an old-to-new redirect in place.

Waaaaaait a second, redirect meta tags ⁉️

Errr, yep. Even though the W3C recommends that redirect meta tags are not used, they’re pretty much the only option for this scenario. This is one of the big downsides of static site hosting. However, today’s browsers optimise those redirects pretty well. For accessibility purposes (and as a back-up if the browser fails to auto-redirect) it’s still best to include some text explaining the redirect, with a link to the new page. This is the boilerplate I’ve used previously:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf8">
    <meta http-equiv="refresh" content="0; url=https://newurl/">
    <link rel="canonical" href="https://newurl/">
    <title>This page has moved</title>
</head>
<body>
    <p>This page has moved. Redirecting you to <a href="https://newurl/">https://newurl/</a>&hellip;</p>
</body>
</html>

I’ve also got a canonical link in there. Even though search engines can usually understand the refresh meta instruction, there’s no guarantee that they’ll update their caches to point to the new URL. Adding a canonical link definition gives an extra hint to update to the new page.

Renaming an entire repository

GitHub allows you to rename a git repository and will automatically redirect references from the old name to the new one. This works for links to the web UI, API requests, git command line operations… but not GitHub Pages. Which means that if you have your documentation publicly published at https://your-username.github.io/old-repo-name/ and you rename the repository, none of the publicly cached references to your docs will redirect to the equivalent page at https://your-username.github.io/new-repo-name.

Note: If you’re using a custom domain name (e.g. my-superb-docs.com), this should be handled without your intervention. The public URL doesn’t change at all, just the source location of the files. For everything else, however, there’s a workaround available.

GitHub allows you to create public websites in two ways. One has already been featured above, which is having a specific part of your repo (either a branch or directory) hooked up to the publishing feature. This gives you a URL of https://your-username.github.io/repo-name/ – where the repo name will always be a subdirectory of the top-level domain.

The other way is using a repository called your-username.github.io. Anything that’s pushed to the master branch in this repo will be visible at (surprise, surprise) https://your-username.github.io/, including directories. So you can publish a subdirectory to a top-level repo, or publish to the root of a different repo, and they’ll both be visible as subdirectories of the public domain.

But what about name conflicts?

  • Pushing a directory called some-project to the master branch of the your-username.github.io repo will give you a URL of https://your-username.github.io/some-project/
  • Pushing the root files to the gh-pages branch of the some-project repository will also give you a URL of https://your-username.github.io/some-project/

Through some quick testing, I found that the gh-pages branch (or equivalent) of the project repository will always override the subdirectory of the parent repo (your-username.github.io). This is what we can use to our advantage, by treating the parent repo as fallback content when renaming the project repo.

Step by step, mile by mile

To give a solid example, this is the scenario I encountered recently. I had a presentation slide deck that I’d done for a local meetup (SydCSS), with the repo name sydcss-preso-gradient-circus. Then I gave that presentation for a mini-conference (Decompress) – I wanted to publish both sets of slides (as the content differed) at a more generic URL, while still having the old URL redirect to the new one. So https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ would redirect to https://gilmoreorless.github.io/preso-gradient-circus/sydcss/. These are the steps I took:

  1. Created sydcss-preso-gradient-circus/index.html within the gilmoreorless.github.io repo. It contained the meta redirect example shown earlier, pointing to the new URL of the SydCSS version of the presentation.
  2. Copied all the files in the gh-pages branch of the sydcss-preso-gradient-circus repo to a sydcss subdirectory within the same repo. This meant the root page / and the subdirectory /sydcss/ both had the same content.
  3. Published the Decompress version of the presentation to the decompress subdirectory of the same repo.
  4. Renamed the repo in the GitHub web interface, from sydcss-preso-gradient-circus to preso-gradient-circus. At this point, any requests to https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ would hit the fallback page created in step 1. That would then redirect them to the new URL of https://gilmoreorless.github.io/preso-gradient-circus/sydcss/.
  5. With the redirect in place, I changed the index.html of the preso-gradient-circus repo to just be a simple list of links to the different versions of the presentation.

In summary, the situation right now is:

  • https://gilmoreorless.github.io/sydcss-preso-gradient-circus/ redirects (via the fallback page in the gilmoreorless.github.io repo) to https://gilmoreorless.github.io/preso-gradient-circus/sydcss/.
  • https://gilmoreorless.github.io/preso-gradient-circus/ shows a simple list of links to the different versions of the presentation.
  • https://gilmoreorless.github.io/preso-gradient-circus/sydcss/ and https://gilmoreorless.github.io/preso-gradient-circus/decompress/ hold the SydCSS and Decompress versions respectively.

It seems like a lot of effort when it’s written down like that, but the whole process only took a few minutes. It took me far longer to write this post than to set up the redirects (including the testing of fallback content). I think it was worth it; even though it was just a slide deck, I care about keeping URLs working wherever possible.

Also, this post is mainly serving as documentation for Future Me in case I ever need to do it again. Anyone else getting a benefit from it is merely a bonus.

Software development amnesia

There needs to be a name for the software development version of the Gell-Mann Amnesia effect. The full version is worth reading, but I’ll just repeat the critical parts here (emphasis mine):

You open the newspaper to an article on some subject you know well. […] You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward–reversing cause and effect. […] In any case, you read with exasperation or amusement the multiple errors in a story–and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

I’d like to propose a corollary to the Gell-Mann Amnesia effect, targeted specifically at software developers:

As an experienced software developer, you know how much work goes into delivering a new feature for a reasonably-sized product. The myriad priorities that are juggled to determine the ever-shifting sands of the roadmap. You’ve been frustrated at the angry customers demanding that their personal top priority be attended to first, under the highly mistaken assumption that their use case is everyone’s use case.

You know the weeks or months of discussion and planning that happens before any code is written. The sheer number of people who will be involved in trying to get it right. The UX research, the design iterations, the architectural concerns. You know the amount of testing that needs to happen for edge cases on different platforms, or with different configuration setups. You know the rigour with which someone in the QA team will pick up on potential problems. Depending on the size of the company, there might even be an internal roll-out first, to pick up any stray bugs. Then, and only then, can it finally go through to a public release.

You breathe a sigh of relief that the feature you’ve worked so hard for is out the door. Switching to some other piece of software you use frequently, you see an update notification. The release notes mention some small, trivial new feature that has no value for you. You exclaim, “Ugh, how can they possibly be focusing on such unimportant details when they still haven’t fixed the thing that annoys me the most?” — as if somehow this other company’s planning and implementation process is any different from yours.

You switch away from your product, and forget what you know.

Should I continue canianimate.com?

Or, shouldicontinuecanianimate.

This is an open question to the front-end development community. I have a website and want to make it better. But I need to know if it’s actually valuable before putting in more work.

Back to…

A couple of years ago, as part of building some animation prototypes, I ended up creating a small library that collates which CSS properties can be animated, and how they animate. At the suggestion of Sean Curtis, I turned that data into a site called canianimate.com. It’s inspired by caniuse.com but only for CSS animations and transitions.

I launched a basic version of the site and had grander plans for it. Ultimately I wanted to add a bunch of interactive graphs that showed better details of how different types of properties were interpolated and transitioned.

But a little while after that, I attended a conference where the theme of several talks was performant animations. All the advice was to animate only the opacity and transform properties, which matched the trend of advice coming from the wider dev community — especially those working for browser vendors.

While it was definitely good advice (and still is today), it effectively took the wind out of my sails. What was the point of putting more effort into explaining the minutiae of all the different properties’ animation rules, when the general advice is to only use two of them?

…the Future

So here we are, 2 years later. The domain is coming up for renewal and I need to decide if it’s a project worth keeping alive. But this is not a choice I can make with only a single data point.

This is where I ask for your opinion.

  1. Is canianimate.com a useful reference?
  2. If not, could it be? What would make it better?

Let me know in the comments, or ping me on Twitter – @iamnotyourbroom.

P.S. To get an indication of some of the plans I had for the site, have a look through the issue backlog. I’ve also included some of my draft sketches below.

A rough sketch of how to show the interpolation of simple number values.

A rough sketch of how to show the interpolation of colour properties.

Using Make to generate Chrome extension icons

File this under “you don’t have to use JS for everything”.

Problem

I’ve built a few Chrome browser extensions, and there’s a part of making them that I generally find rather tedious: icons. To account for all the different places where icons are used, as well as multiple screen resolutions, 6 differently-sized icon files are required. Sometimes I’ve manually exported a few of these from Photoshop for one-off projects, but my most recent extension required a few updates to the icon. This was the tipping point for adding automation.

In this post I’ll detail how I used only a few lines of a Makefile to handle all icon size conversions. Yep, Make. No JavaScript-based build system with a multitude of npm modules and seconds of overhead for every run. Sometimes the old ways are the best.

Solution

Taking advantage of some special features of Make, this is what I came up with:

iconsrc := src/icon-512.png
icondir := chrome-extension/icons
iconsizes := {16,19,38,48,128,256}
iconfiles := $(shell echo $(icondir)/icon-$(iconsizes).png)

$(icondir)/icon-%.png:
    @mkdir -p $(@D)
    convert $(iconsrc) -resize $* $@

icons: $(iconfiles)

.PHONY: icons

That’s the whole thing. Now running make icons will generate 6 icons in under a second (on my machine at least), using ImageMagick’s convert command for the actual file generation:

$ time make icons
convert src/icon-512.png -resize 16 chrome-extension/icons/icon-16.png
convert src/icon-512.png -resize 19 chrome-extension/icons/icon-19.png
convert src/icon-512.png -resize 38 chrome-extension/icons/icon-38.png
convert src/icon-512.png -resize 48 chrome-extension/icons/icon-48.png
convert src/icon-512.png -resize 128 chrome-extension/icons/icon-128.png
convert src/icon-512.png -resize 256 chrome-extension/icons/icon-256.png

real    0m0.608s
user    0m0.157s
sys     0m0.065s

There are a few bits of potentially unfamiliar syntax in there, so I’ll break it down into chunks.

iconsizes := {16,19,38,48,128,256}
iconfiles := $(shell echo $(icondir)/icon-$(iconsizes).png)

These lines set up variables that refer to the desired icon sizes and file names. The iconsizes variable holds a shell brace-expansion pattern of pixel widths. Passing it through $(shell echo ...) expands that pattern, which is how the iconfiles variable ends up as a list of icon files (chrome-extension/icons/icon-16.png chrome-extension/icons/icon-19.png ...).
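
The brace expansion itself is plain shell behaviour (in shells that support it, such as bash), so you can see the same effect directly:

$ echo chrome-extension/icons/icon-{16,19,38,48,128,256}.png
chrome-extension/icons/icon-16.png chrome-extension/icons/icon-19.png chrome-extension/icons/icon-38.png chrome-extension/icons/icon-48.png chrome-extension/icons/icon-128.png chrome-extension/icons/icon-256.png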

$(icondir)/icon-%.png:

This builds any of the desired icon files. The % symbol is a wildcard that matches any series of characters. The key thing is that this match can then be referenced in the subsequent instructions.

@mkdir -p $(@D)

Ensure that the icons directory exists before doing anything with it. Prefixing a command with @ is a way of silencing the command, so that it doesn’t appear in the output log. Make has several special variables that are available within each ruleset, based on the defined file(s). One of these is $(@D), which equates to “the directory part of the target filepath”. The target is the file being built, so in total this line just means “create the directory where the file to be built will be put, but only if it doesn’t already exist”.

convert $(iconsrc) -resize $* $@

The part where the actual resizing happens. I’ll break this down further:

  • convert $(iconsrc) — The convert command is provided by ImageMagick, and can perform a whole bunch of transformation operations on a source image. $(iconsrc) is just referring to a variable set up at the top of the Makefile, which is the single source image I made at a large size (512px wide).
  • -resize $* — Tells convert to resize the source image to a certain pixel width… but what width? This is where the $* syntax helps. It’s another special variable built in to Make, which refers to the part of the file matched by the % symbol in the target declaration. For example, if the file name is icon-38.png, the $* variable contains just 38.
  • $@ — Yet another Make special variable, this time referencing the whole target filepath. In other words, this matches the exact file being built.

Putting it all together, running make chrome-extension/icons/icon-38.png will end up calling:

convert src/icon-512.png -resize 38 chrome-extension/icons/icon-38.png

So far the file generation commands have assumed a single file being built, but I have 6 different icon sizes, and I want to build them all with a single command. That’s where this magic line comes in:

icons: $(iconfiles)

It may not look like much, but this is the part that makes the automation worthwhile. This is an empty ruleset (i.e. it has no commands to run on its own) but has a dependency on $(iconfiles). As described above, $(iconfiles) is actually a list of all the desired icon filenames. Therefore running make icons will ensure that each of the icon files is built in turn by the $(icondir)/icon-%.png ruleset above it.

.PHONY: icons

One final touch — because Make is designed to work with actual files, and there is no file named icons, it’s best to list icons under the special .PHONY target. This tells Make that it shouldn’t look for any file named icons.

In summary

The beauty of Make (that is not often replicated by build tools in other languages) is that it will only build the files that need to be built. If the source file hasn’t changed since the last run, and the generated files are all found, Make will exit early saying it has nothing to do. If one of the icon files gets deleted, running make icons will re-generate only that file. But if the source image changes, make icons will generate 6 new icons for me.
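
For example, running make icons again straight away (with nothing changed) should just report that there’s nothing to do, something like:

$ make icons
make: Nothing to be done for 'icons'.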

This may not be the “best” solution, but it’s working well for me so far.

Credits: A lot of my knowledge of how to do this came from reading Isaac Schlueter’s Makefile tutorial.

Strip analytics URL parameters before bookmarking

Have you ever found yourself bookmarking a website URL that contains a heap of tracking parameters in the URL? Generally they’re Google Analytics campaign referral data (e.g. http://example.com/?utm_source=SomeThing&utm_medium=email).

I use browser bookmarks a lot, mostly as a collection of references around particular topics. The primary source for most of my web development-related bookmarks is a variety of regular email newsletters. My current list of subscriptions is:

What many of these have in common is the addition of Google Analytics tracking parameters to the URL. This is great for the authors of the content, as they have the option to see where spikes in traffic come from. When I’m saving a link for later, though, I don’t want to keep any of the extra URL cruft. It would also be giving false tracking results if I opened a link with that URL data a year or two later.

So I wrote a quick snippet to strip any Google Analytics tracking parameters from a URL. I can run this just before bookmarking the link:

/**
 * Strip out Google Analytics query parameters from the current URL.
 * Makes bookmarking canonical links easier.
 */
(function () {
    var curSearch = location.search;
    if (!curSearch) {
        return;
    }
    var curParams = curSearch.slice(1).split('&');
    console.log('Stripping query parameters:', curParams);
    var newParams = curParams.filter(function (param) {
        return param.substr(0, 4) !== 'utm_';
    });
    if (newParams.length === curParams.length) {
        return;
    }
    var newSearch = newParams.join('&');
    if (newSearch) {
        newSearch = '?' + newSearch;
    }
    var newUrl = location.href.replace(curSearch, newSearch);
    history.replaceState({}, document.title, newUrl);
})();

I have saved this snippet in my browser’s devtools. See github.com/bgrins/devtools-snippets for more details of how these work.

However, some people prefer to use bookmarklets, without fiddling around in browser devtools. I’ve converted the above snippet into a bookmarklet as well. If you’re using a desktop browser, you can drag the following link to your bookmarks area to save it:

Strip GA URL params

Animated demonstration of using the bookmarklet link
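
If you’d rather roll your own, the conversion is essentially just squashing the same snippet into a javascript: URL — something along these lines:

javascript:(function(){var s=location.search;if(!s)return;var p=s.slice(1).split('&').filter(function(x){return x.substr(0,4)!=='utm_'});var n=p.length?'?'+p.join('&'):'';history.replaceState({},document.title,location.href.replace(s,n));})();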

LOSS: Lazy Open-Source Software

I work on open-source projects as a hobby. Writing code used to be my primary job but it hasn’t been for a few years now.

I make my hobby projects’ source code open by default. Mainly because I think that possibly someone will benefit from having it available. Primarily I write code for myself, but if someone else happens to find it beneficial, cool! I know I’ve benefited a lot from the open source projects of others in the past, so why not return the favour?

But here’s the thing. Because I’ve relegated coding to just a hobby, I’m actually terrified of something I’ve written becoming very popular. I don’t like the thought of one of my projects hitting 10000+ GitHub stars (or however else you want to measure popularity) — then getting a heap of issues and pull requests as a result. In fact, that thought scares the shit out of me. I don’t want that responsibility.

I have so many project ideas that arrive in my head far faster than I can build them. I write them all down, and watch the list grow ever larger. At some point I had to learn to accept that a huge majority of them would never even be started, let alone finished. I had to prioritise them. Over time, I realised that certain types of ideas keep getting relegated to the bottom of the list. Not the ones that would require the most work to create, but the ones that would have the largest amount of ongoing maintenance.

Thus I realised that my model for coding is not FOSS, it’s LOSS — Lazy Open-Source Software. I’m happy to write something new and try out different ideas, but coming back to maintain old projects is painful. Some people say that’s just ignoring problems, but I don’t particularly care.

My coding time is pretty much just restricted to train rides to/from the office where I work, and maybe a bit of time of an evening. Therefore I try to maximise the enjoyment I get out of that time. I fuck around with some code for a bit and make something that I like. But most of the time I like to focus on other things at home — family, non-coding hobbies, and just generally switching off.

What I’m trying to say is… actually, I have no idea what I’m trying to say. I just thought of the LOSS acronym and ran with it. I guess the underlying point I’m making is, if you’re relying on me to fix up some old open source code, I’m sorry. It’s on the list somewhere.

Accessible presentations

Recent events have had me thinking a lot about accessible presentations at conferences and user groups. Myriad factors have kicked me into considering the topic seriously. Accessibility comes in many forms, and doesn’t just apply to people with disabilities. The core consideration is: Are you excluding anyone with the way you present your content?

To start with, I work with Sean Curtis, who is Atlassian’s resident accessibility expert. Sean has, among other things, been trying to educate everyone about better colour contrast in presentation slides, for both internal and external presentations.

Then Sean and I attended A11y Camp — a one-day event in Sydney dedicated to accessibility on the web. This event guaranteed there would be people with vision impairments attending (and presenting). Therefore all presenters had been asked to audibly describe any slides they were relying on for visual impact. The key point was to not ever rely on the visuals alone. I remember one presenter really forcing himself to read out quotes and describe images, which he said was so different from his normal presentation style, but it greatly helped the audience.

Recently I had the pleasure of presenting at JSConf EU, which uses live transcription services during all the presentations. As preparation for this I was asked to send through as much material for my talk as possible ahead of time. This would assist with building transcription dictionaries for the specific technical terms used at a developer-focused conference. This got me thinking about the accessibility of my talk for more reasons than just vision impairment.

I’ve also been reading a lot lately about vestibular disorders, which can result in certain types of visual motion making people physically ill. All of these factors combined to make me think hard about how I was going to present my talk. Some of the following tips are things I’ve always done, but others I had to put more conscious effort into.

Things I try to do for a presentation

1. Ensure text and images are readable

Large text, large images, high contrast between text and background colours. This is good practice anyway, regardless of accessibility considerations. Make sure someone sitting at the back of the room can still see what’s on your slides — we’re not all young programmers with perfect eyesight. This also applies to picking colours that don’t look the same for someone with colour blindness.

2. Describe any purely visual elements

How to get this wrong: show something on your slides, then point at it and say “and this is the result” with no further explanation. While it may seem funny or obvious to a majority of your audience, you still shouldn’t assume that everyone has understood the message. Thus, if the crux of your talk or the punchline of a joke involves pointing at an animated GIF with no description, that’s excluding anyone in the audience who can’t see it or who takes longer to process visual information (due to cognitive disabilities).

I like to think of it in the same way as putting an alt attribute on an image. On a web page, if the image is purely decorative and adds nothing to the content, it’s ok to have an empty attribute (alt=""), but in all other cases the image should have a description. It’s the same with a presentation. If the image is purely decorative and adds nothing to the content, it’s ok to not mention it, but in all other cases the image should be audibly described. This doesn’t have to be a clunkily-worded halting of your verbal flow in order to say “and on screen right now is an image of a squirrel”. If done right, the description of what’s on screen can be woven into the flow and narrative of your speech without ever feeling unnatural.

3. Only share slides online if they make sense on their own

Something I’ve quoted before, but is worth repeating here:

The gist is: our slides are not the talks, our slides aid the talks we are giving. They are a visual catalyst for the things we talk about. When you see something and you hear about it at the same time it is more likely to stick. It is as simple as that.

[…] If I look at the PDF of a talk a few weeks later and only see pretty images without remembering what they meant at that time I get confused and frustrated.

— Christian Heilmann, On controversial slides, talk distribution and lack of context

Your talk is about you talking. A talk should start from the words you say, with slides being a progressive enhancement. And because they are an enhancement, they don’t always make sense on their own, devoid of context. I only put slides online if I think there’s enough information in the slides alone to get a point across. For me, this only works for in-depth technical topics such as easing or gradients (in fact, I still added an initial explanation slide to the gradients presentation specifically for people reading the slides online later).

One technique I’ve seen that I really like is putting the slides online in the form of an article with slides and text side-by-side. Each slide is presented next to the words that were spoken for that section. It’s a great adaptation of a talk from an in-person presentation to an on-screen essay, changing the format to best suit the medium in which it’s published. The two people I’ve seen consistently do this well are Maciej Cegłowski and Bret Victor.

4. Limit animations

There are many people who don’t like lots of animations whizzing around on a screen. Slides with lots of fancy animation effects can make people with vestibular disorders feel sick. Personally, I don’t want to make my audience feel ill just because of how I’ve chosen to present something. And regardless of vestibular disorders, I often find the super-amazing 3D transform effects of slide transitions quite distracting. They’re cool/cute the first couple of times, but over the course of a whole talk they just get in the way and take attention away from what the presenter is saying.

5. Don’t just take my word for it

There is also a good list of tips for accessible presentations published by the W3C.

How did I do?

For my JSConf EU talk, my slides had almost no animations. I made sure to be conscious of describing what was on screen (although I know I failed at a couple of points). But I know I won’t always get these tips 100% right. I’m not perfect and I know I’ll make mistakes, but I’ll keep trying to improve.

Also, these tips aren’t necessarily going to work for every presentation. For example, a talk about colour blending is inherently visual and would be very hard to describe to someone who doesn’t see colours. But it’s worth trying to be as inclusive as possible.

Stop “winning” everything

It’s time for a Grumpy Old Man rant – the kind that I occasionally unleash at the poor people I work with, but that I rarely put the effort into typing for public consumption.

“You’ve won the internet.”

“This is it. I’ve found the best gif. EVER!”

“This wins the prize for best sign.”

“World’s best pull request description.”

Why must everything be constantly turned into a competition? Why can’t we just enjoy something on its own merits without frothing with hyperbole about its place in the annals of human history? By constantly over-inflating the importance of every tiny insignificant thing, we produce a false economy. When everyone and everything is special, nothing is.

Not everything is special. Not everything is the best. And that’s perfectly fine – in fact it’s desired. Life is not purely black and white, it is full of nuances and shades of grey (far more than 50). Without lows as a counter-balance, the highs have no reference and no meaning. But instead of embracing that, we seem to be desperately polarising and categorising all that we encounter.

We are creating a false dichotomy. It has become more and more common to treat things we see on the internet as a binary state: “It must be best in class, even if we have to invent a new class for it to be best of” versus “It’s not even worth my time to look at it”. It’s like George W. Bush declaring “You’re either with us or against us.”

By all means, enjoy things that are amusing. But seeing someone write something clever doesn’t mean they’ve “won the internet.” (Quite frankly, the internet’s been won so many times that it’s surely just a faded hand-me-down prize by now.) Enjoy it for what it is – an isolated moment in time, unencumbered by the need to measure and compare it.

Step away from the white spotlight and the black abyss, and come join me over here in the grey areas. (Just don’t get me started on the hyperbole of sports commentators.)

Responsive graphs and custom elements

A few weeks ago I published a statistical analysis of one-day cricket – specifically a verification of the old “double the 30-over score” trope.

This blog post is not about cricket – you can read the link above for that. This is a post about the technology behind that analysis. Ben Hosken from Flink Labs says around 60% of a data visualisation project is just collecting and cleaning the data – and so it was with this analysis. What was originally a “quick distraction” became my coding-on-the-train hobby project for 3 weeks. Here’s how I built it.

Data

All data came from Cricinfo, thanks to their ball-by-ball statistics. I won’t go into much detail about how I got the data because I don’t know what their data usage guidelines are (I looked). Suffice it to say that I ended up with over 7000 JSON files on my machine.

From these I wrote a series of Python scripts to parse, verify, clean up, analyse and collate the data for each innings. The final output was a single JSON file containing only the relevant data I needed.

Custom element

In order to fit the graphs neatly into the context of a written article, I wanted to be able to embed them as part of the writing process. The article (source on GitHub) was written using Markdown with some custom additions (which are described later). Markdown was used despite my personal annoyances with some aspects of it, mainly because it’s widely supported.

The easiest way to put a custom graph into the article was to plonk some HTML in the Markdown source and let it be copied across to the rendered output – but what HTML? I needed containers for SVG graphs that would be rendered later via JavaScript. Given that there would be a few graphs, the containers needed to define customisations per graph, preferably as part of the article source (for better context when writing).

The usual choice for something like this would be a plain old <div> with lots of data- attributes, one for each option. Unfortunately this didn’t let me easily tweak the options after rendering, which I consider important for data explorations so that you can get the best possible view of the data.

Given the one-off nature of the article, I had the opportunity to use “new shiny” tech in the form of a custom element, part of the new Web Components specification. Obviously proper custom elements aren’t widely supported across browsers yet, so I used the excellent Skate library as a helper (disclaimer: I work on the same team as Skate’s author).

HTML API

Using a custom element meant that my in-context API was simple HTML attributes. I quickly wrote a potential API in a comment before I’d written any code for it.

/**
 * <odi-graph> custom element.
 * ___________________________
 *
 * Basic usage:
 *
 *   <odi-graph>Text description as a fallback</odi-graph>
 *
 * Attributes (all are optional):
 *
 *   <odi-graph graph-title="This is a graph"></odi-graph>
 *     Add a title to the graph
 *
 *   <odi-graph rolling-average="true"></odi-graph>
 *     Include a rolling average line (off by default)
 *
 * [... etc.]
 */

This allowed me to write the article with in-context HTML like so:

In order to find the answer, I graphed out a 100-innings rolling average, to give a better indication of trends over time.

<odi-graph graph-title="Overall vs rolling average"
    rolling-average="true" innings-points="false">
    IMAGE: A graph showing a rolling average halfway mark as described in the next paragraph.
</odi-graph>

Thanks to Skate, the initial definition of the custom element and its attributes was very simple.

skate('odi-graph', {
    attached: function (elem) {
        // Create internal DOM structure
        // [... SNIP: nothing but basic DOM element manipulation here ...]

        // Create the graph
        elem.graph = new ODIGraph();
        queue(elem, function () {
            elem.graph.init(cloneData(stats.data), configMapper(elem));
            onceVisible(elem, function () {
                elem.graph.render(elem);
            });
        });
    },
    attributes: {
        'graph-title': attrSetter,
        'rolling-average': attrSetter,
        'innings-points': attrSetter,
        'ybounds': attrSetter,
        'date-start': attrSetter,
        'date-end': attrSetter,
        'filter': attrSetter,
        'highlight': attrSetter,
        'reset-highlight-averages': attrSetter
    }
});

function attrSetter(elem, data) {
    if (data.newValue !== data.oldValue) {
        if (elem.graph && elem.graph.inited) {
            elem.graph.config(configMapper(elem));
        }
    }
}

The queue and onceVisible calls were just making sure that the graph wasn’t rendered too early, before the data finished loading. The key part is that it just calls out to a separate ODIGraph instance (described later on in this post) for separation of responsibilities. The attrSetter method made sure that the graph was re-rendered if any HTML attribute changed, after running the attributes through configMapper which just parsed the attribute strings into a JS config object.
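
configMapper isn’t shown above, but the idea is simple enough — roughly something like this (the attribute parsing details are illustrative, not the exact code):

function configMapper(elem) {
    function bool(name, defaultValue) {
        var value = elem.getAttribute(name);
        return value === null ? defaultValue : value === 'true';
    }
    return {
        title: elem.getAttribute('graph-title') || '',
        rollingAverage: bool('rolling-average', false),
        inningsPoints: bool('innings-points', true),
        dateStart: elem.getAttribute('date-start'),
        dateEnd: elem.getAttribute('date-end')
        // ...and so on for the other attributes
    };
}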

This gave me the enormous benefit of being able to just tweak attributes for a graph in browser devtools and see the graph instantly update.

Editing a graph element's HTML attributes instantly updates the SVG internals

Accessibility

As shown above, all graphs were written with some text inside the custom element. During the initialisation of the custom element, a <figure> element is created inside it, then the custom element’s original text content is moved to a hidden <figcaption>. This provides a description that is read out by screenreaders for those users who can’t see the graph visuals.
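
A minimal sketch of that setup (the real code differs in the details; the caption is hidden visually via CSS):

function wrapDescription(elem) {
    var description = elem.textContent; // the fallback text written in the article source
    elem.textContent = '';

    var figure = document.createElement('figure');
    var caption = document.createElement('figcaption');
    caption.textContent = description;
    caption.className = 'hidden-caption'; // visually hidden, but still read by screenreaders

    figure.appendChild(caption);
    elem.appendChild(figure);
    return figure; // the SVG graph gets rendered into this element
}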

An additional aspect of providing descriptive captions was making sure that the article text immediately after the graph described an interpretation of the graph data. Doing this ensured that no information was presented only via visuals.

For the finishing touches I added the aria-hidden="true" attribute to the axis and legend text in the graph. Since the graph was built with SVG, all the text elements would be read by screenreaders but without context. (After all, what does “15 20 25 30” mean without the visual positioning context of being on an axis?)
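
In D3 terms that can be as simple as this (the class names here are illustrative):

d3.selectAll('odi-graph .axis text, odi-graph .legend text')
    .attr('aria-hidden', 'true');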

Responsive graph

For the actual graphs, I wanted them to be interactive, mobile-friendly, and (hopefully) not much work.

C3

I started off by using the C3 library for ease of development. It initially ticked a lot of boxes:

  • I could quickly get up and running with my existing data set.
  • It was responsive and auto-expanded to whatever size container it was put into.
  • Auto-scaling and highlighting of data points (with a vertical hover marker) came for free.

I was able to look at my data set in a few different ways with just some simple config tweaks. However, as I dug deeper into what I wanted the graphs to do, I found myself fighting against the library instead of working with it.

As an example, I wanted to combine a scatter plot for individual innings with one or more line plots for the averages. While this was possible with C3, it started to become an either/or situation. C3 gave me a vertical marker line when hovering anywhere on the graph (which is what I wanted), but only when I had just line plots – as soon as the scatter plot was added, the nice hover behaviour disappeared. The way the hover marker was implemented also started to fall down – it worked fine with around 100 data points, but as soon as the full data set of over 1800 points was added, the hovering stopped working.

Added to this was a performance problem of C3 with the size of my data set. In order to hide data point markers on the line plots, C3 adds an SVG DOM node per point then hides them via setting their opacity to 0. It does this so that they can be faded in with an animation (if they’re set to visible via an API call). In my case, this meant over 3500 DOM nodes were being created just to be hidden.

In the end I decided that while C3 is a nice charting library, it wasn’t set up for my use case. I wrote out a list of all the things I could think of that I would possibly want the graphs to do:

  • Combine scatter and line plots on the same graph.
  • Vertical marker to highlight data points when hovering, auto-snapping the line to the nearest data point to the mouse cursor.
  • Fully custom rendering of a tooltip when highlighting a data point.
  • Resize when the graph’s container size changes.
  • Plot data along the X axis by dates (C3 has an API for this, but it didn’t work with my data).
  • Define a title.
  • Optionally add background highlight regions for grouping data points, with each region having its own title.
  • Show a horizontal background line marking 30 overs as a reference.
  • Have good performance – create only as many DOM nodes as necessary for displaying the data.

After quickly looking at some other charting libraries, I bit the bullet and accepted that I was going to have to roll my own.

Custom rendering

Note: All the code for the graphs can be found on GitHub. It changed structure many times as extra requirements turned up, so it’s far from perfect. I didn’t see the need for lots of refactoring, given its one-time-use purpose.

I won’t go into a super-detailed explanation of the code, as most of it is just calling D3 APIs. Instead I’ll detail the high-level concepts.

Most of the contents can be broken down into three main areas of responsibility: setup, sizing, and display.

First is setup. This is only called once per graph on initialisation. The only thing setup code is concerned with is setting up D3 axis components, creating placeholder DOM nodes and adding event listeners.

Second is sizing. This takes DOM nodes created in the setup phase and sets their positioning and dimensions based on the current size of the graph container. The ranges of D3’s axis and scale components are also defined here.

Finally there’s display. This is where the main rendering takes place. The data array that’s passed in is rendered according to a few constraints:

  1. The options already set – filtering, highlight regions, etc.
  2. The dimensions and positioning defined in the sizing phase.

This kind of structure means that when a graph is first created, it calls the setup, sizing and display phases in succession. From then on, resizing the browser window calls straight into sizing and display, meaning the graphs will always fit into the current window (after a short delay due to debouncing). Finally, if graph.data(someNewData) is called, only the display phase is called because the sizing hasn’t changed.
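
Boiled right down, the structure looks something like this (a structural sketch only — the real code is in the repo and far more involved):

function makeGraph(container, initialData) {
    var data = initialData;

    function setup()   { /* D3 axis components, placeholder DOM nodes, event listeners */ }
    function sizing()  { /* positions, dimensions and scale ranges from the container size */ }
    function display() { /* render `data` within the constraints set by sizing() */ }

    // First render runs all three phases in order.
    setup();
    sizing();
    display();

    // Window resizes only need sizing + display (debounced in the real code).
    window.addEventListener('resize', function () {
        sizing();
        display();
    });

    return {
        // New data only needs the display phase, because the sizing hasn't changed.
        data: function (newData) {
            data = newData;
            display();
        }
    };
}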

Tweaks for better responsiveness

Although the main data rendering was now responsive to any window size, I realised that some of the text elements didn’t work at small screen sizes. The different types of text required different solutions.

Graph titles could be highly variable in width, as the text is completely different per graph. I ended up with a solution that allowed me to specify different titles in the one attribute – a “primary” title and a shorter variant to be used only if the primary one didn’t fit.

I picked a couple of arbitrary unicode characters that were not going to appear anywhere else, and could be easily typed on my Mac keyboard. One character (· or Option+Shift+9) was used to define a cut-off point in a single title – if the full title didn’t fit in the graph, then only the text up to the cut-off point would be used. The other character (¬ or Option+l (lowercase L)) marked two completely different titles.

Examples from the code comments give a better idea of how they were used:

/**
 * Formatting for titles
 *
 *   `graph-title` attribute and highlight region names can indicate short vs long content.
 *   The short content will be used if the long version can't fit in the space available.
 *   « = short version; » = long version
 *
 *   "This is a title"
 *     « This is a title
 *     » This is a title
 *
 *   "Short title· with some extra"
 *     « Short title
 *     » Short title with some extra
 *
 *   "Short title¬Completely different title"
 *     « Short title
 *     » Completely different title
 */
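
The parsing logic boils down to a few string splits — roughly this (a sketch, not the exact implementation):

function titleVariants(raw) {
    if (raw.indexOf('¬') !== -1) {
        var parts = raw.split('¬');
        return { short: parts[0], long: parts[1] };
    }
    if (raw.indexOf('·') !== -1) {
        return {
            short: raw.split('·')[0],
            long: raw.replace('·', '')
        };
    }
    return { short: raw, long: raw };
}

titleVariants('Short title· with some extra');
// → { short: 'Short title', long: 'Short title with some extra' }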

The other text that didn’t work at small sizes was the legend at the bottom of the graphs. Initially I had added the legend as part of the same SVG element as the graph to keep everything together. I played with some different ways of trying to make the parts of the legend wrap nicely at smaller sizes, but this was one area where SVG worked against me. In the end I realised I was trying to replicate with SVG groups and transforms what CSS gave me for free. I ended up creating one SVG element per legend type, setting display: inline-block on them and letting CSS handle the rest.

Here is a comparison of how the title and legend worked at different graph sizes.

A wide graph with full title text

A narrow graph with reduced title text and wrapped legend

Final touches

With the core functionality done and the graphs finally doing what I wanted, it was time to add in some polish.

Only render when visible

There ended up being 7 graphs in the article. Rendering them all on page load would be a noticeable drag on performance, with over 5000 DOM nodes being inserted into the page.

Instead I used the Verge library to detect when a graph’s custom element became visible for the first time (using a debounced scroll event listener). Once the element becomes visible, the graph setup/sizing/render methods are called for the first time. This makes the initial page load snappy, and also stops the browser from having to render any graph that is never viewed.
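
The general shape of it is something like this (a rough sketch — the real code hooks into the custom element setup shown earlier):

var verge = require('verge');

function debounce(fn, wait) {
    var timer;
    return function () {
        clearTimeout(timer);
        timer = setTimeout(fn, wait);
    };
}

var pending = Array.prototype.slice.call(document.querySelectorAll('odi-graph'));

window.addEventListener('scroll', debounce(function () {
    pending = pending.filter(function (elem) {
        if (!verge.inViewport(elem)) {
            return true;         // not visible yet, keep waiting
        }
        elem.graph.render(elem); // first render only happens once the graph is on screen
        return false;            // rendered, stop tracking this one
    });
}, 100));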

Customised Markdown

The article was written using basic Markdown syntax, but with a couple of custom additions. One addition was for adding numeric callouts that appear on the side (using a syntax of ++1234++). The other addition was a helper for easily adding links to specific matches on Cricinfo. The Markdown source was read by a quick-and-dirty Python script, run through a Markdown renderer, then run through extra string replacements for the custom additions. Finally the HTML output was inserted into a basic page template to produce the final article.

So that’s how it was made. Enjoy.