New projects: Emoji Censor and Numberwang

I’ve generally been rather lax when it comes to writing about side projects I’ve done. I’ll finish the project, mention it on Twitter a couple of times, then… well, that’s about it. Not a particularly effective way to promote ideas.

Time to do something about it. From now on, I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

Take me down to Distraction City, where the… oooh shiny!

I have a history of talking with people at tech user groups and meetups and coming up with ridiculous ideas for side projects. Mostly the ideas go no further than that because, well, they’re ridiculous (and taking a joke too far often ruins the humour). Sometimes, though, the idea is just too tempting to leave alone.

The first of these ideas came from a SydCSS meetup.

For the rest of the evening, a friend and I would whisper “that’s Numberwang!” whenever a bunch of numbers appeared on a presentation slide. Yes, we’re far too easily amused.

Wait, what the hell is Numberwang?

Numberwang started as a series of sketches on the TV show That Mitchell and Webb Look. The basic premise is a number-focused game show that makes no sense to viewers, but everyone on the show knows exactly what’s happening. I highly recommend watching the full collection of sketches.

The next morning on the train I whipped up a super-quick prototype of a browser extension that would randomly exclaim “That’s Numberwang!” when you typed a number on a web page. I let it linger for a few months, then another conversation spurred me to write it properly. Here’s a video of it in action:

Continue reading

Redirecting GitHub Pages after renaming a repository

One of the golden rules I try to follow in web development is “don’t break URLs”. That is, even if pages get moved around, or a site is completely revamped, the old URLs should still redirect somewhere. It’s a terrible experience to bookmark a page or follow a link from another site, only to find that the page has completely vanished. Let’s try not to make the problem of link rot even worse.

The best way to redirect old pages is to get the web server to respond with a 301 Moved Permanently HTTP status. This has the benefit that search engines will see the 301 response and automatically update their indexes to reference the new URL. Where this can fall down is when using static site hosting such as GitHub Pages or Surge.sh. By design, you’re not allowed to set any web server directives such as custom HTTP headers, so the 301 redirect option is out.

So what do you do when you want to move some URLs around when using GitHub Pages? Here are the options I found.
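To give a flavour of what’s possible without server directives, here’s a sketch of one common option: a stub page left at the old URL that points browsers and crawlers at the new location (placeholder URLs, and not necessarily the option I settled on):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Redirecting…</title>
    <!-- Tell search engines where the canonical page now lives -->
    <link rel="canonical" href="https://example.com/new-path/">
    <!-- Send browsers straight to the new URL -->
    <meta http-equiv="refresh" content="0; url=https://example.com/new-path/">
</head>
<body>
    <p>This page has moved to <a href="https://example.com/new-path/">a new address</a>.</p>
</body>
</html>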

Continue reading

Software development amnesia

There needs to be a name for the software development version of the Gell-Mann Amnesia effect. The full version is worth reading, but I’ll just repeat the critical parts here (emphasis mine):

You open the newspaper to an article on some subject you know well. […] You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward–reversing cause and effect. […] In any case, you read with exasperation or amusement the multiple errors in a story–and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

I’d like to propose a corollary to the Gell-Mann Amnesia effect, targeted specifically at software developers:

As an experienced software developer, you know how much work goes into delivering a new feature for a reasonably-sized product. You know the myriad priorities that are juggled to determine the ever-shifting sands of the roadmap. You’ve been frustrated at angry customers demanding that their personal top priority be attended to first, under the highly mistaken assumption that their use case is everyone’s use case.

You know the weeks or months of discussion and planning that happens before any code is written. The sheer number of people who will be involved in trying to get it right. The UX research, the design iterations, the architectural concerns. You know the amount of testing that needs to happen for edge cases on different platforms, or with different configuration setups. You know the rigour with which someone in the QA team will pick up on potential problems. Depending on the size of the company, there might even be an internal roll-out first, to pick up any stray bugs. Then, and only then, can it finally go through to a public release.

You breathe a sigh of relief that the feature you’ve worked so hard for is out the door. Switching to some other piece of software you use frequently, you see an update notification. The release notes mention some small, trivial new feature that has no value for you. You exclaim, “Ugh, how can they possibly be focusing on such unimportant details when they still haven’t fixed the thing that annoys me the most?” — as if somehow this other company’s planning and implementation process is any different from yours.

You switch away from your product, and forget what you know.

Should I continue canianimate.com?

Or, shouldicontinuecanianimate.

This is an open question to the front-end development community. I have a website and want to make it better. But I need to know if it’s actually valuable before putting in more work.

Back to…

A couple of years ago, as part of building some animation prototypes, I ended up creating a small library that collates which CSS properties can be animated, and how they animate. At the suggestion of Sean Curtis, I turned that data into a site called canianimate.com. It’s inspired by caniuse.com but only for CSS animations and transitions.

I launched a basic version of the site and had grander plans for it. Ultimately I wanted to add a bunch of interactive graphs that showed better details of how different types of properties were interpolated and transitioned.

But a little while after that, I attended a conference where the theme of several talks was performant animations. All the advice was to animate only the opacity and transform properties, which matched the trend of advice coming from the wider dev community — especially those working for browser vendors.

While it was definitely good advice (and still is today), it effectively took the wind out of my sails. What was the point of putting more effort into explaining the minutiae of all the different properties’ animation rules, when the general advice is to only use two of them?

…the Future

Continue reading

Using Make to generate Chrome extension icons

File this under “you don’t have to use JS for everything”.

Problem

I’ve built a few Chrome browser extensions, and there’s a part of making them that I generally find rather tedious: icons. To account for all the different places where icons are used, as well as multiple screen resolutions, 6 differently-sized icon files are required. Sometimes I’ve manually exported a few of these from Photoshop for one-off projects, but my most recent extension required a few updates to the icon. This was the tipping point for adding automation.

In this post I’ll detail how I used only a few lines of a Makefile to handle all icon size conversions. Yep, Make. No JavaScript-based build system with a multitude of npm modules and seconds of overhead for every run. Sometimes the old ways are the best.

Solution

Taking advantage of some special features of Make, this is what I came up with:
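The full Makefile follows in the rest of the post, but the core idea is a pattern rule plus computed file names, roughly like this (a sketch only: it assumes ImageMagick’s convert and a source image at src/icon.png of at least 128px, and the six sizes here are a plausible Chrome extension set):

# Sketch – assumes ImageMagick and a large source image at src/icon.png
SIZES := 16 19 32 38 48 128
ICONS := $(addprefix icons/icon-,$(addsuffix .png,$(SIZES)))

all: $(ICONS)

# Pattern rule: $* is the size captured from the target file name.
# (Recipe lines must be indented with a tab.)
icons/icon-%.png: src/icon.png
	mkdir -p icons
	convert $< -resize $*x$* $@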

Continue reading

Strip analytics URL parameters before bookmarking

Have you ever found yourself bookmarking a website URL that contains a heap of tracking parameters in the URL? Generally they’re Google Analytics campaign referral data (e.g. http://example.com/?utm_source=SomeThing&utm_medium=email).

I use browser bookmarks a lot, mostly as a collection of references around particular topics. The primary source for most of my web development-related bookmarks is a variety of regular email newsletters I subscribe to.

What many of these have in common is the addition of Google Analytics tracking parameters to the URL. This is great for the authors of the content, as they have the option to see where spikes in traffic come from. When I’m saving a link for later, though, I don’t want to keep any of the extra URL cruft. It would also give false tracking results if I opened a link with that URL data a year or two later.

So I wrote a quick snippet to strip any Google Analytics tracking parameters from a URL. I can run this just before bookmarking the link:
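The snippet itself follows in the full post, but the essence is something like this (a minimal sketch using the standard URL API, not necessarily the exact code):

function stripAnalyticsParams(url) {
    var parsed = new URL(url);
    // Collect keys first – deleting while iterating would skip entries
    var trackingKeys = [];
    parsed.searchParams.forEach(function (value, key) {
        if (/^utm_/i.test(key)) {
            trackingKeys.push(key);
        }
    });
    trackingKeys.forEach(function (key) {
        parsed.searchParams.delete(key);
    });
    return parsed.href;
}

// stripAnalyticsParams('http://example.com/?utm_source=SomeThing&utm_medium=email')
//   -> 'http://example.com/'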

Continue reading

LOSS: Lazy Open-Source Software

Update, June 2020: Some details of this post were accurate at the time it was written, but aren’t accurate right now. Writing code is sometimes my primary job again (depending on the role). However, the main message still holds true.


I work on open-source projects as a hobby. Writing code used to be my primary job but it hasn’t been for a few years now.

I make my hobby projects’ source code open by default, mainly because someone might benefit from having it available. Primarily I write code for myself, but if someone else happens to find it useful, cool! I know I’ve benefited a lot from the open-source projects of others in the past, so why not return the favour?

But here’s the thing. Because I’ve relegated coding to just a hobby, I’m actually terrified of something I’ve written becoming very popular. I don’t like the thought of one of my projects hitting 10000+ GitHub stars (or however else you want to measure popularity) — then getting a heap of issues and pull requests as a result. In fact, that thought scares the shit out of me. I don’t want that responsibility.

I have so many project ideas that arrive in my head far faster than I can build them. I write them all down, and watch the list grow ever larger. At some point I had to learn to accept that a huge majority of them would never even be started, let alone finished. I had to prioritise them. Over time, I realised that certain types of ideas keep getting relegated to the bottom of the list. Not the ones that would require the most work to create, but the ones that would have the largest amount of ongoing maintenance.

Thus I realised that my model for coding is not FOSS, it’s LOSS — Lazy Open-Source Software. I’m happy to write something new and try out different ideas, but coming back to maintain old projects is painful. Some people say that’s just ignoring problems, but I don’t particularly care.

My coding time is pretty much just restricted to train rides to/from the office where I work, and maybe a bit of time of an evening. Therefore I try to maximise the enjoyment I get out of that time. I fuck around with some code for a bit and make something that I like. But most of the time I like to focus on other things at home — family, non-coding hobbies, and just generally switching off.

What I’m trying to say is… actually, I have no idea what I’m trying to say. I just thought of the LOSS acronym and ran with it. I guess the underlying point I’m making is, if you’re relying on me to fix up some old open source code, I’m sorry. It’s on the list somewhere.

Accessible presentations

Recent events have had me thinking a lot about accessible presentations at conferences and user groups. Myriad factors have kicked me into considering the topic seriously. Accessibility comes in many forms, and doesn’t just apply to people with disabilities. The core consideration is: Are you excluding anyone with the way you present your content?

To start with, I work with Sean Curtis, who is Atlassian’s resident accessibility expert. Sean has, among other things, been trying to educate everyone about better colour contrast in presentation slides, for both internal and external presentations.

Then Sean and I attended A11y Camp — a one-day event in Sydney dedicated to accessibility on the web. This event guaranteed there would be people with vision impairments attending (and presenting). Therefore all presenters had been asked to audibly describe any slides they were relying on for visual impact. The key point was to not ever rely on the visuals alone. I remember one presenter really forcing himself to read out quotes and describe images, which he said was so different from his normal presentation style, but it greatly helped the audience.

Recently I had the pleasure of presenting at JSConf EU, which uses live transcription services during all the presentations. As preparation for this I was asked to send through as much material for my talk as possible ahead of time. This would assist with building transcription dictionaries for the specific technical terms used at a developer-focused conference. This got me thinking about the accessibility of my talk for more reasons than just vision impairment.

I’ve also been reading a lot lately about vestibular disorders, which can result in certain types of visual motion making people physically ill. All of these factors combined to make me think hard about how I was going to present my talk. Some of the following tips are things I’ve always done, but others I had to put more conscious effort into.

Things I try to do for a presentation

1. Ensure text and images are readable

Large text, large images, high contrast between text and background colours. This is good practice anyway, regardless of accessibility considerations. Make sure someone sitting at the back of the room can still see what’s on your slides — we’re not all young programmers with perfect eyesight. This also applies to picking colours that don’t look the same to someone with colour blindness.

2. Describe any purely visual elements

How to get this wrong: show something on your slides, then point at it and say “and this is the result” with no further explanation. While it may seem funny or obvious to a majority of your audience, you still shouldn’t assume that everyone has understood the message. Thus, if the crux of your talk or the punchline of a joke involves pointing at an animated GIF with no description, that’s excluding anyone in the audience who can’t see it or who takes longer to process visual information (due to cognitive disabilities).

I like to think of it in the same way as putting an alt attribute on an image. On a web page, if the image is purely decorative and adds nothing to the content, it’s ok to have an empty attribute (alt=""), but in all other cases the image should have a description. It’s the same with a presentation. If the image is purely decorative and adds nothing to the content, it’s ok to not mention it, but in all other cases the image should be audibly described. This doesn’t have to be a clunkily-worded halting of your verbal flow in order to say “and on screen right now is an image of a squirrel”. If done right, the description of what’s on screen can be woven into the flow and narrative of your speech without ever feeling unnatural.

3. Only share slides online if they make sense on their own

Something I’ve quoted before, but is worth repeating here:

The gist is: our slides are not the talks, our slides aid the talks we are giving. They are a visual catalyst for the things we talk about. When you see something and you hear about it at the same time it is more likely to stick. It is as simple as that.

[…] If I look at the PDF of a talk a few weeks later and only see pretty images without remembering what they meant at that time I get confused and frustrated.

— Christian Heilmann, On controversial slides, talk distribution and lack of context

Your talk is about you talking. A talk should start from the words you say, with slides being a progressive enhancement. And because they are an enhancement, they don’t always make sense on their own, devoid of context. I only put slides online if I think there’s enough information in the slides alone to get a point across. For me, this only works for in-depth technical topics such as easing or gradients (in fact, I still added an initial explanation slide to the gradients presentation specifically for people reading the slides online later).

One technique I’ve seen that I really like is putting the slides online in the form of an article with slides and text side-by-side. Each slide is presented next to the words that were spoken for that section. It’s a great adaptation of a talk from an in-person presentation to an on-screen essay, changing the format to best suit the medium in which it’s published. The two people I’ve seen consistently do this well are Maciej Cegłowski and Bret Victor.

4. Limit animations

There are many people who don’t like lots of animations whizzing around on a screen. Slides with lots of fancy animation effects can make people with vestibular disorders feel sick. Personally, I don’t want to make my audience feel ill just because of how I’ve chosen to present something. And regardless of vestibular disorders, I often find the super-amazing 3D transform effects of slide transitions quite distracting. They’re cool/cute the first couple of times, but over the course of a whole talk they just get in the way and take attention away from what the presenter is saying.

5. Don’t just take my word for it

There is also a good list of tips for accessible presentations published by the W3C.

How did I do?

For my JSConf EU talk, my slides had almost no animations. I made sure to be conscious of describing what was on screen (although I know I failed at a couple of points). But I know I won’t always get these tips 100% right. I’m not perfect and I know I’ll make mistakes, but I’ll keep trying to improve.

Also, these tips aren’t necessarily going to work for every presentation. For example, a talk about colour blending is inherently visual and would be very hard to describe to someone who doesn’t see colours. But it’s worth trying to be as inclusive as possible.

Stop “winning” everything

It’s time for a Grumpy Old Man rant – the kind that I occasionally unleash at the poor people I work with, but that I rarely put the effort into typing for public consumption.

“You’ve won the internet.”

“This is it. I’ve found the best gif. EVER!”

“This wins the prize for best sign.”

“World’s best pull request description.”

Why must everything be constantly turned into a competition? Why can’t we just enjoy something on its own merits without frothing with hyperbole about its place in the annals of human history? By constantly over-inflating the importance of every tiny insignificant thing, we produce a false economy. When everyone and everything is special, nothing is.

Not everything is special. Not everything is the best. And that’s perfectly fine – in fact it’s desired. Life is not purely black and white, it is full of nuances and shades of grey (far more than 50). Without lows as a counter-balance, the highs have no reference and no meaning. But instead of embracing that, we seem to be desperately polarising and categorising all that we encounter.

We are creating a false dichotomy. It has become more and more common to treat things we see on the internet as a binary state: “It must be best in class, even if we have to invent a new class for it to be best of” versus “It’s not even worth my time to look at it”. It’s like George W. Bush declaring “You’re either with us or against us.”

By all means, enjoy things that are amusing. But seeing someone write something clever doesn’t mean they’ve “won the internet.” (Quite frankly, the internet’s been won so many times that it’s surely just a faded hand-me-down prize by now.) Enjoy it for what it is – an isolated moment in time, unencumbered by the need to measure and compare it.

Step away from the white spotlight and the black abyss, and come join me over here in the grey areas. (Just don’t get me started on the hyperbole of sports commentators.)

Responsive graphs and custom elements

A few weeks ago I published a statistical analysis of one-day cricket – specifically a verification of the old “double the 30-over score” trope.

This blog post is not about cricket – you can read the link above for that. This is a post about the technology behind that analysis. Ben Hosken from Flink Labs says around 60% of a data visualisation project is just collecting and cleaning the data – and so it was with this analysis. What was originally a “quick distraction” became my coding-on-the-train hobby project for 3 weeks. Here’s how I built it.

Data

All data came from Cricinfo, thanks to their ball-by-ball statistics. I won’t go into much detail about how I got the data because I don’t know what their data usage guidelines are (I looked). Suffice it to say that I ended up with over 7000 JSON files on my machine.

From these I wrote a series of Python scripts to parse, verify, clean up, analyse and collate the data for each innings. The final output was a single JSON file containing only the relevant data I needed.

Custom element

In order to fit the graphs neatly into the context of a written article, I wanted to be able to embed them as part of the writing process. The article (source on GitHub) was written using Markdown with some custom additions (which are described later). Markdown was used despite my personal annoyances with some aspects of it, mainly because it’s widely supported.

The easiest way to put a custom graph into the article was to plonk some HTML in the Markdown source and let it be copied across to the rendered output – but what HTML? I needed containers for SVG graphs that would be rendered later via JavaScript. Given that there would be a few graphs, the containers needed to define customisations per graph, preferably as part of the article source (for better context when writing).

The usual choice for something like this would be a plain old <div> with lots of data- attributes, one for each option. Unfortunately this didn’t let me easily tweak the options after rendering, which I consider important for data explorations so that you can get the best possible view of the data.

Given the one-off nature of the article, I had the opportunity to use “new shiny” tech in the form of a custom element, part of the new Web Components specification. Obviously proper custom elements aren’t widely supported across browsers yet, so I used the excellent Skate library as a helper (disclaimer: I work on the same team as Skate’s author).

HTML API

Using a custom element meant that my in-context API was simple HTML attributes. I quickly wrote a potential API in a comment before I’d written any code for it.

/**
 * <odi-graph> custom element.
 * ___________________________
 *
 * Basic usage:
 *
 *   <odi-graph>Text description as a fallback</odi-graph>
 *
 * Attributes (all are optional):
 *
 *   <odi-graph graph-title="This is a graph"></odi-graph>
 *     Add a title to the graph
 *
 *   <odi-graph rolling-average="true"></odi-graph>
 *     Include a rolling average line (off by default)
 *
 * [... etc.]
 */

This allowed me to write the article with in-context HTML like so:

In order to find the answer, I graphed out a 100-innings rolling average, to give a better indication of trends over time.

<odi-graph graph-title="Overall vs rolling average"
    rolling-average="true" innings-points="false">
    IMAGE: A graph showing a rolling average halfway mark as described in the next paragraph.
</odi-graph>

Thanks to Skate, the initial definition of the custom element and its attributes was very simple.

skate('odi-graph', {
    attached: function (elem) {
        // Create internal DOM structure
        // [... SNIP: nothing but basic DOM element manipulation here ...]

        // Create the graph
        elem.graph = new ODIGraph();
        queue(elem, function () {
            elem.graph.init(cloneData(stats.data), configMapper(elem));
            onceVisible(elem, function () {
                elem.graph.render(elem);
            });
        });
    },
    attributes: {
        'graph-title': attrSetter,
        'rolling-average': attrSetter,
        'innings-points': attrSetter,
        'ybounds': attrSetter,
        'date-start': attrSetter,
        'date-end': attrSetter,
        'filter': attrSetter,
        'highlight': attrSetter,
        'reset-highlight-averages': attrSetter
    }
});

function attrSetter(elem, data) {
    if (data.newValue !== data.oldValue) {
        if (elem.graph && elem.graph.inited) {
            elem.graph.config(configMapper(elem));
        }
    }
}

The queue and onceVisible calls were just making sure that the graph wasn’t rendered too early, before the data finished loading. The key part is that it just calls out to a separate ODIGraph instance (described later on in this post) for separation of responsibilities. The attrSetter method made sure that the graph was re-rendered if any HTML attribute changed, after running the attributes through configMapper which just parsed the attribute strings into a JS config object.
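For reference, configMapper amounted to little more than this (a sketch covering a subset of the attributes; the exact parsing lives in the repo):

function configMapper(elem) {
    return {
        title: elem.getAttribute('graph-title') || '',
        rollingAverage: elem.getAttribute('rolling-average') === 'true',
        inningsPoints: elem.getAttribute('innings-points') !== 'false',
        dateStart: elem.getAttribute('date-start'),
        dateEnd: elem.getAttribute('date-end')
        // ...plus ybounds, filter, highlight, etc.
    };
}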

This gave me the enormous benefit of being able to just tweak attributes for a graph in browser devtools and see the graph instantly update.

Editing a graph element's HTML attributes instantly updates the SVG internals

Accessibility

As shown above, all graphs were written with some text inside the custom element. During the initialisation of the custom element, a <figure> element is created inside it, then the custom element’s original text content is moved to a hidden <figcaption>. This provides a description that is read out by screenreaders for those users who can’t see the graph visuals.
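In code, that initialisation looks roughly like this (a sketch; the class name used for visual hiding is an assumption):

function wrapInFigure(elem) {
    var figure = document.createElement('figure');
    var caption = document.createElement('figcaption');
    caption.className = 'visually-hidden'; // hidden visually, still announced
    caption.textContent = elem.textContent; // the original fallback text
    elem.textContent = '';
    figure.appendChild(caption);
    elem.appendChild(figure);
    return figure; // the SVG graph is rendered into this element
}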

An additional aspect of providing descriptive captions was making sure that the article text immediately after the graph described an interpretation of the graph data. Doing this ensured that no information was presented only via visuals.

For the finishing touches I added the aria-hidden="true" attribute to the axis and legend text in the graph. Since the graph was built with SVG, all the text elements would be read by screenreaders but without context. (After all, what does “15 20 25 30” mean without the visual positioning context of being on an axis?)
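With D3 already in play, that’s a one-liner (the selector here is illustrative):

// Stop screenreaders announcing bare axis/legend numbers without context
d3.selectAll('.axis text, .legend text').attr('aria-hidden', 'true');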

Responsive graph

For the actual graphs, I wanted them to be interactive, mobile-friendly, and (hopefully) not much work.

C3

I started off by using the C3 library for ease of development. It initially ticked a lot of boxes:

  • I could quickly get up and running with my existing data set.
  • It was responsive and auto-expanded to whatever size container it was put into.
  • Auto-scaling and highlighting of data points (with a vertical hover marker) came for free.

I was able to look at my data set in a few different ways with just some simple config tweaks. However, as I dug deeper into what I wanted the graphs to do, I found myself fighting against the library instead of working with it.

As an example, I wanted to combine a scatter plot for individual innings with one or more line plots for the averages. While this was possible with C3, it started to become an either/or situation. C3 gave me a vertical marker line when hovering anywhere on the graph (which is what I wanted), but only when I had just line plots – as soon as the scatter plot was added, the nice hover behaviour disappeared. The way the hover marker was implemented also started to fall down – it worked fine with around 100 data points, but as soon as the full data set of over 1800 points was added, the hovering stopped working.

Added to this was a C3 performance problem caused by the size of my data set. In order to hide data point markers on the line plots, C3 adds an SVG DOM node per point, then hides them by setting their opacity to 0. It does this so that they can be faded in with an animation (if they’re set to visible via an API call). In my case, this meant over 3500 DOM nodes were being created just to be hidden.

In the end I decided that while C3 is a nice charting library, it wasn’t set up for my use case. I wrote out a list of all the things I thought I could possibly want the graphs to do:

  • Combine scatter and line plots on the same graph.
  • Vertical marker to highlight data points when hovering, auto-snapping the line to the nearest data point to the mouse cursor.
  • Fully custom rendering of a tooltip when highlighting a data point.
  • Resize when the graph’s container size changes.
  • Plot data along the X axis by dates (C3 has an API for this, but it didn’t work with my data).
  • Define a title.
  • Optionally add background highlight regions for grouping data points, with each region having its own title.
  • Show a horizontal background line marking 30 overs as a reference.
  • Have good performance – create only as many DOM nodes as necessary for displaying the data.

After quickly looking at some other charting libraries, I bit the bullet and accepted that I was going to have to roll my own.

Custom rendering

Note: All the code for the graphs can be found on GitHub. It changed structure many times as extra requirements turned up, so it’s far from perfect. I didn’t see the need for lots of refactoring, given its one-time-use purpose.

I won’t go into a super-detailed explanation of the code, as most of it is just calling D3 APIs. Instead I’ll detail the high-level concepts.

Most of the contents can be broken down into three main areas of responsibility: setup, sizing, and display.

First is setup. This is only called once per graph on initialisation. The only thing setup code is concerned with is setting up D3 axis components, creating placeholder DOM nodes and adding event listeners.

Second is sizing. This takes DOM nodes created in the setup phase and sets their positioning and dimensions based on the current size of the graph container. The ranges of D3’s axis and scale components are also defined here.

Finally there’s display. This is where the main rendering takes place. The data array that’s passed in is rendered according to a few constraints:

  1. The options already set – filtering, highlight regions, etc.
  2. The dimensions and positioning defined in the sizing phase.

This kind of structure means that when a graph is first created, it calls the setup, sizing and display phases in succession. From then on, resizing the browser window calls straight into sizing and display, meaning the graphs will always fit into the current window (after a short delay due to debouncing). Finally, if graph.data(someNewData) is called, only the display phase is called because the sizing hasn’t changed.
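As a rough skeleton of that structure (function names are illustrative; the real code is in the repo):

function debounce(fn, ms) {
    var timer;
    return function () {
        clearTimeout(timer);
        timer = setTimeout(fn, ms);
    };
}

function makeGraph(container, data, config) {
    function setup()   { /* once: D3 axis components, placeholder nodes, listeners */ }
    function sizing()  { /* measure the container; set D3 ranges and dimensions */ }
    function display() { /* render `data` according to `config` and current sizing */ }

    // First render: all three phases in succession
    setup();
    sizing();
    display();

    // Window resizes skip setup entirely
    window.addEventListener('resize', debounce(function () {
        sizing();
        display();
    }, 150));

    return {
        data: function (newData) {
            data = newData;
            display(); // dimensions haven't changed, so no sizing pass needed
        }
    };
}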

Tweaks for better responsiveness

Although the main data rendering was now responsive to any window size, I realised that some of the text elements didn’t work at small screen sizes. The different types of text required different solutions.

Graph titles could be highly variable in width, as the text is completely different per graph. I ended up with a solution that allowed me to specify different titles in the one attribute – a “primary” title and a shorter variant to be used only if the primary one didn’t fit.

I picked a couple of arbitrary unicode characters that were not going to appear anywhere else, and could be easily typed on my Mac keyboard. One character (· or Option+Shift+9) was used to define a cut-off point in a single title – if the full title didn’t fit in the graph, then only the text up to the cut-off point would be used. The other character (¬ or Option+l (lowercase L)) marked two completely different titles.

Examples from the code comments give a better idea of how they were used:

/**
 * Formatting for titles
 *
 *   `graph-title` attribute and highlight region names can indicate short vs long content.
 *   The short content will be used if the long version can't fit in the space available.
 *   « = short version; » = long version
 *
 *   "This is a title"
 *     « This is a title
 *     » This is a title
 *
 *   "Short title· with some extra"
 *     « Short title
 *     » Short title with some extra
 *
 *   "Short title¬Completely different title"
 *     « Short title
 *     » Completely different title
 */
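Parsing those markers only takes a few lines (a hypothetical helper, not the repo’s exact code):

function parseTitle(raw) {
    if (raw.indexOf('¬') !== -1) {
        // Two completely different titles
        var parts = raw.split('¬');
        return { short: parts[0], long: parts[1] };
    }
    var cut = raw.indexOf('·');
    if (cut !== -1) {
        // A single title with a cut-off point
        return {
            short: raw.slice(0, cut),
            long: raw.slice(0, cut) + raw.slice(cut + 1)
        };
    }
    return { short: raw, long: raw };
}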

The other text that didn’t work at small sizes was the legend at the bottom of the graphs. Initially I had added the legend as part of the same SVG element as the graph to keep everything together. I played with some different ways of trying to make the parts of the legend wrap nicely at smaller sizes, but this was one area where SVG worked against me. In the end I realised I was trying to replicate with SVG groups and transforms what CSS gave me for free. I ended up creating one SVG element per legend type, setting display: inline-block on them and letting CSS handle the rest.

Here is a comparison of how the title and legend worked at different graph sizes.

A wide graph with full title text

A narrow graph with reduced title text and wrapped legend

Final touches

With the core functionality done and the graphs finally doing what I wanted, it was time to add in some polish.

Only render when visible

There ended up being 7 graphs in the article. Rendering them all on page load would be a noticeable drag on performance, with over 5000 DOM nodes being inserted into the page.

Instead I used the Verge library to detect when a graph’s custom element became visible for the first time (using a debounced scroll event listener). Once the element becomes visible, the graph setup/sizing/render methods are called for the first time. This makes the initial page load snappy, and also stops the browser from having to render any graph that is never viewed.
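In outline, the check looks something like this (a sketch: verge.inViewport is Verge’s API, while debounce and renderGraph stand in for the real helpers):

var pending = [].slice.call(document.querySelectorAll('odi-graph'));

var checkVisible = debounce(function () {
    pending = pending.filter(function (elem) {
        if (!verge.inViewport(elem)) {
            return true; // still offscreen; keep checking
        }
        renderGraph(elem); // runs the setup/sizing/display phases
        return false;
    });
    if (!pending.length) {
        window.removeEventListener('scroll', checkVisible);
    }
}, 150);

window.addEventListener('scroll', checkVisible);
checkVisible(); // catch graphs already visible on page load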

Customised Markdown

The article was written using basic Markdown syntax, but with a couple of custom additions. One addition was for adding numeric callouts that appear on the side (using a syntax of ++1234++). The other addition was a helper for easily adding links to specific matches on Cricinfo. The Markdown source was read by a quick-and-dirty Python script, run through a Markdown renderer, then run through extra string replacements for the custom additions. Finally the HTML output was inserted into a basic page template to produce the final article.

So that’s how it was made. Enjoy.