Category Archives: Development

Using Make to generate Chrome extension icons

File this under “you don’t have to use JS for everything”.

Problem

I’ve built a few Chrome browser extensions, and there’s a part of making them that I generally find rather tedious: icons. To account for all the different places where icons are used, as well as multiple screen resolutions, 6 differently-sized icon files are required. Sometimes I’ve manually exported a few of these from Photoshop for one-off projects, but my most recent extension required a few updates to the icon. This was the tipping point for adding automation.

In this post I’ll detail how I used only a few lines of a Makefile to handle all icon size conversions. Yep, Make. No JavaScript-based build system with a multitude of npm modules and seconds of overhead for every run. Sometimes the old ways are the best.

Solution

Taking advantage of some special features of Make, I came up with a short Makefile that generates every icon size from a single master image.
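As a rough sketch of the shape of it (the sizes, file names and the use of ImageMagick’s convert here are my assumptions; the real Makefile is in the full post):

# Sketch only: generate every icon size from one master image
SIZES := 16 19 32 38 48 128
ICONS := $(SIZES:%=icons/icon-%.png)

all: $(ICONS)

# A single pattern rule covers every size via the stem ($*)
icons/icon-%.png: icon-master.png
	mkdir -p icons
	convert icon-master.png -resize $*x$* $@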

Continue reading

Strip analytics URL parameters before bookmarking

Have you ever found yourself bookmarking a URL that contains a heap of tracking parameters? Generally they’re Google Analytics campaign referral data (e.g. http://example.com/?utm_source=SomeThing&utm_medium=email).

I use browser bookmarks a lot, mostly as a collection of references around particular topics. The primary source for most of my web development-related bookmarks is a variety of regular email newsletters that I subscribe to.

What many of these have in common is the addition of Google Analytics tracking parameters to the URL. This is great for the authors of the content, as they have the option to see where spikes in traffic come from. When I’m saving a link for later, though, I don’t want to keep any of the extra URL cruft. It would also give false tracking results if I opened a link with that URL data a year or two later.

So I wrote a quick snippet to strip any Google Analytics tracking parameters from a URL, which I can run just before bookmarking the link.
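The real snippet is in the full post, but the gist is something like this sketch (using the URL API; the original code may well differ):

// Sketch: strip any utm_* query parameters from a URL
function stripAnalyticsParams(url) {
    var parsed = new URL(url);
    var remove = [];
    parsed.searchParams.forEach(function (value, key) {
        if (/^utm_/i.test(key)) {
            remove.push(key);
        }
    });
    remove.forEach(function (key) {
        parsed.searchParams.delete(key);
    });
    return parsed.toString();
}

// stripAnalyticsParams('http://example.com/?utm_source=SomeThing&utm_medium=email')
// -> 'http://example.com/'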

Continue reading

LOSS: Lazy Open-Source Software

Update, June 2020: Some details of this post were accurate at the time it was written, but aren’t accurate right now. Writing code is sometimes my primary job again (depending on the role). However, the main message still holds true.


I work on open-source projects as a hobby. Writing code used to be my primary job but it hasn’t been for a few years now.

I make my hobby projects’ source code open by default, mainly because someone might benefit from having it available. I write code primarily for myself, but if someone else happens to find it beneficial, cool! I know I’ve benefited a lot from the open source projects of others in the past, so why not return the favour?

But here’s the thing. Because I’ve relegated coding to just a hobby, I’m actually terrified of something I’ve written becoming very popular. I don’t like the thought of one of my projects hitting 10000+ GitHub stars (or however else you want to measure popularity) — then getting a heap of issues and pull requests as a result. In fact, that thought scares the shit out of me. I don’t want that responsibility.

Project ideas arrive in my head far faster than I can build them. I write them all down, and watch the list grow ever larger. At some point I had to learn to accept that the vast majority of them would never even be started, let alone finished. I had to prioritise them. Over time, I realised that certain types of ideas kept getting relegated to the bottom of the list: not the ones that would require the most work to create, but the ones that would need the largest amount of ongoing maintenance.

Thus I realised that my model for coding is not FOSS, it’s LOSS — Lazy Open-Source Software. I’m happy to write something new and try out different ideas, but coming back to maintain old projects is painful. Some people say that’s just ignoring problems, but I don’t particularly care.

My coding time is pretty much restricted to train rides to and from the office, and maybe a bit of time of an evening. Therefore I try to maximise the enjoyment I get out of that time. I fuck around with some code for a bit and make something that I like. But most of the time I like to focus on other things at home — family, non-coding hobbies, and just generally switching off.

What I’m trying to say is… actually, I have no idea what I’m trying to say. I just thought of the LOSS acronym and ran with it. I guess the underlying point I’m making is, if you’re relying on me to fix up some old open source code, I’m sorry. It’s on the list somewhere.

Responsive graphs and custom elements

A few weeks ago I published a statistical analysis of one-day cricket – specifically a verification of the old “double the 30-over score” trope.

This blog post is not about cricket – you can read the link above for that. This is a post about the technology behind that analysis. Ben Hosken from Flink Labs says around 60% of a data visualisation project is just collecting and cleaning the data – and so it was with this analysis. What was originally a “quick distraction” became my coding-on-the-train hobby project for 3 weeks. Here’s how I built it.

Data

All data came from Cricinfo, thanks to their ball-by-ball statistics. I won’t go into much detail about how I got the data because I don’t know what their data usage guidelines are (I looked). Suffice it to say that I ended up with over 7000 JSON files on my machine.

From these I wrote a series of Python scripts to parse, verify, clean up, analyse and collate the data for each innings. The final output was a single JSON file containing only the relevant data I needed.

Custom element

In order to fit the graphs neatly into the context of a written article, I wanted to be able to embed them as part of the writing process. The article (source on GitHub) was written using Markdown with some custom additions (which are described later). Markdown was used despite my personal annoyances with some aspects of it, mainly because it’s widely supported.

The easiest way to put a custom graph into the article was to plonk some HTML in the Markdown source and let it be copied across to the rendered output – but what HTML? I needed containers for SVG graphs that would be rendered later via JavaScript. Given that there would be a few graphs, the containers needed to define customisations per graph, preferably as part of the article source (for better context when writing).

The usual choice for something like this would be a plain old <div> with lots of data- attributes, one for each option. Unfortunately a plain element doesn’t react when its attributes change, so that approach wouldn’t let me easily tweak the options after rendering. I consider that important for data explorations, so that you can get the best possible view of the data.

Given the one-off nature of the article, I had the opportunity to use “new shiny” tech in the form of a custom element, part of the new Web Components specification. Obviously proper custom elements aren’t widely supported across browsers yet, so I used the excellent Skate library as a helper (disclaimer: I work on the same team as Skate’s author).

HTML API

Using a custom element meant that my in-context API was simple HTML attributes. I quickly wrote a potential API in a comment before I’d written any code for it.

/**
 * <odi-graph> custom element.
 * ___________________________
 *
 * Basic usage:
 *
 *   <odi-graph>Text description as a fallback</odi-graph>
 *
 * Attributes (all are optional):
 *
 *   <odi-graph graph-title="This is a graph"></odi-graph>
 *     Add a title to the graph
 *
 *   <odi-graph rolling-average="true"></odi-graph>
 *     Include a rolling average line (off by default)
 *
 * [... etc.]
 */

This allowed me to write the article with in-context HTML like so:

In order to find the answer, I graphed out a 100-innings rolling average, to give a better indication of trends over time.

<odi-graph graph-title="Overall vs rolling average"
    rolling-average="true" innings-points="false">
    IMAGE: A graph showing a rolling average halfway mark as described in the next paragraph.
</odi-graph>

Thanks to Skate, the initial definition of the custom element and its attributes was very simple.

skate('odi-graph', {
    attached: function (elem) {
        // Create internal DOM structure
        // [... SNIP: nothing but basic DOM element manipulation here ...]

        // Create the graph
        elem.graph = new ODIGraph();
        queue(elem, function () {
            elem.graph.init(cloneData(stats.data), configMapper(elem));
            onceVisible(elem, function () {
                elem.graph.render(elem);
            });
        });
    },
    attributes: {
        'graph-title': attrSetter,
        'rolling-average': attrSetter,
        'innings-points': attrSetter,
        'ybounds': attrSetter,
        'date-start': attrSetter,
        'date-end': attrSetter,
        'filter': attrSetter,
        'highlight': attrSetter,
        'reset-highlight-averages': attrSetter
    }
});

function attrSetter(elem, data) {
    if (data.newValue !== data.oldValue) {
        if (elem.graph && elem.graph.inited) {
            elem.graph.config(configMapper(elem));
        }
    }
}

The queue call just made sure that the graph wasn’t rendered too early, before the data finished loading, while onceVisible deferred rendering until the element was actually visible (more on that under “Final touches”). The key part is that it all calls out to a separate ODIGraph instance (described later on in this post) for separation of responsibilities. The attrSetter function made sure that the graph was re-rendered if any HTML attribute changed, after running the attributes through configMapper, which parsed the attribute strings into a JS config object.
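As a hedged illustration, configMapper did little more than something like this (the real implementation is in the project repo; the exact option names here are assumptions):

// Sketch: parse string attributes into a typed config object
function configMapper(elem) {
    return {
        title: elem.getAttribute('graph-title') || '',
        rollingAverage: elem.getAttribute('rolling-average') === 'true',
        inningsPoints: elem.getAttribute('innings-points') !== 'false',
        dateStart: elem.getAttribute('date-start'),
        dateEnd: elem.getAttribute('date-end')
    };
}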

This gave me the enormous benefit of being able to just tweak attributes for a graph in browser devtools and see the graph instantly update.

Editing a graph element's HTML attributes instantly updates the SVG internals

Accessibility

As shown above, all graphs were written with some text inside the custom element. During the initialisation of the custom element, a <figure> element is created inside it, then the custom element’s original text content is moved into a hidden <figcaption>. This provides a description that is read out by screenreaders for users who can’t see the graph visuals.
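A sketch of that initialisation step (assumed structure and class name; the real code differs in its details):

// Move the custom element's fallback text into a visually-hidden
// <figcaption>, leaving the <figure> free for the SVG graph
var figure = document.createElement('figure');
var caption = document.createElement('figcaption');
caption.className = 'visually-hidden'; // hidden via CSS, still read aloud
caption.textContent = elem.textContent;
elem.textContent = '';
figure.appendChild(caption);
elem.appendChild(figure);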

An additional aspect of providing descriptive captions was making sure that the article text immediately after the graph described an interpretation of the graph data. Doing this ensured that no information was presented only via visuals.

For the finishing touches I added the aria-hidden="true" attribute to the axis and legend text in the graph. Since the graph was built with SVG, all the text elements would be read by screenreaders but without context. (After all, what does “15 20 25 30” mean without the visual positioning context of being on an axis?)
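With D3 (which renders the graphs, as described below) that only takes a line or two. A sketch, with the selectors assumed:

// Hide decorative axis/legend text from assistive technology;
// the hidden <figcaption> already carries the description
var graphSvg = d3.select('odi-graph svg'); // assumed selector
graphSvg.selectAll('.axis text, .legend text')
    .attr('aria-hidden', 'true');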

Responsive graph

For the actual graphs, I wanted them to be interactive, mobile-friendly, and (hopefully) not much work.

C3

I started off by using the C3 library for ease of development. It initially ticked a lot of boxes:

  • I could quickly get up and running with my existing data set.
  • It was responsive and auto-expanded to whatever size container it was put into.
  • Auto-scaling and highlighting of data points (with a vertical hover marker) came for free.

I was able to look at my data set in a few different ways with just some simple config tweaks. However, as I dug deeper into what I wanted the graphs to do, I found myself fighting against the library instead of working with it.

As an example, I wanted to combine a scatter plot for individual innings with one or more line plots for the averages. While this was possible with C3, it started to become an either/or situation. C3 gave me a vertical marker line when hovering anywhere on the graph (which is what I wanted), but only when I had just line plots – as soon as the scatter plot was added, the nice hover behaviour disappeared. The way the hover marker was implemented also started to fall down – it worked fine with around 100 data points, but as soon as the full data set of over 1800 points was added, the hovering stopped working.

Added to this was a performance problem with C3 given the size of my data set. In order to hide data point markers on the line plots, C3 adds an SVG DOM node per point, then hides them by setting their opacity to 0. It does this so that the points can be faded in with an animation (if they’re set to visible via an API call). In my case, this meant over 3500 DOM nodes were being created just to be hidden.

In the end I decided that while C3 is a nice charting library, it wasn’t set up for my use case. I wrote out a list of all the things I could think of that I might possibly want the graphs to do:

  • Combine scatter and line plots on the same graph.
  • Vertical marker to highlight data points when hovering, auto-snapping the line to the nearest data point to the mouse cursor.
  • Fully custom rendering of a tooltip when highlighting a data point.
  • Resize when the graph’s container size changes.
  • Plot data along the X axis by dates (C3 has an API for this, but it didn’t work with my data).
  • Define a title.
  • Optionally add background highlight regions for grouping data points, with each region having its own title.
  • Show a horizontal background line marking 30 overs as a reference.
  • Have good performance – create only as many DOM nodes as necessary for displaying the data.

After quickly looking at some other charting libraries, I bit the bullet and accepted that I was going to have to roll my own.

Custom rendering

Note: All the code for the graphs can be found on GitHub. It changed structure many times as extra requirements turned up, so it’s far from perfect. I didn’t see the need for lots of refactoring, given its one-time-use purpose.

I won’t go into a super-detailed explanation of the code, as most of it is just calling D3 APIs. Instead I’ll detail the high-level concepts.

Most of the contents can be broken down into three main areas of responsibility: setup, sizing, and display.

First is setup. This is only called once per graph on initialisation. The only thing setup code is concerned with is setting up D3 axis components, creating placeholder DOM nodes and adding event listeners.

Second is sizing. This takes DOM nodes created in the setup phase and sets their positioning and dimensions based on the current size of the graph container. The ranges of D3’s axis and scale components are also defined here.

Finally there’s display. This is where the main rendering takes place. The data array that’s passed in is rendered according to a few constraints:

  1. The options already set – filtering, highlight regions, etc.
  2. The dimensions and positioning defined in the sizing phase.

This kind of structure means that when a graph is first created, it calls the setup, sizing and display phases in succession. From then on, resizing the browser window calls straight into sizing and display, meaning the graphs will always fit into the current window (after a short delay due to debouncing). Finally, if graph.data(someNewData) is called, only the display phase is called because the sizing hasn’t changed.
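A condensed sketch of that flow (all names here are illustrative rather than the exact ones from the repo):

function debounce(fn, wait) {
    var timer;
    return function () {
        clearTimeout(timer);
        timer = setTimeout(fn, wait);
    };
}

function Graph(container) {
    this.container = container;
    this._data = [];
}
Graph.prototype.setup = function () { /* axes, placeholder DOM nodes, listeners */ };
Graph.prototype.sizing = function () { /* dimensions and scale/axis ranges */ };
Graph.prototype.display = function () { /* draw this._data within the current sizing */ };

Graph.prototype.init = function (data) {
    this._data = data;
    // First render: all three phases in succession
    this.setup();
    this.sizing();
    this.display();
    // Window resizes skip setup; sizing + display re-run (debounced)
    window.addEventListener('resize', debounce(function () {
        this.sizing();
        this.display();
    }.bind(this), 150));
};

Graph.prototype.data = function (newData) {
    // New data: the sizing hasn't changed, so only display re-runs
    this._data = newData;
    this.display();
};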

Tweaks for better responsiveness

Although the main data rendering was now responsive to any window size, I realised that some of the text elements didn’t work at small screen sizes. The different types of text required different solutions.

Graph titles could be highly variable in width, as the text is completely different per graph. I ended up with a solution that allowed me to specify different titles in the one attribute – a “primary” title and a shorter variant to be used only if the primary one didn’t fit.

I picked a couple of arbitrary Unicode characters that were not going to appear anywhere else, and could be easily typed on my Mac keyboard. One character (· or Option+Shift+9) defined a cut-off point in a single title – if the full title didn’t fit in the graph, then only the text up to the cut-off point would be used. The other character (¬ or Option+l (lowercase L)) separated two completely different titles.

Examples from the code comments give a better idea of how they were used:

/**
 * Formatting for titles
 *
 *   `graph-title` attribute and highlight region names can indicate short vs long content.
 *   The short content will be used if the long version can't fit in the space available.
 *   « = short version; » = long version
 *
 *   "This is a title"
 *     « This is a title
 *     » This is a title
 *
 *   "Short title· with some extra"
 *     « Short title
 *     » Short title with some extra
 *
 *   "Short title¬Completely different title"
 *     « Short title
 *     » Completely different title
 */
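The parsing side of this amounted to something like the following sketch (the real code is in the repo; this is just the idea):

// Split a raw title into short and long variants based on the
// · (cut-off) and ¬ (separator) marker characters
function parseTitle(raw) {
    if (raw.indexOf('¬') > -1) {
        var parts = raw.split('¬');
        return { short: parts[0], long: parts[1] };
    }
    if (raw.indexOf('·') > -1) {
        return {
            short: raw.slice(0, raw.indexOf('·')),
            long: raw.replace('·', '')
        };
    }
    return { short: raw, long: raw };
}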

The other text that didn’t work at small sizes was the legend at the bottom of the graphs. Initially I had added the legend as part of the same SVG element as the graph, to keep everything together. I played with different ways of making the parts of the legend wrap nicely at smaller sizes, but this was one area where SVG worked against me. In the end I realised I was trying to replicate with SVG groups and transforms what CSS gave me for free, so I created one SVG element per legend type, set display: inline-block on them and let CSS handle the rest.

Here is a comparison of how the title and legend worked at different graph sizes.

A wide graph with full title text

A narrow graph with reduced title text and wrapped legend

Final touches

With the core functionality done and the graphs finally doing what I wanted, it was time to add in some polish.

Only render when visible

There ended up being 7 graphs in the article. Rendering them all on page load would be a noticeable drag on performance, with over 5000 DOM nodes being inserted into the page.

Instead I used the Verge library to detect when a graph’s custom element became visible for the first time (using a debounced scroll event listener). Once the element becomes visible, the graph setup/sizing/render methods are called for the first time. This makes the initial page load snappy, and also stops the browser from having to render any graph that is never viewed.
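A sketch of how such a check can be wired up (verge.inViewport() is Verge’s real API; the surrounding wiring here is assumed):

function debounce(fn, wait) {
    var timer;
    return function () {
        clearTimeout(timer);
        timer = setTimeout(fn, wait);
    };
}

// Run callback once, the first time elem scrolls into view
function onceVisible(elem, callback) {
    var check = debounce(function () {
        if (verge.inViewport(elem)) {
            window.removeEventListener('scroll', check);
            callback();
        }
    }, 100);
    window.addEventListener('scroll', check);
    check(); // the element might already be visible on page load
}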

Customised Markdown

The article was written using basic Markdown syntax, but with a couple of custom additions. One addition was for adding numeric callouts that appear on the side (using a syntax of ++1234++). The other addition was a helper for easily adding links to specific matches on Cricinfo. The Markdown source was read by a quick-and-dirty Python script, run through a Markdown renderer, then run through extra string replacements for the custom additions. Finally the HTML output was inserted into a basic page template to produce the final article.
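As an illustration, the ++1234++ callout replacement boils down to a single regex pass. The actual script is Python, but here it is sketched in JS for consistency with the rest of this post (the class name is an assumption):

// Turn ++1234++ into a markup hook for the side callout styling
function addCallouts(html) {
    return html.replace(/\+\+(\d+)\+\+/g,
        '<span class="callout">$1</span>');
}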

So that’s how it was made. Enjoy.

From little things big things grow

It’s nearly the end of 2014 — time for a little bit of (perhaps clichéd) reflection on the year that was. Sometimes you look back on a sequence of events in your life and can trace them back to a single catalyst. This particular story starts with a tweet:

This strikes a chord with ideas I’d been vaguely mulling over, and a quick conversation ensues:

And so begins an unexpected journey.

Little things

I put together a talk proposal for CSSConfAU about better browser devtools, especially support for CSS3 features such as transforms and animations. I didn’t expect it to be accepted, having fully admitted it was ideas and prototypes, not finished features. I also submitted a talk proposal for JSConfAU, just to hedge my bets.

I started a little bit of work on a single prototype devtools extension, but didn’t take it very far. Then I was notified that my proposal had made it into the second round. Although I don’t like it, I’m fully aware that I’m at my most prolific and productive when there’s a looming deadline involved. I had one month to complete something that could be presented. PANIC!

A couple of weeks later I was notified on consecutive days that I wouldn’t be presenting at CSSConfAU, but I would be presenting at JSConfAU. Now it was panic of a different kind as I switched contexts completely and worked on a presentation in Keynote. The end result of JSConfAU is a story for another post, but I find it interesting how neatly my GitHub activity log sums up the preparation work.

IMAGE: My GitHub activity graph, showing the burst of preparation work around JSConfAU

Scratch an itch

After the dust had settled from JSConfAU and I’d taken a short break from doing anything related to coding (as shown in the picture above), I felt like continuing what I’d started with the devtools prototypes. A bit of playing around with ideas led to not much output, so a switch in tactics was called for.

Writing or talking about an idea stimulates the brain differently from abstract thinking alone: it creates connections and reaches conclusions that might not have been reached had you just stuck with thinking. (This is also why “rubber duck debugging” works so often.)

Somewhere in the back of my mind I knew there was something I found not quite right about how browser devtools worked, but I couldn’t pick exactly what it was. So I wrote words, not code. I wrote and wrote and wrote, and while writing I hit the proverbial nail on the head. A bigger theory had been found.

In the end I kept the Dr. Seuss analogy because it worked so well. (Incidentally, after enquiring with the legal team at Random House, I now know the attribution required when writing a blog post that quotes a Dr. Seuss book. Chalk up another item on the list of “Things I unexpectedly learned by being a web developer”.) I worked on it, cleaned it up, edited some more and was finally happy with it.

Time to go back to coding, but this time I had a vision and something to aim for. Thanks to writing a draft blog post first, I had rich descriptions of what the ideas should look like, so all that was left was to build things as quickly as possible. After that came the almost endless preparation of screenshots, YouTube videos and animated GIFs, as well as creating a heap of feature requests for Chrome and Firefox, but finally I was ready to publish and promote.

Big things

Putting a heap of effort into something like The Sneetches and other DevTools wasn’t worth it if it never reached the people actually responsible for building browser devtools. Over the course of a few days I sent the link to a few key people on Twitter, who worked on either Chrome or Firefox devtools in some capacity.

The response was better than I’d hoped for.

I’ve had brief conversations online with various people involved in Chrome and Firefox devtools and got a bit of traction on some feature requests.

Just recently, I was invited to Google Sydney’s office to get a preview of what’s planned for animation support in Chrome’s devtools, and give feedback on what works and what doesn’t. Soon I’ll be meeting someone working on WebKit devtools to discuss ideas in a similar manner.

Regardless of how accurate it may be, I feel like I’ve had at least a small impact on the future of browser devtools, which is pretty damn surprising. If someone had told me a year ago that I’d be in this situation today, frankly I’d have wondered how I’d become trapped in such a trite cliché of retrospective story telling, but I’d be amazed nonetheless.

And it all started with a random Twitter conversation. Thanks, Ben! By coincidence, the Call for Proposals for the next CSSConfAU has just opened up this week…

So here’s to 2015. I have absolutely no clue where it’s going to take me, I’m just going to continue to make it up as I go along and see what happens.

P.S.

No blog post with this title from an Australian could possibly be complete without including the following song. In fact, I’m pretty sure it’s required by an Australian law somewhere. Honest.

Web Directions South 2014 in 4 words

An experiment in brevity.

What happens when you combine a conference report with Four Word Film Reviews? An attempt to review the conference – and each of the presentations I saw – in just four words.

WDS14: “Don’t forget we’re human”

  • Matt Webb – Interconnected: “Extend yourself. Start small.”
  • Scott Thomas – Doing Simple. Honest. Work: “Understand first; design later.”
  • Emily Nakashima – The Operable Front End: “Log -> monitor -> fix -> repeat.”
  • Erin Moore – Convenient Fictions: “Real time not ‘realtime’”
  • Mark Dalgleish – A State of Change: “Immutability beats observing mutations.”
  • Julio Cesar Ody – The Loaded Javascript: “Defer, async or other?”
  • Sarah Maddox – Bit Rot in the Docs: “Test docs like code.”
  • Dan Hon – An Internet for Humans, Too: “Reduce the empathy gap.”
  • Genevieve Bell – Being Human in a Digital World: “Behavioural fundamentals don’t change.”
  • Johnny Mack – Building Trust: “Form. Storm. Norm. Perform.”
  • Jonathon Colman – Build Better Content: “Seek clarity, speak human.”
  • Douglas Bowman – A Voice for Everyone: “Positivity of random connections.”
  • Tobias Revell – Haunted Machines: “Technological magicians: Think responsibly.”

Some background context

Two weeks ago I attended Web Directions South 2014. I wanted to take notes so I could write a recap to share internally at work, but I didn’t want to write a wall of text like last year’s recap.

This year I took the approach that I would write some minimal notes after each presentation. This meant that I wasn’t so distracted by writing notes that I stopped absorbing the presentation content. It also gave me time to reflect on the presentation in “down time” and distill the key points. (The only exception to the rule was when I wanted to capture a link or an exact quote.)

When trying to summarise the presentations for the write-up, I forced myself to give a synopsis of each talk using only one sentence. While doing this, I suddenly remembered one of my old favourite sites that I hadn’t looked at in a long time: Four Word Film Reviews.

Inspiration struck, and this post is the result. (Though unfortunately it probably doesn’t make sense to anyone who wasn’t at the conference to begin with.)

Potential browser devtools support for responsive images

Just a quick one as I’m still in the developer tools headspace after The Sneetches and other DevTools.

On the weekend I saw that support for responsive images – via the <picture> element – landed in Chrome 38. Naturally I tried out the test page listed in the article to see how it was put together.

After a short time inspecting the page, I realised that while the Chrome browser has support for <picture>, the Chrome DevTools don’t. When inspecting a <picture> element I found myself completely unable to tell which image was actually showing at any one time.

Obviously this feature is still in its infancy and I’m sure the developer tools will catch up soon. In the meantime, though, I’m going to present my completely-not-asked-for suggestions for Chrome DevTools support of <picture>.

Suggestion #1: Fade out non-matching sources

Within a <picture> element there are multiple sources of images for the browser to choose from. These are provided by <source> elements with an <img> element as the final fallback image (to ensure compatibility with browsers that don’t support <picture>).

The current DevTools display of a <picture> element

My suggestion is to use the technique seen in the Styles panel and fade out any sources that are not being shown, only highlighting the currently-visible source.

DevTools with non-active image sources faded out

In fact, a <source> element can define multiple images in its srcset attribute to support multiple resolutions. This technique could be extended to the srcset attribute as well, to show which specific part of the srcset is currently applied.

DevTools with non-active image sources faded out in the srcset attribute

Of course, the <img> element is the one actually displaying the chosen source, so it’s debatable whether it should be faded out. Either way, I’m just playing with ideas here.

Suggestion #2: Hover previews for srcset

<img> elements show a preview of the image when you hover over the src attribute.

Hovering over an image source in DevTools shows a preview

This should also be extended to the srcset attribute of a <source> element, even when it lists multiple sources.

Hovering over an srcset attribute in DevTools shows a preview

Suggestion #3: Explicitly indicate the currently visible image source

I don’t know if “computed attributes” (my made-up term) are likely to become a thing in devtools, but here is an attempt anyway. The basic idea is to show an attribute on the <img> element that indicates the current source image, as a representation of the currentSrc property.

DevTools with an image's current source shown as a computed attribute

(Clearly it’s not a perfect visual design, but it gets the point across.)

Conclusion

I don’t have one, other than saying that using devtools to alter devtools is a surprising amount of fun, and that this mini project was a wonderful distraction from whatever it was I was meant to be doing in my spare time.

The Sneetches and other DevTools

In which I try to distill some of my vapourware thoughts into fluid ideas on improving the guiding principle behind developer tools in browsers.

TL;DR: In the wonderful words of one of my colleagues: “There is no TL;DR – learn to Instapaper.” Some things actually take time to read.

Part I – Pre(r)amble

When it comes to browser developer tools, Chrome and Firefox are currently in a sort of friendly arms race; each innovation from one drives new ideas for the other, and vice versa. This can only be a benefit to web developers everywhere.

This speed of innovation is also necessary in the face of the rapid development of new W3C specs for both CSS and JS. There are new CSS modules and JS APIs popping up all the time. Some of these require new thinking about how to interact with them in devtools. Firefox’s new Web Audio API node inspector is a brilliant example of this.

A little while ago, Mozilla started using UserVoice to collect ideas for ways to improve Firefox’s devtools, which spurred me to write this. I’d been thinking for a few months about ways of improving the devtools to cater for certain new(ish) features of CSS. Along the way I realised that there’s a fundamental assumption in most devtools features that should be challenged.

So, rather than continue to have the ideas just rattling around my head and failing to be developed, it’s time to subject them to a mass audience.

Continue reading

Auto-detecting time zones at JSConfAU

Last week I was lucky enough to present at JSConfAU. My talk was titled “Auto-detecting time zones in JS” (with a sub-title of “Are you a masochist?”).

Part I – Technical details

Taking a tour of terrible temporal twists and turns

A lot of the talk delved into the history of time and time zones, pointing out various oddities and things that can trip you up if you try to deal with them. I won’t repeat all of them here – partly because I doubt anyone would read the whole lot, partly because I just want to focus on the two main points I made:

Continue reading

Compute a DOM element’s effective background colour

While working on a new JS library today, I wanted to create a menu overlay element that automatically has the same background colour as its trigger element.

For some context, the basic structure looks like this:

Basic element structure

Over-the-top colours added to highlight element structure

My first thought was to just use getComputedStyle() and read the backgroundColor property of the trigger. This works only if the trigger element itself has a background style.

Of course, if the background styles are set on a container element higher up the DOM tree, that method is a great big ball of uselessness. The problem is that the trigger element reports its background colour as being transparent (which is correct, but still annoying for my purpose).

My solution was to walk up the DOM tree until an element with a background colour is found:

function getBackgroundColour(elem, limitElem) {
  var bgColour = '';
  var styleElem = elem;
  // Browsers report a transparent background as either "transparent" or
  // "rgba(0, 0, 0, 0)"; match either one as the whole computed value
  var rTrans = /^(transparent|rgba\(0, 0, 0, 0\))$/;
  var style;
  limitElem || (limitElem = document.body);
  // Walk up the tree until a non-transparent background is found,
  // stopping once limitElem itself has been checked
  while (!bgColour && styleElem !== limitElem.parentNode) {
    style = getComputedStyle(styleElem);
    if (!rTrans.test(style.backgroundColor)) {
      bgColour = style.backgroundColor;
    }
    styleElem = styleElem.parentNode;
  }
  return bgColour;
}

Notes

  • Different browsers return either “transparent” or “rgba(0, 0, 0, 0)” for a transparent background, so I had to make sure to account for both.
  • The limitElem parameter is there as a short-circuit, in case you only want to check within a certain container element rather than bubbling all the way up to document.body.
  • getComputedStyle() doesn’t work in IE8, but I wasn’t planning on making my library work in IE8 anyway. An equivalent API for that browser is element.currentStyle (a possible fallback is sketched after these notes).
  • This behaviour assumes that there aren’t any funky absolute positioning tricks happening that make the child elements appear outside their parent container.
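If IE8 support had mattered, the style lookup inside the loop could have been swapped for something like this untested sketch (note that currentStyle also reports colours in different formats, so the transparency check would need adjusting too):

// Prefer getComputedStyle; fall back to IE8's currentStyle
var style = window.getComputedStyle
    ? getComputedStyle(styleElem)
    : styleElem.currentStyle;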

Why am I blogging about this seemingly simple technique? Because I searched for a solution first and found bugger all, that’s why.

And before anyone asks: No, using jQuery will not help – it just defers to getComputedStyle() anyway.