Category Archives: JavaScript

New project: A Visual Studio Code extension for time zone data files

(A boring title, I know, but it’s stuffed full of juicy SEO keywords. Mmmmm.)

As mentioned in my previous project-related post, from now on I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

As many people know by now, I’m a time zone nerd. I’ve given talks at conferences and meetups about problems and solutions when dealing with time zones. I sit on the time zone database mailing list and read about last-minute changes to a country’s time zone with a mixture of fascination, horror, and (mostly) amusement.

The IANA time zone database (also known as the “Olson database”) source files are incredibly rich with information. About two thirds of the lines in each file are comments about the data and its history. The project is part database, part history lesson, part social commentary.

A couple of years ago I wrote a Sublime Text package that added syntax highlighting specifically for the formatting of the time zone database files (available via Package Control). While it’s great to read the comments surrounding the data, sometimes you want the comments to be styled like comments and fade away a little, to let the lines of data stand out.
It seems to have been well-received, and Matt Johnson of Microsoft (a frequent contributor to the mailing list) suggested an improvement to the text formatting.

A few months ago, Matt contacted me asking if I was interested in porting the package over to an extension for Visual Studio Code (hereafter referred to as “VS Code”, because I’m lazy and it’s easier to type). I regularly use VS Code for coding, so I figured it was a good way for me to explore the extension ecosystem. I’ll describe how I converted it, and some of the mistakes and improvements I made along the way, in case it’s helpful for someone else.

If you want to cut to the chase and use the finished extension, it’s available as zoneinfo in the VS Code marketplace.

Version 1 — Conversion and syntax highlighting

The initial conversion was remarkably easy. As highlighted in the documentation, the VS Code team provides an extension generator. The generator has a feature that will take a TextMate language definition file (used by Sublime Text) and convert it into a VS Code extension. Using the generator in this way meant that most of the hard work was done in one go.

npm install -g yo generator-code
yo code

A screenshot of using the "yo code" command line generator tool

I provided the name vscode-zoneinfo and the location of the .tmLanguage file from the Sublime Text package, and used the default values for the rest of the options (which were mostly derived from the contents of the .tmLanguage file). At the end of it I had a zoneinfo directory containing most of what I needed.

I had to tweak some of the generated data to properly match the Sublime Text package. For example, the zoneinfo.tmLanguage file defined specific file names (due to the naming convention of the tzdb source files), but the generator converted those to file extensions, which I had to change back.
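For reference, declaring specific file names rather than extensions happens in the language contribution in package.json. It ends up looking something like this (the file list here is abridged and illustrative, not the extension’s exact manifest):

```json
"contributes": {
  "languages": [{
    "id": "zoneinfo",
    "aliases": ["Zoneinfo"],
    "filenames": ["africa", "asia", "australasia", "europe", "northamerica"]
  }]
}
```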

The Sublime Text package also contains a default settings file to use the correct indentation, which was not detected by the VS Code extension generator.

{
    "detect_indentation": false,
    "translate_tabs_to_spaces": false,
    "tab_size": 8
}

Adding this into the VS Code extension was as easy as adding the same JSON properties to a specific section of the package.json file:

{
  // Other package.json stuff goes here

  "contributes": {
    "configurationDefaults": {
      "[zoneinfo]": {
        "editor.detectIndentation": false,
        "editor.insertSpaces": false,
        "editor.tabSize": 8
      }
    }
  }
}
Testing that the changes worked was easy thanks to the extension generator’s default configuration. It provides a “Launch Extension” command that starts up a new window of VS Code with your extension installed.

After that, it was a matter of the usual tuning and polishing of a new project before publishing it: a proper description and keywords in package.json, taking screenshots, writing a README and changelog, adding a licence file, and going through the rigmarole of signing up to yet another publishing platform.

Version 2 — Taking advantage of a new editor

With the initial request taken care of (thanks for the review, Matt), I decided to take advantage of the extra features VS Code provided. Sublime Text is a text editor, while VS Code is an IDE, so there are many more things that can be done.

This also provided a good excuse for me to try using TypeScript. VS Code is written using TypeScript and has first-class support for it, while the extension generator can also handily create all the files and configuration needed to get started.

The first step was to turn my simple config-based extension into something that could hook into the VS Code APIs and lifecycle. I re-ran the extension generator in a separate directory to create a default TypeScript extension, then copied over the files I didn’t already have in my project.

Rather than trying to do everything at once, I broke down the tasks into manageable chunks. I knew that I’d probably have to rewrite a fair amount of code by the end of it, but it helped me stay on track (and not get overwhelmed to the point of giving up). I’ll just highlight the main points here, rather than going into full super-technical details of each step (because that’s what a commit history is for).

Get something working

“Make it work, make it right, make it fast.”

The first step when venturing into new territory like this (new APIs and new syntax in TypeScript) is to follow the documentation and get the smallest, simplest thing working. Quickly getting to a state of “yep, that works” is the best motivating factor. If it takes a long time to get anything working, that doesn’t bode well for the rest of the project.

With that in mind, I whipped up the quickest working implementation I could. A command to “Go to Symbol in File” would parse the currently-focused file, extract the names of any Link, Rule, or Zone declarations, and return them as a list of symbol objects.

But first, the documentation says there are two different models for creating a language extension:

  • In-process — everything is handled within the extension process.
  • Client/server — the extension process just sends commands to a different process, and receives data back.

Uh oh, an architectural decision to make already! Luckily, in this case, the decision turned out to be an easy one. The client/server model is best for programming languages that have their own runtime, where the JS-run extension calls out to a server written in another language (Go, Ruby, Python, etc.). My extension, on the other hand, is for a “language” that is just specially-structured data with a whole lot of comments. In this way it’s similar to writing an extension for Markdown or SQL files.

So I followed the documentation examples, hooked up the simple text parser and… huzzah it worked!
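Stripped of the VS Code plumbing, that first parsing step can be sketched in plain JavaScript. This is a simplified, hypothetical version: the real extension wraps results in vscode.SymbolInformation objects, and this sketch naively takes the second field of every line (which, for Link lines, is the link target rather than the alias):

```javascript
// Extract the names of Zone, Rule, and Link declarations from tzdb source text.
function parseSymbols(text) {
  var symbols = [];
  text.split('\n').forEach(function (line, lineNumber) {
    var match = /^(Zone|Rule|Link)\s+(\S+)/.exec(line);
    if (match) {
      symbols.push({ type: match[1], name: match[2], line: lineNumber });
    }
  });
  return symbols;
}

var sample = [
  '# Comment lines are ignored',
  'Rule\tAN\t1971\t1985\t-\tOct\tlastSun\t2:00\t1:00\tD',
  'Zone\tAustralia/Sydney\t10:04:52\t-\tLMT\t1895 Feb',
  'Link\tAustralia/Sydney\tAustralia/ACT'
].join('\n');

console.log(parseSymbols(sample).map(function (s) { return s.name; }));
// → ['AN', 'Australia/Sydney', 'Australia/Sydney']
```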

A screenshot of the "Go to Symbol in File" feature


“Now that I can get symbols for one file, it should be easy enough to get them for all the tz source files in the workspace,” thought Past Gil. Of course, with the benefit of Current Gil’s hindsight, Past Gil was naïve and should have foreseen that there would be problems. Silly Past Gil.

Overall, the documentation for writing VS Code extensions is very good, and tells me everything I need to know. However, trying to complete this particular task was the one area where I felt lost. The API docs didn’t quite explain the difference between a few seemingly-similar APIs, and one method didn’t work the way I expected it to based on its name. I tried looking for an example of what I was trying to do in the vscode-extension-samples repository, but all the language extension examples were for the client/server model, not the single-process model I was using.

I saved some notes about “things that could be improved in the docs” and persevered. Between confusion about which APIs to use and uncertainty about how I wanted this feature to work, I ended up changing the file structure a few times.

Some basic diagrams describing multiple attempts at structuring file includes. It took 4 attempts to get the right structure.

Eventually I got it working, and I was now able to quickly search for a zone definition across the whole tz directory. Double huzzah!

A screenshot of the "Go to Symbol in Workspace" feature

Shiny and chrome

Having got the big chunk of confusion out of the way, the rest was pretty smooth. I had to make the text file parsing smarter to find all references from one symbol to another, which then allowed for more features:

  • Click-through navigation for going to the definition of a Zone or Rule. This also brought in an inline preview when hovering, absolutely free of charge (i.e. I didn’t have to do anything, it Just Worked™).
  • Find all references to a Zone or Rule, which is really just the reverse operation of “go to definition”.
  • Smarter text matching when searching for symbols in a workspace.
  • Hooking into document events to re-parse a changed document. This ensures that symbol references are correct even for unsaved changes.

Async I can

While looking for examples of other extensions, I realised that TypeScript would allow me to use the async/await syntax without adding any extra build processes. Since most of the VS Code APIs already use promises, it was fairly straightforward to convert to use async/await. The only real trouble I had was working out exactly where to put the async and await keywords when using Promise.all(). Once that was sorted, the code certainly did look neater and easier to follow. I’ll definitely be trying to use async/await more in future where I can.
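As an illustration of the pattern (parseDocument here is a made-up stand-in for the extension’s real parser, not its actual API):

```javascript
// A stand-in async "parser" — the real one reads and parses a tzdb file.
async function parseDocument(name) {
  return { name: name, symbols: [] };
}

async function parseAllDocuments(names) {
  // The await goes in front of Promise.all(), not inside the map() callback.
  const results = await Promise.all(names.map((name) => parseDocument(name)));
  return results.length;
}

parseAllDocuments(['africa', 'australasia']).then((count) => {
  console.log(count); // → 2
});
```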

After those improvements, it was time for another round of cleanup: remove all the debug logging, update the README file, and add more screenshots (often the most time-consuming part of publishing a project). And lo, version 2 was done.

Campsite rule

One thing I try to abide by when working with any external system (browser, IDE, someone else’s open source code, etc.) is the Campsite Rule: “Leave the campsite cleaner than you found it.”

As mentioned earlier, while I was building the extension I made a note of anything I thought could be improved about the experience. Once the extension was finished, I started raising issues and pull requests in VS Code-related repositories. I doubt that they’ll all get fixed, because there are priorities to be maintained on a large project, but at least I can say I did my bit to try to make it better for the next person.

Here’s a list of all the issues and pull requests I created for various VS Code projects:

In the end, I’m not too fussed about the future popularity of this extension. It was designed for a niche audience (i.e. people who read and navigate the source files of the IANA time zone database), and will get limited use. But it was a great learning opportunity for me, and I had fun making it. When my free time for coding is restricted to my morning train trip only, that’s mostly what counts.

New projects: Emoji Censor and Numberwang

I’ve generally been rather lax when it comes to writing about side projects I’ve done. I’ll finish the project, mention it on Twitter a couple of times, then… well, that’s about it. Not a particularly effective way to do promotion of ideas.

Time to do something about it. From now on, I’m going to attempt to write a proper explanation here for each side project I complete. Even if no-one else reads it, at least I’ve created some historical documentation for my future self.

Take me down to Distraction City, where the… oooh shiny!

I have a history of talking with people at tech user groups and meetups and coming up with ridiculous ideas for side projects. Mostly the ideas go no further than that because, well, they’re ridiculous (and taking a joke too far often ruins the humour). Sometimes, though, the idea is just too tempting to leave alone.

The first of these ideas came from a SydCSS meetup:

For the rest of the evening, a friend and I would whisper “that’s Numberwang!” whenever a bunch of numbers appeared on a presentation slide. Yes, we’re far too easily amused.

Wait, what the hell is Numberwang?

Numberwang started as a series of sketches on the TV show That Mitchell and Webb Look. The basic premise is a number-focused game show that makes no sense to viewers, but everyone on the show knows exactly what’s happening. I highly recommend watching the full collection of sketches.

The next morning on the train I whipped up a super-quick prototype of a browser extension that would randomly exclaim “That’s Numberwang!” when you typed a number on a web page. I let it linger for a few months, then another conversation spurred me to write it properly. Here’s a video of it in action:

It works in Chrome and Firefox, but I’m not planning on publishing it to the Chrome Web Store or Mozilla Add-ons any time soon, because jokes have limits. I was running it in my day-to-day browser for a while to see if it became really annoying. In the end, it felt like a form of Russian Roulette (albeit one with far less severe consequences). Every time the notification popped up, I’d wonder if the next one would be the event that triggered a full-page rotating animated GIF while I was using my laptop on a crowded train.

Still, the value of even silly and frivolous side projects is in learning new things. In this particular case I learned:

  • How to use browser notifications.
  • How much compatibility there is these days between Chrome and Firefox extension APIs, thanks to the WebExtensions initiative.
  • How to efficiently parse and cache all numbers found in an input, so that only changed numbers would trigger the notification (hooray for ES6 Maps and their ability to store DOM elements as keys).
  • How many people in my immediate social network do or do not know about Numberwang.
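The Map-based caching in the third point can be sketched like this (a minimal illustration, not the extension’s actual code — a plain object stands in for a DOM element here):

```javascript
// A Map keyed by (DOM) element, storing the numbers last seen in it,
// so only newly-appearing numbers trigger a notification.
const lastSeen = new Map();

function changedNumbers(elem, text) {
  const numbers = text.match(/\d+/g) || [];
  const previous = lastSeen.get(elem) || [];
  lastSeen.set(elem, numbers);
  return numbers.filter((n) => previous.indexOf(n) === -1);
}

const input = {}; // stands in for a DOM element
console.log(changedNumbers(input, 'call 555 before 6pm')); // → ['555', '6']
console.log(changedNumbers(input, 'call 555 before 7pm')); // → ['7']
```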

Completely worth it.

My ❤️/👿 relationship with emoji

Those who know me well (or even for an hour) know that I’m a grumpy old man on the inside. Emoji characters are just one of many topics I could rant about for a while, but that’s a post for another day. So when Ben Buchanan tweeted this…

…I knew he was on to a winning idea.

I had a tremendous amount of fun working with the Web Speech API and Web Audio API to produce a prototype, then whipped up a quick site to allow anyone to play with it.

The result is Emoji Censor, which will also redact (black out) the emoji characters visually, to match the audio censorship.

Screenshot of Emoji Censor site

Once you get into the mindset of every emoji being mentally swapped for a censorship bleep, social media sites become much funnier to read. (Especially those incredibly condescending “clappy” tweets THAT 👏 LOOK 👏 LIKE 👏 THIS.)

For an extra piece of fun, I then integrated Monica Dinculescu’s emoji-translate library. This way you can convert a bunch of English words to their approximate emoji equivalents, which also get censored.

Although this was yet another distraction from what I was meant to be doing, I still managed to learn:

  • How to use speech synthesis in browsers.
  • The history and commonly-used audio frequency of censorship bleeps.
  • What defines an “emoji character” (hint: like everything to do with languages, it’s complicated).
  • Even on silly side projects, you can still end up making valuable contributions to open source libraries.
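As a taste of the “it’s complicated” point: modern JavaScript can at least approximate emoji detection with Unicode property escapes, though the full definition in Unicode’s emoji spec (UTS #51) involves much more (sequences, skin-tone modifiers, variation selectors). A rough sketch, not what Emoji Censor actually uses:

```javascript
// Count pictographic characters — a crude approximation of "emoji".
function countEmoji(text) {
  const matches = text.match(/\p{Extended_Pictographic}/gu);
  return matches ? matches.length : 0;
}

console.log(countEmoji('THAT 👏 LOOK 👏 LIKE 👏 THIS')); // → 3
console.log(countEmoji('no emoji here'));               // → 0
```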


After this blog post was first published, I got to present a 5-minute lightning talk at the SydJS meetup about building Emoji Censor. It was even recorded, which is unusual for my 5-minute rants:

After that talk, I had enough people ask me about making an Emoji Censor browser extension that I figured it was worth doing. So, live redacting of emoji characters as you browse the web is now available as an extension for Chrome and Firefox. Enjoy!

Embrace the ridiculous

During these projects, I was ruminating on a chat I had a while ago with Tim Holman. It was innocuous enough that I doubt he remembers it, but it stuck with me nonetheless. I used to feel a bit guilty about making silly projects like these—like somehow I was wasting time that should be used for “proper” projects. Tim indirectly taught me to embrace the whimsical distractions that inevitably pop up (as you’d expect from the author of elevator.js, console.frog, and BSOD.js).

There’s still enormous value in making things that have no real practical use. Not only do you find yourself learning new and unexpected things, but you end up feeling more refreshed and willing to go back to “serious” ideas. A palate cleanser for the mind, in effect. Inspired by Tim, I followed through on these two ideas. I’ve also made all my frivolous projects the first things listed on my coding portfolio site at

To finish off this post, here’s one final thought that only occurred to me while writing it. Emoji Censor changes the length of the audio bleep for combination sequences. For example, the sequence of {U+1F469 WOMAN} {U+200D ZERO-WIDTH JOINER} {U+1F467 GIRL} {U+200D ZERO-WIDTH JOINER} {U+1F466 BOY} (or 👩{ZWJ}👧{ZWJ}👦) is designed to be rendered as one single glyph: 👩‍👧‍👦 (whether it displays as 1 or 3 glyphs depends on your device). Emoji Censor will play this sequence as a bleep about 3 times longer than normal. Theoretically you could craft a string of emoji together that produce either short or long bleeps and create a hidden message in Morse code. I’ll leave that as an exercise for the reader.
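That bleep-length idea boils down to counting the pictographic parts of a ZWJ sequence — a minimal sketch (not Emoji Censor’s actual implementation):

```javascript
// U+200D is the zero-width joiner that glues emoji into one glyph.
function bleepUnits(sequence) {
  return sequence.split('\u200D').length;
}

console.log(bleepUnits('👩\u200D👧\u200D👦')); // → 3 (woman, girl, boy)
console.log(bleepUnits('👏'));                // → 1
```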

Strip analytics URL parameters before bookmarking

Have you ever found yourself bookmarking a website URL that contains a heap of tracking parameters in the URL? Generally they’re Google Analytics campaign referral data (e.g.

I use browser bookmarks a lot, mostly as a collection of references around particular topics. The primary source for most of my web development-related bookmarks is a variety of regular email newsletters. My current list of subscriptions is:

What many of these have in common is the addition of Google Analytics tracking parameters to the URL. This is great for the authors of the content, as they have the option to see where spikes in traffic come from. When I’m saving a link for later, though, I don’t want to keep any of the extra URL cruft. It would also be giving false tracking results if I opened a link with that URL data a year or two later.

So I wrote a quick snippet to strip any Google Analytics tracking parameters from a URL. I can run this just before bookmarking the link:

/**
 * Strip out Google Analytics query parameters from the current URL.
 * Makes bookmarking canonical links easier.
 */
(function () {
    var curSearch = location.search;
    if (!curSearch) {
        return;
    }
    var curParams = curSearch.slice(1).split('&');
    console.log('Stripping query parameters:', curParams);
    var newParams = curParams.filter(function (param) {
        return param.substr(0, 4) !== 'utm_';
    });
    if (newParams.length === curParams.length) {
        return;
    }
    var newSearch = newParams.join('&');
    if (newSearch) {
        newSearch = '?' + newSearch;
    }
    var newUrl = location.href.replace(curSearch, newSearch);
    history.replaceState({}, document.title, newUrl);
})();

I have saved this snippet in my browser’s devtools. See for more details of how these work.

However, some people prefer to use bookmarklets, without fiddling around in browser devtools. I’ve converted the above snippet into a bookmarklet as well. If you’re using a desktop browser, you can drag the following link to your bookmarks area to save it:

Strip GA URL params

Animated demonstration of using the bookmarklet link
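As an aside: on current browsers the same cleanup can be written more compactly with the URL API, which didn’t have wide support when this snippet was written. A sketch:

```javascript
// Remove any utm_* query parameters from a URL string.
function stripUtmParams(href) {
  const url = new URL(href);
  // Copy the keys first, since deleting while iterating would skip entries.
  [...url.searchParams.keys()]
    .filter((key) => key.startsWith('utm_'))
    .forEach((key) => url.searchParams.delete(key));
  return url.toString();
}

console.log(stripUtmParams('https://example.com/post?utm_source=news&id=42'));
// → 'https://example.com/post?id=42'
```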

From little things big things grow

It’s nearly the end of 2014 — time for a little bit of (perhaps clichéd) reflection on the year that was. Sometimes you look back on a sequence of events in your life and can trace them back to a single catalyst. This particular story starts with a tweet:

This strikes a chord with ideas I’d been vaguely mulling over, and a quick conversation ensues:

And so begins an unexpected journey.

Little things

I put together a talk proposal for CSSConfAU about better browser devtools, especially support for CSS3 features such as transforms and animations. I didn’t expect it to be accepted, given that I’d fully admitted it was all ideas and prototypes, not finished features. I also submitted a talk proposal for JSConfAU, just to hedge my bets.

I started a little bit of work on a single prototype devtools extension, but didn’t take it very far. Then I was notified that my proposal had made it into the second round. Although I don’t like it, I’m fully aware that I’m at my most prolific and productive when there’s a looming deadline involved. I had one month to complete something that could be presented. PANIC!

A couple of weeks later I was notified on consecutive days that I wouldn’t be presenting at CSSConfAU, but I would be presenting at JSConfAU. Now it was panic of a different kind as I switched contexts completely and worked on a presentation in Keynote. The end result of JSConfAU is a story for another post, but I find it interesting how neatly my GitHub activity log sums up the preparation work.


Scratch an itch

After the dust had settled from JSConfAU and I’d taken a short break from doing anything related to coding (as shown in the picture above), I felt like continuing what I’d started with the devtools prototypes. A bit of playing around with ideas led to not much output, so a switch in tactics was called for.

It has been observed (and studied) that writing or talking about an idea stimulates different parts of the brain than abstract thinking does, creating connections and reaching conclusions that might not have been reached had you just kept the idea in your head. (This is also why “rubber duck debugging” works so often.)

Somewhere in the back of my mind I knew there was something I found not quite right about how browser devtools worked, but I couldn’t pick exactly what it was. So I wrote words, not code. I wrote and wrote and wrote, and while writing I hit the proverbial nail on the head. A bigger theory had been found.

In the end I kept the Dr. Seuss analogy because it worked so well. (Incidentally, after enquiring with the legal team at Random House, I now know the attribution required when writing a blog post that quotes a Dr. Seuss book. Chalk up another item on the list of “Things I unexpectedly learned by being a web developer.”) I worked on it, cleaned it up, edited some more, and was finally happy with it.

Time to go back to coding, but this time I had a vision and something to aim for. Thanks to writing a draft blog, I had rich descriptions of what the ideas should look like, so all that was left was to build things as quickly as possible. After that came the almost endless preparation of screenshots, YouTube videos and animated GIFs, as well as creating a heap of feature requests for Chrome and Firefox, but finally I was ready to publish and promote.

Big things

Putting a heap of effort into something like The Sneetches and other DevTools wouldn’t have been worth it if it never reached the people actually responsible for building browser devtools. Over the course of a few days I sent the link to a few key people on Twitter who worked on either Chrome or Firefox devtools in some capacity.

The response was better than I’d hoped for.

I’ve had brief conversations online with various people involved in Chrome and Firefox devtools and got a bit of traction on some feature requests.

Just recently, I was invited to Google Sydney’s office to get a preview of what’s planned for animation support in Chrome’s devtools, and give feedback on what works and what doesn’t. Soon I’ll be meeting someone working on WebKit devtools to discuss ideas in a similar manner.

Regardless of how accurate it may be, I feel like I’ve had at least a small impact on the future of browser devtools, which is pretty damn surprising. If someone had told me a year ago that I’d be in this situation today, frankly I’d have wondered how I’d become trapped in such a trite cliché of retrospective story telling, but I’d be amazed nonetheless.

And it all started with a random Twitter conversation. Thanks, Ben! By coincidence, the Call for Proposals for the next CSSConfAU has just opened up this week…

So here’s to 2015. I have absolutely no clue where it’s going to take me, I’m just going to continue to make it up as I go along and see what happens.


No blog post with this title from an Australian could possibly be complete without including the following song. In fact, I’m pretty sure it’s required by an Australian law somewhere. Honest.

Auto-detecting time zones at JSConfAU

Last week I was lucky enough to present at JSConfAU. My talk was titled “Auto-detecting time zones in JS” (with a sub-title of “Are you a masochist?”).

Part I – Technical details

Taking a tour of terrible temporal twists and turns

A lot of the talk delved into the history of time and time zones, pointing out various oddities and things that can trip you up if you try to deal with them. I won’t repeat all of them here – partly because I doubt anyone would read the whole lot, partly because I just want to focus on the two main points I made.

Compute a DOM element’s effective background colour

While working on a new JS library today, I wanted to create a menu overlay element that automatically has the same background colour as its trigger element.

For some context, the basic structure looks like this:

Basic element structure

Over-the-top colours added to highlight element structure

My first thought was to just use getComputedStyle() and read the backgroundColor property of the trigger. This works only if the trigger element itself has a background style.

Of course, if the background styles are set on a container element higher up the DOM tree, that method is a great big ball of uselessness. The problem is that the trigger element reports its background colour as being transparent (which is correct, but still annoying for my purpose).

My solution was to walk up the DOM tree until an element with a background colour is found:

function getBackgroundColour(elem, limitElem) {
  var bgColour = '';
  var styleElem = elem;
  // Group the alternatives so the ^ and $ anchors apply to both spellings
  var rTrans = /^(?:transparent|rgba\(0, 0, 0, 0\))$/;
  var style;
  limitElem || (limitElem = document.body);
  while (!bgColour && styleElem !== limitElem.parentNode) {
    style = getComputedStyle(styleElem);
    if (!rTrans.test(style.backgroundColor)) {
      bgColour = style.backgroundColor;
    }
    styleElem = styleElem.parentNode;
  }
  return bgColour;
}


  • Different browsers return either “transparent” or “rgba(0, 0, 0, 0)” for a transparent background, so I had to make sure to account for both.
  • The limitElem parameter is there as a short-circuit, in case you only want to check within a certain container element rather than bubbling all the way up to document.body.
  • getComputedStyle() doesn’t work in IE8, but I wasn’t planning on making my library work in IE8 anyway. An equivalent API for that browser is element.currentStyle.
  • This behaviour assumes that there aren’t any funky absolute positioning tricks happening that make the child elements appear outside their parent container.
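As a quick sanity check of that first point, both spellings of a transparent background can be matched with a single anchored regex, as long as the alternatives are grouped so the ^ and $ apply to each:

```javascript
// Matches either browser spelling of a fully transparent background.
var rTrans = /^(?:transparent|rgba\(0, 0, 0, 0\))$/;
console.log(rTrans.test('transparent'));      // → true
console.log(rTrans.test('rgba(0, 0, 0, 0)')); // → true
console.log(rTrans.test('rgb(255, 0, 0)'));   // → false
```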

Why am I blogging about this seemingly simple technique? Because I searched for a solution first and found bugger all, that’s why.

And before anyone asks: No, using jQuery will not help – it just defers to getComputedStyle() anyway.

Webcam tile sliding game

Another WebRTC experiment shown at SydJS

My first experiments with WebRTC were during the early phases of its development, at a time when there were very few demos and references to work from. One of those early demos was Exploding Pixels (part of Opera’s “Shiny Demos” to coincide with the release of Opera 12) which inspired my very first WebRTC experiment.

We all know the old tile-sliding games that are often given out as cheap toys to kids, where you’re presented with a jumbled picture or pattern and need to slide the pieces around one-by-one until the puzzle is complete.
I wanted to build one of those using JavaScript, but with a live webcam feed as the picture.

The first step was to build a simple tile game library that took some basic parameters and handled the interaction and game state.

var tiler = new Tiler('game', {
    width: 640,
    height: 480,
    rows: 3,
    cols: 3
});

Hooking up a webcam

Next came the fun part — integrating with WebRTC. To do this I built the library in such a way that the game could be redrawn at any stage with a different background image. It also accepted a parameter for the background that could be a URL path to an image, an image element or a canvas element.

Then it was a simple matter of hooking up the webcam using navigator.getUserMedia, updating a hidden canvas with the video stream (I didn’t use a video element directly as I wanted to flip the image so it acted like a mirror), then setting it as the game’s background image repeatedly.

var tiler = new Tiler('game', {
    width: 640,
    height: 480,
    rows: 4,
    cols: 4
});
var video = document.getElementById('webcam');
var source = document.getElementById('source');
var ctx = source.getContext('2d');
navigator.getUserMedia({video: true}, function (stream) {
    // A custom helper method to set the webcam stream
    // as the video source - different for each browser
    setStreamSrc(video, stream);
    // Flip the video image so it’s mirrored
    ctx.translate(640, 0);
    ctx.scale(-1, 1);
    (function draw() {
        ctx.drawImage(video, 0, 0);
        // The canvas is then re-applied as the game’s background each frame
        requestAnimationFrame(draw);
    })();
}, function () {
    console.error('Oh noes!');
});

Extra difficulty

Taking advantage of the fact that these aren’t real tiles, I added some extra difficulty by randomly flipping or rotating some tiles during setup. It obviously means that the game needs a “selection mode”, where clicking a tile selects it (at which point it can be flipped or rotated) and a double-click moves it.

var tiler = new Tiler('game', {
    width: 640,
    height: 480,
    rows: 4,
    cols: 4,
    move: 'dblclick',
    flip: true,
    rotate: true
});

This works for both static images and live webcam feeds.



All the code for these demos is on GitHub at

NOTE: The usual caveats apply for these demos (i.e. They were built as quick experiments for a presentation and are not guaranteed to work properly in all browsers). I’ve tested in Chrome, Opera and Firefox as they were the first browsers to enable WebRTC.



About 18 months ago, I had the idea of using JavaScript to take live audio input and make it animate the mouth of a virtual ventriloquist dummy. The original seed of the idea was to play a prank on someone at SydJS. After a bit of research I was disappointed to find out there wasn’t any way to do it. The idea was pushed aside, but never completely forgotten.

Since then, there has been an explosion in new web technologies and standards. The ones that really piqued my interest were the Web Audio API and WebRTC.

The Web Audio API provides fine-grained control over sound processing and manipulation in JS.
WebRTC allows for peer-to-peer video communication between browsers, along with the associated access to webcams and microphones. Bingo.

Time to code

At the time I started playing with WebRTC, the few browsers that had implemented it only supported getting a webcam stream; microphone support was yet to come. I figured I could still work on getting the idea right using an HTML range input.

The first implementation was simple. I found a picture of a ventriloquist dummy online and hard-coded the image and drawing data, then set the position of the dummy’s mouth to be bound to the value of the input.

Then came the part I’d been waiting for: Google Chrome enabled getting microphone data in their Canary build.

All I had to do was get access to the microphone via the navigator.getUserMedia() API, pipe that input into a Web Audio API AnalyserNode, then change the position of the dummy’s mouth based on the maximum audio level detected in the microphone. One final adjustment was made to lock the dummy’s mouth movement to regular “steps”, in order to give it a more old-fashioned wooden feel.
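That “regular steps” adjustment amounts to quantising the 0–1 audio level into a fixed number of mouth positions — something like this (an illustrative sketch, not Chuckles’ actual code):

```javascript
// Snap a continuous 0–1 level to one of `steps` evenly-spaced positions.
function mouthStep(level, steps) {
  return Math.round(level * (steps - 1)) / (steps - 1);
}

console.log(mouthStep(0.42, 5)); // → 0.5
console.log(mouthStep(0.8, 5));  // → 0.75
```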

And thus was born the first demo of the library that has been christened “Chuckles”.


More interaction

While the first version worked well enough, it still required hard-coding of all the data. So I built in some new display modes to make it more interactive:

  • Normal mode, the default state
  • Drawing mode, where you can draw on the image to define where the “mouth” segment is
  • Dragging mode, where you can drag the “mouth” around to set its position when fully open

A quick addition of drag-and-drop for adding your own image and you too can make your own ventriloquist dummy:


Next steps

The code is on GitHub at, and there are a few more things I’d like to do with it at some point.

  • Better definition for the mouth segment (possibly using a border).
  • Allow transforming the mouth segment using rotate, skew, etc.
  • Define eye regions that can move left/right independently of the mouth.

But adding those features all depends on whether I can be convinced the idea is actually useful, rather than just being a throw-away demo.