From little things big things grow

It’s nearly the end of 2014 — time for a little bit of (perhaps clichéd) reflection on the year that was. Sometimes you look back on a sequence of events in your life and can trace them back to a single catalyst. This particular story starts with a tweet:

This strikes a chord with ideas I’d been vaguely mulling over, and a quick conversation ensues:

And so begins an unexpected journey.

Little things

I put together a talk proposal for CSSConfAU about better browser devtools, especially support for CSS3 features such as transforms and animations. I didn’t expect it to be accepted, having fully admitted it was ideas and prototypes, not finished features. I also submitted a talk proposal for JSConfAU, just to hedge my bets.

I started a little bit of work on a single prototype devtools extension, but didn’t take it very far. Then I was notified that my proposal had made it into the second round. Although I don’t like it, I’m fully aware that I’m at my most prolific and productive when there’s a looming deadline involved. I had one month to complete something that could be presented. PANIC!

A couple of weeks later I was notified on consecutive days that I wouldn’t be presenting at CSSConfAU, but I would be presenting at JSConfAU. Now it was panic of a different kind as I switched contexts completely and worked on a presentation in Keynote. The end result of JSConfAU is a story for another post, but I find it interesting how neatly my GitHub activity log sums up the preparation work.

My GitHub activity stats in the lead-up to JSConfAU

Scratch an itch

After the dust had settled from JSConfAU and I’d taken a short break from doing anything related to coding (as shown in the picture above), I felt like continuing what I’d started with the devtools prototypes. A bit of playing around with ideas led to not much output, so a switch in tactics was called for.

Studies have shown that writing or talking about an idea stimulates different neurons in the brain, creating connections and reaching conclusions that might not have been reached had you just stuck with abstract thinking. (This is also why “rubber duck debugging” works so often.)

Somewhere in the back of my mind I knew there was something I found not quite right about how browser devtools worked, but I couldn’t pick exactly what it was. So I wrote words, not code. I wrote and wrote and wrote, and while writing I hit the proverbial nail on the head. A bigger theory had been found.

In the end I kept the Dr. Seuss analogy because it worked so well. (Incidentally, after enquiring with the legal team at Random House, I now know the attribution required when writing a blog post that quotes a Dr. Seuss book. Chalk up another item on the list of “Things I unexpectedly learned by being a web developer.”) I worked on it, cleaned it up, edited some more and was finally happy with it.

Time to go back to coding, but this time I had a vision and something to aim for. Thanks to writing a draft blog, I had rich descriptions of what the ideas should look like, so all that was left was to build things as quickly as possible. After that came the almost endless preparation of screenshots, YouTube videos and animated GIFs, as well as creating a heap of feature requests for Chrome and Firefox, but finally I was ready to publish and promote.

Big things

Putting a heap of effort into something like The Sneetches and other DevTools wasn’t worth it if it never reached the people actually responsible for building browser devtools. Over the course of a few days I sent the link to a few key people on Twitter who worked on either Chrome or Firefox devtools in some capacity.

The response was better than I’d hoped for.

I’ve had brief conversations online with various people involved in Chrome and Firefox devtools and got a bit of traction on some feature requests.

Just recently, I was invited to Google Sydney’s office to get a preview of what’s planned for animation support in Chrome’s devtools, and give feedback on what works and what doesn’t. Soon I’ll be meeting someone working on WebKit devtools to discuss ideas in a similar manner.

Regardless of how accurate it may be, I feel like I’ve had at least a small impact on the future of browser devtools, which is pretty damn surprising. If someone had told me a year ago that I’d be in this situation today, frankly I’d have wondered how I’d become trapped in such a trite cliché of retrospective storytelling, but I’d be amazed nonetheless.

And it all started with a random Twitter conversation. Thanks, Ben! By coincidence, the Call for Proposals for the next CSSConfAU has just opened up this week…

So here’s to 2015. I have absolutely no clue where it’s going to take me; I’m just going to continue to make it up as I go along and see what happens.

P.S.

No blog post with this title from an Australian could possibly be complete without including the following song. In fact, I’m pretty sure it’s required by an Australian law somewhere. Honest.

Web Directions South 2014 in 4 words

An experiment in brevity.

What happens when you combine a conference report with Four Word Film Reviews? An attempt to review the conference – and each of the presentations I saw – in just four words.

WDS14: “Don’t forget we’re human”

  • Matt Webb – Interconnected: “Extend yourself. Start small.”
  • Scott Thomas – Doing Simple. Honest. Work: “Understand first; design later.”
  • Emily Nakashima – The Operable Front End: “Log -> monitor -> fix -> repeat.”
  • Erin Moore – Convenient Fictions: “Real time not ‘realtime’”
  • Mark Dalgleish – A State of Change: “Immutability beats observing mutations.”
  • Julio Cesar Ody – The Loaded Javascript: “Defer, async or other?”
  • Sarah Maddox – Bit Rot in the Docs: “Test docs like code.”
  • Dan Hon – An Internet for Humans, Too: “Reduce the empathy gap.”
  • Genevieve Bell – Being Human in a Digital World: “Behavioural fundamentals don’t change.”
  • Johnny Mack – Building Trust: “Form. Storm. Norm. Perform.”
  • Jonathon Colman – Build Better Content: “Seek clarity, speak human.”
  • Douglas Bowman – A Voice for Everyone: “Positivity of random connections.”
  • Tobias Revell – Haunted Machines: “Technological magicians: Think responsibly.”

Some background context

Two weeks ago I attended Web Directions South 2014. I wanted to take notes so I could write a recap to share internally at work, but I didn’t want to write a wall of text like last year’s recap.

This year I took the approach that I would write some minimal notes after each presentation. This meant that I wasn’t so distracted by writing notes that I stopped absorbing the presentation content. It also gave me time to reflect on the presentation in “down time” and distill the key points. (The only exception to the rule was when I wanted to capture a link or an exact quote.)

When trying to summarise the presentations for the write-up, I forced myself to give a synopsis of each talk using only one sentence. While doing this, I suddenly remembered one of my old favourite sites that I hadn’t looked at in a long time: Four Word Film Reviews.

Inspiration struck, and this post is the result. (Though unfortunately it probably doesn’t make sense to anyone who wasn’t at the conference to begin with.)

Potential browser devtools support for responsive images

Just a quick one as I’m still in the developer tools headspace after The Sneetches and other DevTools.

On the weekend I saw that support for responsive images – via the <picture> element – landed in Chrome 38. Naturally I tried out the test page listed in the article to see how it was put together.

After a short time inspecting the page, I realised that while the Chrome browser has support for <picture>, the Chrome DevTools don’t. When inspecting a <picture> element I found myself completely unable to tell which image was actually showing at any one time.

Obviously this feature is still in its infancy and I’m sure the developer tools will catch up soon. In the meantime, though, I’m going to present my completely-not-asked-for suggestions for Chrome DevTools support of <picture>.

Suggestion #1: Fade out non-matching sources

Within a <picture> element there are multiple sources of images for the browser to choose from. These are provided by <source> elements with an <img> element as the final fallback image (to ensure compatibility with browsers that don’t support <picture>).

The current DevTools display of a <picture> element

My suggestion is to use the technique seen in the Styles panel and fade out any sources that are not being shown, only highlighting the currently-visible source.

DevTools with non-active image sources faded out

In fact, a <source> element can define multiple images in its srcset attribute to support multiple resolutions. This technique could be extended to the srcset attribute as well, to show which specific part of the srcset is currently applied.

DevTools with non-active image sources faded out in the srcset attribute

Of course, the <img> element is the one actually doing the display of the chosen source, so it’s debatable whether it should be faded out. Either way, I’m just playing with ideas here.

Suggestion #2: Hover previews for srcset

<img> elements show a preview of the image when you hover over the src attribute.

Hovering over an image source in DevTools shows a preview

This should also be extended to the srcset attribute of a <source> element, even when it lists multiple image sources.

Hovering over an srcset attribute in DevTools shows a preview

Suggestion #3: Explicitly indicate the currently visible image source

I don’t know if “computed attributes” (my made-up term) are likely to become a thing in devtools, but here is an attempt anyway. The basic idea is to show an attribute on the <img> element that indicates the current source image, as a representation of the currentSrc property.

DevTools with an image's current source shown as a computed attribute

(Clearly it’s not a perfect visual design, but it gets the point across.)
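
Since the suggestion is really just a visual representation of the existing currentSrc property, here’s a rough sketch of the kind of lookup involved – something you can already run in the console on a page that uses <picture> (the matching logic is a simplified heuristic, not necessarily how DevTools would do it):

var img = document.querySelector('picture img');
console.log(img.currentSrc); // Absolute URL of the image actually being displayed

// Work out which <source> candidates match the browser's current choice
var sources = img.parentNode.querySelectorAll('source');
Array.prototype.forEach.call(sources, function (source) {
  var urls = source.srcset.split(',').map(function (candidate) {
    // Each srcset candidate is "URL [descriptor]"; keep just the URL
    return candidate.trim().split(/\s+/)[0];
  });
  var isActive = urls.some(function (url) {
    return new URL(url, document.baseURI).href === img.currentSrc;
  });
  console.log(isActive ? 'active:' : 'inactive:', source.srcset);
});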

Conclusion

I don’t have one, other than saying that using devtools to alter devtools is a surprising amount of fun, and that this mini project was a wonderful distraction from whatever it was I was meant to be doing in my spare time.

The Sneetches and other DevTools

In which I try to distill some of my vapourware thoughts into fluid ideas on improving the guiding principle behind developer tools in browsers.

TL;DR: In the wonderful words of one of my colleagues: “There is no TL;DR – learn to Instapaper.” Some things actually take time to read.

Part I – Pre(r)amble

When it comes to browser developer tools, Chrome and Firefox are currently in a sort of friendly arms race; each innovation from one drives new ideas for the other, and vice versa. This can only be a benefit to web developers everywhere.

This speed of innovation is also necessary in the face of the rapid development of new W3C specs for both CSS and JS. There are new CSS modules and JS APIs popping up all the time. Some of these require new thinking about how to interact with them in devtools. Firefox’s new Web Audio API node inspector is a brilliant example of this.

A little while ago, Mozilla started using UserVoice to collect ideas for ways to improve Firefox’s devtools, which spurred me to write this. I’d been thinking for a few months about ways of improving the devtools to cater for certain new(ish) features of CSS. Along the way I realised that there’s a fundamental assumption in most devtools features that should be challenged.

So, rather than continue to have the ideas just rattling around my head and failing to be developed, it’s time to put them in front of a mass audience.

Continue reading

Auto-detecting time zones at JSConfAU

Last week I was lucky enough to present at JSConfAU. My talk was titled “Auto-detecting time zones in JS” (with a sub-title of “Are you a masochist?”).

Part I – Technical details

Taking a tour of terrible temporal twists and turns

A lot of the talk delved into the history of time and time zones, pointing out various oddities and things that can trip you up if you try to deal with them. I won’t repeat all of them here – partly because I doubt anyone would read the whole lot, partly because I just want to focus on the two main points I made: Continue reading

Compute a DOM element’s effective background colour

While working on a new JS library today, I wanted to create a menu overlay element that automatically has the same background colour as its trigger element.

For some context, the basic structure looks like this:

Basic element structure

Over-the-top colours added to highlight element structure

My first thought was to just use getComputedStyle() (https://developer.mozilla.org/en-US/docs/Web/API/Window.getComputedStyle) and read the backgroundColor property of the trigger. This works only if the trigger element itself has a background style.

Of course, if the background styles are set on a container element higher up the DOM tree, that method is a great big ball of uselessness. The problem is that the trigger element reports its background colour as being transparent (which is correct, but still annoying for my purpose).

My solution was to walk up the DOM tree until an element with a background colour is found:

function getBackgroundColour(elem, limitElem) {
  var bgColour = '';
  var styleElem = elem;
  // Browsers report a transparent background as either the keyword or the rgba() form
  var rTrans = /^(transparent|rgba\(0, 0, 0, 0\))$/;
  var style;
  limitElem || (limitElem = document.body);
  // Walk up the tree until a non-transparent background is found,
  // or the limit element has been checked
  while (!bgColour && styleElem !== limitElem.parentNode) {
    style = getComputedStyle(styleElem);
    if (!rTrans.test(style.backgroundColor)) {
      bgColour = style.backgroundColor;
    }
    styleElem = styleElem.parentNode;
  }
  return bgColour;
}
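
Hooked up to the menu overlay scenario described above, usage boils down to something like this (the selectors here are made up for illustration):

var trigger = document.querySelector('.menu-trigger');
var overlay = document.querySelector('.menu-overlay');
// Fall back to white if no background colour was found up the tree
overlay.style.backgroundColor = getBackgroundColour(trigger) || '#fff';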

Notes

  • Different browsers return either “transparent” or “rgba(0, 0, 0, 0)” for a transparent background, so I had to make sure to account for both.
  • The limitElem parameter is there as a short-circuit, in case you only want to check within a certain container element rather than bubbling all the way up to document.body.
  • getComputedStyle() doesn’t work in IE8, but I wasn’t planning on making my library work in IE8 anyway. An equivalent API for that browser is element.currentStyle.
  • This behaviour assumes that there aren’t any funky absolute positioning tricks happening that make the child elements appear outside their parent container.

Why am I blogging about this seemingly simple technique? Because I searched for a solution first and found bugger all, that’s why.

And before anyone asks: No, using jQuery will not help – it just defers to getComputedStyle() anyway.

Web Directions South 2013 recap

Last week I went to the Web Directions South 2013 conference, my first experience of WDS. Now, within Atlassian, we’d normally rely on WDS old-timer Ben Buchanan to do a great big write-up of the conference each year.

Unfortunately Ben left Atlassian earlier this year, so I took some notes of my own during the presentations, in order to post a recap blog within Atlassian. Then I remembered I have this blog as well, and in the spirit of “The Road to Hell…” I figured it would be worth re-posting the content.

For reference, Ben’s traditional “big stonking post” summary of WDS is at http://weblog.200ok.com.au/2013/10/wds13-big-stonking-post.html

Other than the keynotes, the conference was split into two tracks: Development and Design. Obviously no-one can see all the presentations, so I just wrote up what I saw.

Any quotes I’ve listed are based on my memory, and therefore might not be exactly right, but the message is the same. I apologise to anyone I may have misrepresented.

Day 1

The opening credits this year were done by Small Multiples. A glowing blob travelled across a dynamically generated landscape littered with the speakers’ names, using WebGL. The credits are detailed (and can be played) at http://www.south.im/; during the breaks between talks people could play with an interactive version where they could control a heap of landscape/lighting settings.

Opening keynote – Rachel Binx: People, Not Users

I didn’t take many notes from this one, mostly because I forgot to.

  • Described the differences between the “prescribed self” (single identity; Facebook, Google+) and the “flexible self” (multiple personas; Twitter, Blogger)
  • Highlighted some quotes from Zuckerberg and Brin talking about the death of privacy, and pointed out that only incredibly sheltered and privileged people could make those statements

Twitter: @rachelbinx

Good quotes:

“Facebook is a game of whack-a-mole with privacy settings”

Andrew Betts: Conquering the Uncanny Valley

Andrew is from FT Labs, the Financial Times division responsible for experimenting with new web tech, which has put out several interesting and useful JS libraries (like FastClick, currently used in Confluence Mobile and JIRA Mobile).

I took a lot of notes for this one, but won’t dump them all here (but if anyone’s interested I can put them somewhere else).

  • The 3 key things for mobile web apps:
    • Keep each frame of a transition to 16ms or less for a smooth (60fps) frame rate
    • No pauses more than 100ms – anything under 100ms feels instantaneous
    • Matching expectations of native apps – don’t venture into Uncanny Valley territory
  • Network performance – other than the already-known cost of making multiple network requests on a mobile device, he also pointed out that the speed of a network request is dependent on what the CPU is doing at the time, and whether the radio antenna is in an active or dormant state.
  • They wrote all their REST APIs to handle being called in a batch, then on the client their API wrapper transparently auto-batched any API requests made within a certain time period and only sent off one network request (see the sketch after this list).
  • Typically 70-95% of web page data is images, so optimising them is especially important. Their testing indicated that only loading one high-resolution, highly compressed image and scaling it down was better than loading different images for different resolutions. “Image decoding is probably the most expensive thing you ask the browser to do when your page loads.”
  • Rendering – various techniques to improve rendering performance, including disabling any hover effects when scrolling a page. Perf hits can also come from unlikely sources (like animation of one element being slowed by box-shadow on an unrelated element).
  • They wrote FastDOM to do async DOM reads/writes and batch them up, using requestAnimationFrame
  • Storing data – There are too many options, each with pros/cons. He came up with a brilliant “dysfunctional family” metaphor to describe the difference between cookies, localStorage, IndexedDB, AppCache and the Files API
    • Mobile devices have limited storage space, so they halved the storage requirements of their strings by doing a clever trick that relies on JS strings being UTF-16 (well, UCS-2, technically)
  • Perception – When you can’t make it any faster… make it seem faster
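
The auto-batching approach is easier to picture with a bit of code. This is only a rough sketch of the general pattern – not FT Labs’ actual implementation – with a made-up /api/batch endpoint and an arbitrary 50ms batching window:

var pendingCalls = [];
var flushTimer = null;

function apiCall(path, params) {
  // Queue the call and return a promise that resolves when the batch response arrives
  return new Promise(function (resolve, reject) {
    pendingCalls.push({ path: path, params: params, resolve: resolve, reject: reject });
    if (!flushTimer) {
      flushTimer = setTimeout(flushBatch, 50);
    }
  });
}

function flushBatch() {
  var batch = pendingCalls;
  pendingCalls = [];
  flushTimer = null;
  // One network request carrying every queued call
  fetch('/api/batch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(batch.map(function (call) {
      return { path: call.path, params: call.params };
    }))
  }).then(function (response) {
    return response.json();
  }).then(function (results) {
    // Assumes the endpoint returns one result per call, in the same order
    results.forEach(function (result, i) {
      batch[i].resolve(result);
    });
  }, function (err) {
    batch.forEach(function (call) { call.reject(err); });
  });
}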

Twitter: @triblondon / @ftlabs

Good quotes:

“Financial Times released our first mobile app in 1888 – back then we called it a newspaper”

“We need to care about supporting existing features as much as creating new ones”

“The iPad FT app is a compromise between the ideal vision and the technical constraints given to us, while trying to avoid more constraints”

John Allsopp: Animating web content with CSS transitions, animations and transforms

A great start-to-finish walk-through of how animations work in CSS, building up to a re-creation of iOS 7 Safari’s 3D tab browser.

Once again John proved what a great explainer and educator he is for web content, which is to be expected from one of the conference organisers.

Twitter: @johnallsopp

Pasquale d’Silva: Stiff and Static Sucks

Excellent breakdown of the principles of animation and how they should be used in interfaces. Pasquale is a designer and animator who was trained in classic animation (the hand-drawn type).

He wrote a brilliant article a little while ago on “Transitional Interfaces” which got him invited to speak at this conference, and he didn’t disappoint. A remarkably comfortable and humorous speaker for someone who had never done a public presentation before.

His key points to remember about animation in interfaces:

  • Static interfaces suck
  • Animation is a clue
  • Great animation feels invisible
  • Learn from the classics

A lot of the talk was showing examples of sites/apps that do animation well, those that don’t do it well at all, and how they can be improved.

Twitter: @pasql

Good quotes:

“Use animations between states to avoid merge conflicts in our brains”

Twitter’s loading of new tweets is “a slap to the eyeballs”

Quartz Composer is “like trying to draw with a microwave”

“I wanted to start a site like Dribbble but for animators; only instead of circle jerking over 400×300 flat images, it would actually be useful”

Ryan Seddon: Flexbox

This started a round of 3 shorter talks in the development track, so I didn’t have many notes.

Good overview of the basics of flexbox, the different specs and decent real-world use cases. Rather than hyping up something just because it’s new, he advocated use in moderation. Use it for small modules, but don’t overdo it (“use it where it makes sense”) – still use floats or inline-block as needed.

Fiona Chan: Oh No! Spaghetti Code!

A CSS-focused talk about splitting up old, bloated, messy codebases into neat, modular components. Fiona has had to do this at several different jobs and has gained a fair bit of insight into how to Get It Done.

Key points:

  • Build the simple components first – find commonalities and abstract from the start
  • Good naming for components / classes is important – make them as generic as possible, but not so generic that they’re unusable like “box1” (try “box-feature” instead)
  • Make components “just work” – don’t make developers resort to clearfixes
  • Namespacing using SASS/LESS nesting can be good, but it’s better to just include the namespace in the class name itself
  • The most important thing is communication. Have a code standard within a team and write a living style guide.

Jared Wyles: CSS – (Finally) Making the Web a Less Blocky Place

Jared hates CSS, but he likes that there are now tools available that can help us avoid writing hacks. He showed off two CSS features that have been created by Adobe: Regions and Shapes.

Regions allow you to write content in one element, but have it flow into multiple defined regions made up of other elements. This makes it possible to do true magazine-style layouts where text content flows into multiple columns. It’s supported in the latest versions of Safari because Adobe have been putting a lot of work into submitting patches to WebKit. There’s also a JS API so that you can query named regions.

Shapes allow you to define non-rectangular content areas using basic polygons, which is something that CSS has needed for a long time. It’s early days so far, but looking promising.

Twitter: @rioter

Good quotes:

“While we’ve been struggling with these basic layout concepts, the print industry has been laughing at us.”

Closing keynote – Maciej Cegłowski: Barely Succeed – It’s Easier!

Words can not do justice to the humour of this talk. Maciej is the creator and maintainer of http://pinboard.in/ and spent a large part of the presentation bagging out the bullshit of startup culture, using bizarre slides about animal parasite lifecycles.

Key points:

  • Startups are constantly trying to disrupt the publishing industry and the film industry and the record industry, but they’re using the same broken business models and not realising it. They’re not disrupting the business of running a business.
  • There’s a different model to follow: Barely Succeed. A lone operator charging a reasonable fee for a high quality but narrow-scope service. Keep control and be free to change things to maintain the vision.

Twitter: @baconmeteor

Good quotes:

“Startup culture is rotting from the inside”

“You too can find success within your mildest dreams”

“I’m a Slav. Slavs believe the world is misery and pain. This worldview makes it difficult to be a motivational speaker.”


Day 2

Opening keynote – Scott Jenson: Beyond Mobile, Beyond Web

A wonderfully inspiring talk about how to think for the future and stop using “default thinking” when trying to come up with future devices. Scott created the first Human Interface Guidelines at Apple and this talk is worth watching when the video gets posted online.

Key points:

  • Stop thinking everything has to be/have an app.
  • When making things “just work,” beware of false positives.
  • Don’t forget that “smart devices” don’t have to mean putting a touch screen on your toaster. Things can be “barely smart” – e.g. broadcast a simple URL via bluetooth that points you to a support page for that specific model.
  • When coming up with something new, closed and proprietary will win… at first. Then open and shared will come roaring past and take over. It’s happened before, it will happen again.
  • Products and features should be evaluated by the golden rule of Value > Pain
    • If value goes up, but pain doesn’t, it’s a win
    • If value stays the same, but pain reduces, it’s a win
  • Forget responsive design – imagine if you built a website designed to work on 2 or 3 different screens at the same time

Twitter: @scottjenson

Good quotes:

“I’m not going to play World of Warcraft on my toaster”

“You know what QR codes are called, don’t you? Robot barf.”

“There is no ‘Cloud’; there are [proprietary] Clouds and they don’t like each other”

“We evaluate tomorrow’s tech by how well it handles yesterday’s tasks”

“Everyone wants innovation, but no one wants risk.”

Troy Hunt: Hack Yourself First

Decent overview of using security tools to try to break your own sites/apps before letting the public do it. He used various techniques to highlight just how easy it is to get free airport lounge wifi, get a free credit card number, and do man-in-the-middle attacks.

Aarron Walter: Connected UX

An overview of how the people at MailChimp collated all their disconnected fragments of user feedback, stats and research into one giant repository of information using Evernote. By careful use of notebooks and tags (they tagged by feature as well as personas) they were able to gain new insights that they couldn’t have seen before, when the data wasn’t all in one place.

They got every department of the company to start sending data to a shared Evernote account, automating it as much as possible. They even took screenshots of Survey Monkey charts and relied on Evernote’s OCR to convert them into searchable text fragments.

The other problem they had was at the other end of the process. So instead of writing 40-page research documents that would be read by no-one, Aarron teamed up with a video specialist to create 2-minute videos detailing the research. Suddenly everyone in the company was watching them.

Chris Lienert: Validating Forms with the HTML5 Pattern Attribute

Another batch of short talks in the dev track.

A basic overview of the different options available in the HTML5 forms spec, and validating fields with regular expressions in the pattern attribute.

Twitter: @cliener

Good quotes:

“I’ve just noticed that the required attribute doesn’t work any more in Safari 7, because Apple hates people”

Mark Dalgleish: Web Components

The key message here was that if you’ve built for the web, you’re already an expert in Web Components.

Simon Elvery: Responsive Images

A quick look at options for loading different images for different screen sizes. The end result was that there are no good implementations, and a lot of arguments over the “best way”.

He has also created a “choose your own adventure” site to choose which image loading technique will work best for your situation.

Adam Ahmed: I Yield To Generators

An overview of generator functions that are coming to JS in ES6. IMO much more clear and understandable than the 3 (yes, really) separate lightning talks on the subject at SydJS only two days earlier.

Patrick Catanzariti: JavaScript Beyond the Web Page

Some quick demos of using Ninja Blocks to control hardware via REST APIs. Won the unofficial prize for Best Prop for the use of a bubble generator that blew bubbles whenever he spoke.

Glen Maddern: The Z Dimension

This started off with a quick discussion on how browsers render page elements, and in what order. But rather than focus on the complicated rules (“I don’t expect anyone to remember that spec”), he focused on how to use Chrome dev tools to debug layout/stacking problems.

Some of the experimental features of Chrome dev tools are fantastic. The bit that astounded everyone was being able to analyse page layouts and paints frame-by-frame, with a replay tool that also gives you an interactive 3D view of the paint area at that point in time.

Twitter: @glenmaddern

Good quotes:

“There are no Layers, just Order”

Closing keynote – Heather Gold: Nerd, Know Thyself

Part presentation, part audience interviews. Key points:

  • She reminded us that we are all humans with emotions, and it’s ridiculous to expect people to switch off their emotions from 9 to 5 while at work.
  • You can’t expect users of your product to care about it if you don’t care yourself.
  • Reading a room in stand-up comedy is the same skillset as reading the mood of your users – there may not be direct signals, just intuition.

Twitter: @heathr

Good quotes:

“We could be just as anti-social without the web…the problem isn’t the Internet, it’s us.”

“Everyone wants to talk about communities, platforms, but no one wants to talk about why anyone would care.”

Webcam tile sliding game

Another WebRTC experiment shown at SydJS

My first experiments with WebRTC were during the early phases of its development, at a time when there were very few demos and references to work from. One of those early demos was Exploding Pixels (part of Opera’s “Shiny Demos” to coincide with the release of Opera 12) which inspired my very first WebRTC experiment.

We all know the old tile-sliding games that are often given out as cheap toys to kids, where you’re presented with a jumbled picture or pattern and need to slide the pieces around one-by-one until the puzzle is complete.
I wanted to build one of those using JavaScript, but with a live webcam feed as the picture.

The first step was to build a simple tile game library that took some basic parameters and handled the interaction and game state.

var tiler = new Tiler('game', {
  width: 640,
  height: 480,
  rows: 3,
  cols: 3
});
tiler.setSrc('../common/img/catimov.jpg');

VIEW BASIC DEMO

Hooking up a webcam

Next came the fun part — integrating with WebRTC. To do this I built the library in such a way that the game could be redrawn at any stage with a different background image. It also accepted a parameter for the background that could be a URL path to an image, an image element or a canvas element.

Then it was a simple matter of hooking up the webcam using navigator.getUserMedia, updating a hidden canvas with the video stream (I didn’t use a video element directly as I wanted to flip the image so it acted like a mirror), then setting it as the game’s background image repeatedly.

var tiler = new Tiler('game', {
  width: 640,
  height: 480,
  rows: 4,
  cols: 4
});

var video = document.getElementById('webcam');
var source = document.getElementById('source');
var ctx = source.getContext('2d');
navigator.getUserMedia({video: true}, function (stream) {
  // A custom helper method to set the webcam stream
  // as the video source - different for each browser
  setStreamSrc(video, stream);
  // Flip the video image so it’s mirrored
  ctx.translate(640, 0);
  ctx.scale(-1, 1);
  (function draw() {
    ctx.drawImage(video, 0, 0);
    tiler.setSrc(source);
    requestAnimationFrame(draw);
  })();
}, function () {
  console.error('Oh noes!');
});

VIEW WEBCAM DEMO

Extra difficulty

Taking advantage of the fact that these aren’t real tiles, I added some extra difficulty by randomly flipping or rotating some tiles during setup. It obviously means that the game needs a “selection mode”, where clicking a tile selects it (at which point it can be flipped or rotated) and a double-click moves it.

var tiler = new Tiler('game', {
  width: 640,
  height: 480,
  rows: 4,
  cols: 4,
  move: 'dblclick',
  flip: true,
  rotate: true
});

This works for both static images and live webcam feeds.

VIEW STATIC IMAGE DEMO

VIEW WEBCAM DEMO

All the code for these demos is on GitHub at https://github.com/gilmoreorless/experiments/tree/gh-pages/tiles

NOTE: The usual caveats apply for these demos (i.e. They were built as quick experiments for a presentation and are not guaranteed to work properly in all browsers). I’ve tested in Chrome, Opera and Firefox as they were the first browsers to enable WebRTC.

Chuckles

Background

About 18 months ago, I had the idea of using JavaScript to take live audio input and make it animate the mouth of a virtual ventriloquist dummy. The original seed of the idea was to play a prank on someone at SydJS. After a bit of research I was disappointed to find out there wasn’t any way to do it. The idea was pushed aside, but never completely forgotten.

Since then, there has been an explosion in new web technologies and standards. The ones that really piqued my interest were the Web Audio API and WebRTC.

The Web Audio API provides fine-grained control over sound processing and manipulation in JS.
WebRTC allows for peer-to-peer video communication between browsers, along with the associated access to webcams and microphones. Bingo.

Time to code

At the time I started playing with WebRTC, the few browsers that had implemented it only supported getting a webcam stream; microphone support was yet to come. I figured I could still work on getting the idea right using an HTML range input.

The first implementation was simple. I found a picture of a ventriloquist dummy online and hard-coded the image and drawing data, then set the position of the dummy’s mouth to be bound to the value of the input.

Then came the part I’d been waiting for: Google Chrome enabled getting microphone data in their Canary build.

All I had to do was get access to the microphone via the navigator.getUserMedia() API, pipe that input into a Web Audio API AnalyserNode, then change the position of the dummy’s mouth based on the maximum audio level detected from the microphone. One final adjustment was made to lock the dummy’s mouth movement to regular “steps”, in order to give it a more old-fashioned wooden feel.
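
For the curious, here’s a minimal sketch of that audio pipeline, using the same callback-style getUserMedia as the other demos on this site (this isn’t the actual Chuckles code, and drawMouth() is a hypothetical stand-in for the canvas drawing):

var audioCtx = new AudioContext();
navigator.getUserMedia({audio: true}, function (stream) {
  var source = audioCtx.createMediaStreamSource(stream);
  var analyser = audioCtx.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);

  var levels = new Uint8Array(analyser.frequencyBinCount);
  (function update() {
    analyser.getByteFrequencyData(levels);
    // Use the loudest frequency bin (0-255) as a rough "volume" level
    var max = Math.max.apply(null, levels);
    // Lock the movement to a few fixed steps for that wooden feel
    var step = Math.round((max / 255) * 4) / 4;
    drawMouth(step); // Hypothetical: draws the mouth at 0-1 openness
    requestAnimationFrame(update);
  })();
}, function () {
  console.error('Could not access the microphone');
});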

And thus was born the first demo of the library that has been christened “Chuckles”.

VIEW BASIC DEMO

More interaction

While the first version worked well enough, it still required hard-coding of all the data. So I built in some new display modes to make it more interactive:

  • Normal mode, the default state
  • Drawing mode, where you can draw on the image to define where the “mouth” segment is
  • Dragging mode, where you can drag the “mouth” around to set its position when fully open

A quick addition of drag-and-drop for adding your own image and you too can make your own ventriloquist dummy:

VIEW FULL DEMO

Next steps

The code is on GitHub at https://github.com/gilmoreorless/chuckles, and there are a few more things I’d like to do with it at some point.

  • Better definition for the mouth segment (possibly using a border).
  • Allow transforming the mouth segment using rotate, skew, etc.
  • Define eye regions that can move left/right independently of the mouth.

But adding those features all depends on whether I can be convinced the idea is actually useful, rather than just being a throw-away demo.

Presentation: WebRTC / Web Audio at SydJS

I gave a presentation about WebRTC and the Web Audio API (titled “WebRTC: Look Ma, no Flash!”) at the SydJS tech meetup on February 20, 2013.

There is no video available of the presentation (fortunately or unfortunately, depending on your perspective), but the slides are online at https://gilmoreorless.github.io/sydjs-preso-webrtc.

The presentation was really just me showing a heap of examples and demos of using WebRTC and Web Audio to do fun things. Most of the examples were written by other people, but a few of them were mine, showing things I’ve been doing in my experiments repository.

Instead of just being lazy and leaving my experiments hidden away, I’m going to force myself to write some blog posts explaining some of them and how they work.

The first one should be up tomorrow. Should.