Reduce your spread

A quick prelude: I wrote the first draft of this post a couple of months ago, but it still needed editing and polishing. Then Rich Snapp published The reduce ({…spread}) anti-pattern and made all the main points I was going to make.

Feeling rather deflated, I initially abandoned the draft. However, to force myself to complete things, I’m publishing it anyway. Although our primary argument is the same, we approach the topic from slightly different angles. I recommend reading Rich’s post too as it’s well-written, and he explains some things better than I do. But mine has a bonus grumpy rant at the end. Contrast and compare.

Object spread — a primer

The object spread syntax has become one of the more readily-adopted[citation needed] pieces of the ever-evolving ECMAScript specification. Over the past few years I’ve worked on several application codebases that have made use of it, and seen many more open source libraries using it.

First I should define what I mean by the object spread syntax. It’s best to explicitly set the terms of reference here, because different people might have different interpretations of it. Other people might have used the syntax but not known what it was called.

Object spread, at its simplest, is used for copying the keys and values of one object into a new object. It’s important to note that it does a shallow copy rather than a deep copy, but in many cases that’s all that’s required anyway. (For more details on shallow vs deep copies, see this Stack Overflow post.)

let sourceObject = { element: 'input', type: 'radio' };

let sourceCloned = { ...sourceObject };
// A new copy of the original: { element: 'input', type: 'radio' }

let extendedObject = { ...sourceObject, name: 'newthing', type: 'checkbox' };
// A modified copy: { element: 'input', name: 'newthing', type: 'checkbox' }
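To make the shallow-copy caveat concrete, here’s a small hypothetical example: the top-level keys are duplicated, but any nested object is shared by reference between the original and the copy.

```javascript
// Shallow copy in action: top-level properties are independent,
// but the nested `attrs` object is shared between both copies.
let original = { element: 'input', attrs: { type: 'radio' } };
let copy = { ...original };

copy.element = 'select';      // only changes the copy
copy.attrs.type = 'checkbox'; // changes BOTH objects

console.log(original.element);    // 'input'
console.log(original.attrs.type); // 'checkbox'
```

If you need the nested objects duplicated too, you need a deep copy, which spread alone won’t give you.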

The problem

Object spread syntax is not a problem in and of itself. Quickly and easily copying an object’s data to another object is a very common use case. Indeed, the plethora of solutions created in the years beforehand shows how much it was needed. Long gone are the days where some projects would include the entirety of jQuery just so they could use $.extend().

The problem arises when it becomes the only way to assign properties to an object in a codebase. Over the past few years I’ve noticed a trend of using object spread wherever possible, even if it isn’t the best tool for the job.

In the worst cases, it has caused quadratic performance bottlenecks. Helping someone fix a slow-running script is what prompted me to write this post. Object spread syntax should be used with caution.

Why is it such a problem?

Here is a very simplified example. It’s representative of a pattern I’ve seen pop up in more and more codebases.

const arrayOfStuff = [
  { id: 'thing1', name: 'Thing 1' },
  { id: 'thing2', name: 'Thing 2' },
  // and so on...
];

const stuffById = arrayOfStuff.reduce((memo, item) => ({
  ...memo,
  [item.id]: item.name,
}), {});

// stuffById == { thing1: 'Thing 1', thing2: 'Thing 2' }

It looks harmless enough at first glance. The basic idea is to turn an array of objects into a single object with key/value pairs. However, the object spread call lurking there (...memo) makes this loop quadratically slower.

To understand why, let’s look at the specification for object spread (bold emphasis is mine):

PropertyDefinition: … AssignmentExpression

  1. Let exprValue be the result of evaluating AssignmentExpression.
  2. Let fromValue be ? GetValue(exprValue).
  3. Let excludedNames be a new empty List.
  4. Return ? CopyDataProperties(object, fromValue, excludedNames).

In short, it will copy all enumerable properties from one object to another. Now imagine that we didn’t have object spread syntax available (which doesn’t really take us back very far in time). How would we implement the code snippet above? The next best option is Object.assign(), which arrived in ES6 / ES2015. This also performs a shallow copy, but as a method on the global Object. That makes it much easier to polyfill, rather than requiring new syntax parsing.

Here’s an equivalent code sample to the one above, but using Object.assign():

const stuffById = arrayOfStuff.reduce((memo, item) => (
  Object.assign({}, memo, { [item.id]: item.name })
), {});

For this purpose, there’s no difference between that example and the one using object spread syntax. (Technical note: object spread and Object.assign() do slightly different things, especially regarding setter methods. As mentioned in the MDN web docs, object spread syntax “cannot replace nor mimic the Object.assign() function”.) The potential performance problem is still somewhat abstracted away and isn’t yet completely obvious.
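A minimal sketch of that setter difference, using made-up object names purely for illustration: Object.assign() copies properties by assigning to the target, which triggers any setters the target defines, whereas object spread builds a brand-new object out of plain data properties and never invokes a setter on an existing object.

```javascript
let callCount = 0;
const target = {
  // A setter on the target object, so we can observe assignments.
  set type(value) { callCount += 1; },
};

// Object.assign() assigns onto the target, so the setter runs:
Object.assign(target, { type: 'radio' });
console.log(callCount); // 1

// Object spread creates a fresh object with plain data properties;
// no setter on any existing object is invoked:
const spreadResult = { ...{ type: 'radio' } };
console.log(callCount); // still 1
console.log(spreadResult.type); // 'radio'
```

For the reduce() pattern in this post the two behave identically, which is why the performance characteristics are the same.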

Now imagine we don’t even have Object.assign() available to us, and we have to implement the copying ourselves. We could use a library helper like jQuery’s $.extend() or Underscore/Lodash’s _.extend(), but under the covers they do the same thing. It will be more instructive to build it from scratch.

Here’s the simplest example I can think of to copy properties and values from one object to another. Unlike Object.assign() and the libraries’ extend() methods, it doesn’t handle an arbitrary number of objects to copy from. But that functionality isn’t required for the example, and cutting it out simplifies things. To show just how basic it is, I’ll even use “old-skool” (sarcasm intended) ES5 syntax and methods, without all your arrow function malarkey.

(Keep in mind that this doesn’t account for the nuances of different property descriptors, but so far I’ve never seen the object spread syntax used in a scenario that needs them.)

function copyProps(toObj, fromObj) {
  Object.keys(fromObj).forEach(function (key) {
    toObj[key] = fromObj[key];
  });
  return toObj;
}

With that done, we can insert it into the original example in place of the object spread syntax. Ordinarily I’d just call the copyProps() helper method to keep the code neater, but to make the problem super-dooper obvious, I’ll add the helper code directly to the reduce() body.

const stuffById = arrayOfStuff.reduce(function (memo, item) {
  let ret = {};
  // Here we go loop-the-loop
  Object.keys(memo).forEach(function (key) {
    ret[key] = memo[key];
  });
  ret[item.id] = item.name;
  return ret;
}, {});

What was originally intended to be a single loop using reduce() has turned into nested loops. This is where the performance hit comes in. For every iteration of the reduce() loop, there is a second iteration of every property defined up to that point.

I’ll illustrate this in table form, for an array of 10 items. Here we count how many times an object property is assigned a value in each iteration of the reduce() loop. The last column in the table shows how many assignment operations in total have been performed across the entire loop.

Iteration | Assignments this iteration | Running total
        1 |                          1 |             1
        2 |                          2 |             3
        3 |                          3 |             6
        4 |                          4 |            10
        5 |                          5 |            15
      ... |                        ... |           ...
       10 |                         10 |            55

The most efficient implementation of mapping an array into key/value pairs is one assignment per item in the array. Using Big O notation to indicate the complexity of that implementation, it is represented as O(n), meaning linear time.

However, in the object spread implementation, the total number of assignments for a 10-value array is 55 — way beyond what it should be! In fact, the pattern of numbers in the last column is a common mathematical sequence known as the triangle numbers. This sequence has a well-defined formula of n(n + 1) / 2. In Big O notation this complexity cost is defined as being in the order of O(n²), as the time to run the function increases quadratically with the size of the array. In fact, Wikipedia has a detailed explanation of precisely this scenario.
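You can confirm the n(n + 1) / 2 count without running the spread version at all. This small sketch (a contrived counter, purely for illustration) totals up the property-copy operations the spread-based reduce performs: in iteration i it re-copies the i − 1 existing keys and then defines 1 new one.

```javascript
// Count the property assignments performed by the spread-based
// reduce for an array of n items: iteration i costs i operations
// ((i - 1) copies of existing keys, plus 1 new key).
function spreadAssignments(n) {
  let total = 0;
  for (let i = 1; i <= n; i += 1) {
    total += i;
  }
  return total;
}

console.log(spreadAssignments(10));    // 55
console.log(spreadAssignments(12000)); // 72006000
console.log((12000 * 12001) / 2);      // 72006000, matching n(n + 1) / 2
```

The loop total and the closed-form formula agree exactly, which is all the triangle-number observation above is saying.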

The simplest solution to this problem is to drop the fancy-pants object spread syntax and go back to basics. Create an object, then assign properties to it in a single loop:

const stuffById = {};
arrayOfStuff.forEach(function (item) {
  stuffById[item.id] = item.name;
});

Not only is that far better for both speed and memory usage, but it is also fewer lines of code and more obvious about what it’s doing. To my eyes it’s also easier to read, but that’s thoroughly subjective. If you really want to keep the semantics of reduce(), that’s easy enough too:

const stuffById = arrayOfStuff.reduce(function (memo, item) {
  memo[item.id] = item.name;
  return memo;
}, {});
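If you’d rather avoid both reduce() and visible mutation, newer environments also offer Object.fromEntries() (added in ES2019), which builds the object in a single linear pass. A sketch, reusing the same example data:

```javascript
const arrayOfStuff = [
  { id: 'thing1', name: 'Thing 1' },
  { id: 'thing2', name: 'Thing 2' },
];

// One pass to build [key, value] pairs, one pass to assemble the
// object: still O(n), with no quadratic re-copying of keys.
const stuffById = Object.fromEntries(
  arrayOfStuff.map(function (item) {
    return [item.id, item.name];
  })
);

console.log(stuffById); // { thing1: 'Thing 1', thing2: 'Thing 2' }
```

It allocates an intermediate array of pairs, so the plain loop is still the leanest option, but it keeps the “no mutation in sight” style without the quadratic cost.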

How bad can it be, really?

I first discovered this pattern while helping a colleague work out why a data-processing script was running so slowly. Part of the script was making key/value pairs out of an array of around 12,000 items. The script was taking 18 seconds to run, which seemed very suspicious. Using the formula above, a quick calculation shows that the nested loops over an array of 12,000 items would perform 72,006,000 assignment operations!

After we removed the object spread way of doing the reduce, and used the simplified method above, the script ran in 1 second. This is not just a theoretical performance issue — it’s one that is all too easy to encounter.
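You can get a feel for the gap yourself with a rough benchmark sketch. The absolute timings will vary by machine and JS engine, and I’ve used a smaller 2,000-item array so it finishes quickly — scale n up towards 12,000 to watch the gap widen dramatically.

```javascript
// Build a synthetic array of test data.
const items = [];
for (let i = 0; i < 2000; i += 1) {
  items.push({ id: 'thing' + i, name: 'Thing ' + i });
}

// Quadratic version: every iteration re-copies the accumulator.
console.time('spread reduce');
const bySpread = items.reduce((memo, item) => ({
  ...memo,
  [item.id]: item.name,
}), {});
console.timeEnd('spread reduce');

// Linear version: one object, one assignment per item.
console.time('plain loop');
const byLoop = {};
items.forEach(function (item) {
  byLoop[item.id] = item.name;
});
console.timeEnd('plain loop');

// Both produce the same 2,000-key object; only the cost differs.
```

Run it in Node or a browser console and compare the two timings; the “plain loop” timer should be a small fraction of the “spread reduce” one.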

There is an additional performance penalty in the object spread way of doing this. Every iteration creates a temporary object and assigns its properties, only to immediately throw it away in the next iteration. Reducing an array of 100 items this way will create 100 new objects, but only one of them is used afterwards. Therefore 99 luftballons objects only last as long as a single iteration of the loop.

This builds up a bunch of unnecessary objects in memory until the next garbage collection kicks in. Realistically, that’s not going to be a concern for most uses of this pattern, but in a single-threaded language environment it’s important to keep in mind.

So next time you see { ...thing } in your code, have a think about whether you really want to be looping over every property of that object. There are still many cases where object spread makes things much better — just be careful when using it inside a loop.

If you only came to this post for the dry technical content, it just finished and you can stop reading now. For everyone else, you can read…

A tangential rant about the industry

This is the part where I act as a Grumpy Old Man (in spirit, at least). Taken on its own, this gotcha with object spread is an easy enough mistake to make. But viewed in a wider context, it’s just one symptom in a wider problem of how modern web development is being taught.

The rise of Babel, TypeScript, and other transpilation tools has created an inflection point. I’ve seen a distinct difference between application codebases written pre- and post-ES6-slash-2015. New codebases have tended to be all-in on as much of the new syntax and features as possible. This isn’t such a bad thing, but I’ve noticed a sort of collective amnesia about any JS features and techniques that existed beforehand. It’s almost like a whole class of developers have decided to actively SHUN anything from ES5 and earlier.

Immutability has been the keyword du jour. While I agree with the advantages of immutability, they should be balanced against the realities of how JS works. Pushing for immutability and functional programming (FP) everywhere makes perfect sense in a language designed for it from the start, such as Haskell or Elm. (Full disclosure: I haven’t used either language in a real project, other than just kicking the tyres. I may be simply talking a load of shite.) However, the feverish dogma of immutability and FP in JS appears to be blinding people to the pragmatics and practicalities of running code in a browser. To pick on one specific example (purely because it’s the most recent one I’ve seen) here’s an article on tips for using .reduce(), which promotes the object spread pattern I’ve decried:

Some readers might point out that we could get a performance gain by mutating the accumulator. That is, we could change the object, instead of using the spread operator to create a new object every time. I code it this way because I want to keep in the habit of avoiding mutation. If it proved to be an actual bottleneck in production code, then I would change it.

Argue with me if you wish, but I find it a ridiculous idea to introduce a quadratically-expensive loop into my code purely to practise working with immutability. This is a great example of a wider pattern in the community, where there’s apparently no analysis of the context in which the code is used. Yes, my simplified code sample of a reduce pattern mutates an object. But it’s an object which was freshly created for just that purpose, and isn’t changed once it’s returned.

But the dogma doesn’t just apply to hidden problems like this object spread / reduce performance cost. There now appears to be a community consensus that every language feature that’s “new and shiny” must be the best.

In other words, I’m going to keep linking people to Dan McKinley’s “Choose Boring Technology” for as long as it’s needed.


And another thing…!

(And don’t even get me started on suddenly using const for everything possible, even massive functions. It’s fine to still use the function keyword to describe functions!)

I know that there’s a staggering number of brand new developers every year, and that’s a good thing. There are now huge cohorts of people using JS who have never known anything from before ES6. In some ways I greatly envy them, and their lack of useless vestigial knowledge like the difference between document.all and document.layers. But with every successive wave of new web developers entering the industry, I ponder how they’re learning the tools of the trade. Blog posts and articles continually promoting only the newest features provide a skewed perspective.

From what I can see, coding schools and intensive crash courses are teaching a decent smattering of the surface-level knowledge. It’s enough JS to tick boxes and get hired, but skips the fundamentals. I know they only have a limited amount of time to teach a vast moving target, and they can’t cover everything. But where are the course modules on how to think critically about the technology you’re using?

The colleague I mentioned earlier, with the extremely slow-running script, was fairly new to web development. When I showed him the problem with using object spread within .reduce(), he lamented, “Then why do they teach us to do it this way?”

That’s a damn good question.

Of course, people will disagree with me — this is the internet after all. But that’s ok. It’s my blog, and my opinions only, not a dictum from on high. In fact, if everyone only did what I wanted, that would simply be replacing one form of dogma with another.

On the whole, the advances in JS over the last 5-10 years have been phenomenal. Boom-and-bust cycles of hype are also an inevitable part of any new technology, and can be beneficial in the long term. But that doesn’t mean we should all sit back and avoid any kind of criticism of the process along the way. Having better new ways to do things doesn’t mean that everything old is now irrelevant. There are no silver bullet solutions—everything in programming is a matter of context.

But good luck working that out on Twitter, the Place Where Nuance Goes To Die.