## Why “The Singularity” is bollocks

There’s a lot of nonsense talked about the supposedly looming “technological singularity”. For the uninitiated, this is based on a principle called Moore’s Law. Broadly speaking, Moore’s Law states that every couple of years, computers get twice as fast for the same cost. The impressive thing is that in the 40 years or so since Gordon Moore first cooked it up, it’s been remarkably accurate, even though the tech has gone through changes he could never have foreseen.
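It’s worth pausing on what “doubling every couple of years” actually compounds to. A quick back-of-the-envelope sketch (illustrative arithmetic only, not measured benchmark data):

```python
# Compounded doubling: roughly 40 years of Moore's Law at one
# doubling every two years. Illustrative arithmetic, not real data.
years = 40
doublings = years // 2      # 20 doublings in 40 years
speedup = 2 ** doublings    # relative performance for the same cost
print(speedup)              # 1048576 -> roughly a million-fold
```

Twenty doublings is about a million-fold improvement, which is why extrapolating the trend forward produces such dramatic predictions.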

What gets all the singularity nuts excited is the idea of what happens next. According to Moore’s “Law”, computers will get smarter and smarter, until eventually they’re smarter than us, and begin (according to the singularity folks) designing themselves at a rate we couldn’t match. The machines take over, humanity becomes irrelevant, etc, etc.

Except that’s a load of bollocks, and here’s why:

In the real world, stuff never continues to grow exponentially forever. Projections based on unbridled exponential growth are the mathematical equivalent of perpetual motion machines. It’s called a Malthusian growth model, after Thomas Malthus, a bloke who made some very dire predictions about world population back in the late 18th century. Lucky for us, the complete lameness of this type of model meant that the world didn’t implode under the combined weight of humanity, and the mathematicians went back to the drawing board. The result was a new, better model they called the logistic model, which acknowledges that even if something can grow at an exponential rate for a while, eventually forces that may have been too small to notice begin to slow the growth rate. This model has been far more successful at accurately modelling real-world processes.
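The difference between the two models is easy to see numerically. A minimal sketch using the standard closed-form solutions of both models, with made-up parameter values (growth rate and carrying capacity chosen purely for illustration):

```python
import math

def exponential(p0, r, t):
    # Malthusian model: dP/dt = r*P, giving P(t) = P0 * e^(r*t).
    # Grows without bound for any positive growth rate r.
    return p0 * math.exp(r * t)

def logistic(p0, r, k, t):
    # Logistic model: dP/dt = r*P*(1 - P/K), giving
    # P(t) = K / (1 + ((K - P0)/P0) * e^(-r*t)).
    # Looks exponential early on, then levels off at the
    # carrying capacity K as the limiting forces kick in.
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

# Illustrative parameters: start at 1, growth rate 0.35, ceiling 1000.
for t in range(0, 61, 10):
    print(t, round(exponential(1.0, 0.35, t), 1),
             round(logistic(1.0, 0.35, 1000.0, t), 1))
```

Early on the two curves are nearly indistinguishable, which is exactly why an exponential fit to 40 years of data can look convincing; the divergence only shows up once the limiting forces start to bite.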

Sure, the idea of the singularity is fun, but the less sexy reality is that your wrist watch is unlikely to ever be able to out-smart you, let alone usurp your position at the top of the food chain by creating its own army of super-intelligent wrist watches. Moore’s Law will eventually break down; machines will stop getting smarter so quickly, and may even stop getting smarter at all.

Tags: maths, singularity