NOTE: Today I am continuing with publishing revisions to my book World After Capital. This is the second half of the chapter on the incredible properties of digital technology. If you missed it, the first half of the chapter is on zero marginal cost.
Universality of Computation
Zero marginal cost is only the first property of digital technology that dramatically expands the space of the possible. The second property is in some ways even more amazing.
Computers are universal machines. I mean this in a rather precise sense: anything that can be computed in the universe at all can be computed by the kind of machine that we already have, given enough memory and enough time. We have known this since the groundbreaking work by Alan Turing on computation. Turing invented an abstract computer, which we now call a Turing machine [8]. He then came up with an ingenious proof to show that this machine, which turns out to be extremely simple, can compute anything that is computable [9].
What do I mean here by computation? I mean any process that takes some information inputs, executes a series of processing steps, and produces an information output. That is—for better or worse—all that a human brain does as well. The brain receives inputs via nerves, carries out some internal processing, and produces outputs (also via nerves). In principle, there is nothing a human brain can do that a digital machine cannot do.
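To make this notion of computation concrete, here is a small illustrative sketch of a Turing machine simulator (Python is my choice here, and the particular states, symbols, and program are made up for this example, not anything from Turing’s papers). The transition table implements binary increment: the machine scans to the end of a number written on its tape and then adds one.

```python
# A minimal Turing machine simulator. The transition table below implements
# binary increment: add one to a binary number written on the tape.
BLANK = " "

def run(tape_str, transitions, state="right", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape_str))  # sparse tape: cell position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, BLANK)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# (state, symbol read) -> (symbol to write, head move, next state)
INCREMENT = {
    ("right", "0"): ("0", "R", "right"),   # scan right to the end of the number
    ("right", "1"): ("1", "R", "right"),
    ("right", BLANK): (BLANK, "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry: write 1, done
    ("carry", BLANK): ("1", "L", "halt"),  # carried past the left edge: new digit
}

print(run("1011", INCREMENT))  # prints "1100" (11 + 1 = 12 in binary)
```

Everything the simulator does is read a symbol, write a symbol, and move the head one cell. That is the whole point: an extremely simple set of operations is, given enough tape and time, enough to carry out any computation.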
The “in principle” limitation will turn out to be significant only if quantum effects matter in the brain. This is a hotly debated topic [NEED REFERENCE]. Quantum effects do not change what can be computed per se, because even a classical Turing machine can simulate a quantum computation; it would just take an impractically long time to do so, potentially millions of years or more [10]. If quantum effects were to matter in the brain, then we would need to wait for further progress in quantum computing to simulate a brain. Personally, I believe that quantum effects are unlikely to matter and that we will be able to simulate an entire human brain in a digital computer with sufficient detail. We can’t do it quite yet, as our present digital hardware is too slow and has insufficient memory (we also do not yet have a complete map of a human brain).
Unless you want to believe in something beyond what physics has determined to date, there is nothing that a human brain can do that a computer cannot also do. A digital computer will likely suffice, though it is possible that we will need quantum computers to cover everything. There is, of course, always some wiggle room in the future: we may discover something new about physical reality that changes our view of what is computable. But so far we have not.
For a long time this universality property didn’t seem to matter all that much. Computers were pretty dumb compared to humans. This was frustrating to computer scientists who, going back as far as Turing himself, believed that it should be possible to build a machine that does, well, smart things. But they couldn’t get it to work. Even something that is really simple for most humans, such as recognizing objects, had computers completely stumped. Until now, that is, when we suddenly find ourselves with computers that can do all sorts of smart things.
An analogy here is heavier-than-air flight. We knew for a long time that it must be possible—we knew that birds were heavier than air and yet they could fly. But it took until 1903, when the Wright Brothers built the first successful airplane, for us to figure out how to do it [11]. Once they and several others around the same time had figured it out, though, progress was rapid. We went from not knowing how to fly for thousands of years to passenger jets crossing the Atlantic within 55 years (BOAC’s first transatlantic jet passenger flight was in 1958 [12]). If you graph this, you see a perfect example of a non-linearity. We didn’t get gradually better at flying. We couldn’t do it at all, and then suddenly we could, and quickly got very good at it.
Similarly, with digital technology, we have finally made a series of breakthroughs, which have taken us from essentially no machine intelligence to machines outperforming humans on many different tasks, including reading handwriting and recognizing faces [13]. More impressive, maybe, is that machines have learned how to drive cars. The rate of progress in driving is a great example of the non-linearity of improvement. DARPA, the Defense Advanced Research Projects Agency, held its first so-called Grand Challenge for self-driving cars in 2004. At the time they picked a 150-mile closed course in the Mojave Desert region, and yet no car got further than 7 miles before getting stuck (less than 5% of the course). By 2012, less than a decade later, Google’s self-driving cars had successfully driven over 300,000 miles on public roads with traffic [15].
Some people will object that reading handwriting, recognizing faces, or driving a car is not what we mean by intelligence. This just points out, though, that we don’t really have a good definition of “intelligence.” For instance, if you had a dog that could perform any of these tasks, let alone all three, you would likely call it an “intelligent” dog.
Other people will say that humans also have creativity and these machines, even if we grant them some form of intelligence, won’t ever be creative. This amounts to arguing that creativity is something other than computation. The word “creativity” suggests the idea of “something from nothing,” of outputs without inputs. But that is not the nature of human creativity: musicians create new music after having heard lots of music, engineers create new machines after having seen many existing ones, and so on. There is no evidence that creativity is more than computation.
Recently, Google’s DeepMind achieved a major breakthrough in machine intelligence: its AlphaGo program beat Korean Go grandmaster Lee Sedol 4-1 [16]. Previously, progress with software that could play Go had been comparatively slow, and even the best programs could not beat strong club players, let alone masters. The search space in Go is extremely large, which means the kind of brute-force search that works for chess cannot be used to find moves. Instead, candidate moves need to be conjectured. Put differently, playing Go involves creativity.
The approach used to train the AlphaGo program, neural networks that improve through self-play (the program plays against versions of itself), can also be applied to other domains that require creativity. There is already progress in applying these techniques to composing music and creating designs. Maybe even more surprisingly, machines can learn to be creative not just from studying prior human games or designs, but from generating their own based on rules. A newer version of AlphaGo, called AlphaZero, starts out knowing just the rules of a game such as Go or chess and learns entirely from games it plays against itself [NEED REFERENCE]. This approach allows machines to be creative in areas where there is limited or no prior human work to go on.
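As a toy illustration of the self-play idea (this is not AlphaZero’s actual architecture, which pairs deep neural networks with tree search; the game, parameters, and update rule here are my own simplifications), consider a program that starts out knowing only the rules of tic-tac-toe and builds up a table of position values purely from games it plays against itself:

```python
import random

# Toy self-play learner: it knows only the rules of tic-tac-toe (legal moves,
# who wins) and learns a value table from games it plays against itself.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

ALPHA, EPSILON = 0.2, 0.1   # learning rate, exploration rate
values = {}                 # board string -> estimated outcome for "X"

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def place(board, move, player):
    return board[:move] + player + board[move + 1:]

def choose(board, player):
    """Mostly pick the move with the best learned value; sometimes explore."""
    options = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(options)
    sign = 1 if player == "X" else -1   # "O" prefers positions that are bad for "X"
    return max(options, key=lambda m: sign * values.get(place(board, m, player), 0.0))

def self_play_game():
    board, player, visited = "." * 9, "X", []
    while winner(board) is None and legal_moves(board):
        board = place(board, choose(board, player), player)
        visited.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in visited:   # nudge every visited position toward the final outcome
        values[state] = values.get(state, 0.0) + ALPHA * (outcome - values.get(state, 0.0))

for _ in range(50_000):
    self_play_game()

print(f"learned value estimates for {len(values)} positions")
```

After enough games the learned values steer the program toward good moves, even though it was never shown a single human game.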
With digital technologies, the space of the possible has thus expanded to include machines that can most likely do anything that a human can do.
Universality at Zero Marginal Cost
Now, impressive as these two properties of zero marginal cost and universality are on their own, their combination is truly magical. I will give just one example: we are well on our way to a computer program that can diagnose any disease from a patient’s symptoms in a series of steps, including ordering new tests and interpreting their results [14]. We have expected this based on universality, but now we are making tangible progress, and accomplishing it is at most a matter of decades. Once we can do it, then thanks to zero marginal cost we can, and should, provide free diagnosis to anyone, anywhere in the world. (Okay—the actual lab tests, to the extent they are required, will still cost something.) Still, one needs to let that sink in slowly to really grasp its extent. The realm of the possible for humanity will soon include free medical diagnosis for all humans.
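To give a feel for what such a series of steps might look like (purely as an illustration: the diseases, tests, and probabilities below are invented, and a real system would learn them from clinical data), here is a toy sketch of sequential diagnosis as Bayesian updating, in which the program orders whichever test it expects to be most informative and revises its beliefs as results come in:

```python
import math

# Toy sketch of sequential diagnosis as Bayesian updating. The "diseases",
# "tests", and probabilities are invented for illustration only.
PRIORS = {"flu": 0.55, "strep": 0.30, "mono": 0.15}

# P(test comes back positive | disease), made-up numbers
LIKELIHOOD = {
    "rapid_strep": {"flu": 0.05, "strep": 0.90, "mono": 0.05},
    "monospot":    {"flu": 0.02, "strep": 0.05, "mono": 0.85},
    "fever_check": {"flu": 0.80, "strep": 0.60, "mono": 0.50},
}

def update(beliefs, test, positive):
    """Bayes' rule: reweight each disease by how well it explains the result."""
    posterior = {}
    for disease, prior in beliefs.items():
        p_pos = LIKELIHOOD[test][disease]
        posterior[disease] = prior * (p_pos if positive else 1 - p_pos)
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}

def entropy(beliefs):
    return -sum(p * math.log2(p) for p in beliefs.values() if p > 0)

def best_test(beliefs, remaining):
    """Order the test whose result is expected to reduce uncertainty the most."""
    def expected_entropy(test):
        p_pos = sum(beliefs[d] * LIKELIHOOD[test][d] for d in beliefs)
        return (p_pos * entropy(update(beliefs, test, True)) +
                (1 - p_pos) * entropy(update(beliefs, test, False)))
    return min(remaining, key=expected_entropy)

# Simulated encounter: keep ordering tests until the program is confident enough.
beliefs, remaining = dict(PRIORS), set(LIKELIHOOD)
observed = {"rapid_strep": True, "monospot": False, "fever_check": True}  # pretend results
while remaining and max(beliefs.values()) < 0.95:
    test = best_test(beliefs, remaining)
    remaining.remove(test)
    beliefs = update(beliefs, test, observed[test])
    print(f"after {test}: {beliefs}")
```

The point is not this particular toy, but that the entire loop is just computation, and computation can be replicated at essentially zero marginal cost.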
Universality of computation at zero marginal cost is unlike anything we have had with prior technologies. Being able to give all of humanity access to all the world’s information and knowledge was never before possible. Intelligent machines were not previously possible. Now we have both. This is as profound an increase in what is possible for humanity as agriculture and industry were before. Each of those ushered in an entirely different age.
To help us think better about the next age made possible by digital technologies, we now need to put some foundations in place.