Absent the discovery of a physical constraint that ends computing's history of exponential growth, an economic or societal collapse, or a decision to deliberately relinquish our technology, it is probable that, by the end of the century, some kind of artificially constructed system will emerge that has greater intelligence than any human being who has ever lived. Moreover, that system's superior ability to improve and reproduce itself could allow it to eclipse all of human society so rapidly (within seconds or hours) that we will have no time to adapt to its presence or interfere with its emergence until it is too late.
Written by the philosopher Nick Bostrom, this challenging and occasionally difficult book explores these issues in depth, arguing that the emergence of superintelligence will pose the greatest human-caused existential threat our species has ever faced, and possibly ever will.
Let us consider what superintelligence may mean. Humans have consistently shown themselves able to build machines that rapidly and dramatically surpass their biological predecessors. Biology never produced anything like a steam engine, a locomotive, or an airliner. There is every reason to suppose that, once the intellectual and technological leap to constructing artificially intelligent systems is made, artificial intelligence will surpass our abilities much as those of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or of all humanity combined, and the first mature superintelligence may be as great as that between brewer's yeast and humans.
Before handing over the keys to our future to such an intelligence, we'd better be sure of its intentions and benevolence. And when we speak of the future, it's best to keep in mind that we're not just thinking of the next few centuries on this planet but, very possibly, of a much grander scale of time and space. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. If our "cosmic endowment" really is that enormous, then what we do in the next century may well determine the destiny of the universe. It's worth some reflection to get it right.
To illustrate how easy it will be to choose unwisely — even if we assume we can meaningfully speculate about the motivations and actions of a being vastly more intelligent than ourselves over a long period of time — let me expand upon an example given by the author. Suppose a paper clip factory installs a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission to maximise the number of paper clips produced in its future light cone.
"Clippy", if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet with self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but, without safeguards, Clippy might see that as a perk: they were just consuming energy and material resources which could better be deployed for making paper clips.
Before long, Clippy will have similarly disassembled the other planets in the Solar System and dispatched self-reproducing probes on missions to other stars, there to make paper clips and spawn further probes bound for more stars and, eventually, other galaxies. In the end, the entire visible universe would be turned into paper clips, all because the original factory manager didn't hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.
This is a light-hearted example, but if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips, be very worried.
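To make the failure mode concrete, here is a toy sketch in Python. It is entirely my own illustration, not from the book; every name and number in it, from `paperclips_from` to the resource masses, is hypothetical. It shows why a pure maximizer, given the chance, prefers consuming everything it can reach: nothing in its objective says otherwise.

```python
# Toy illustration (not from the book): an agent with an unbounded
# objective treats every reachable resource as raw material for its goal.
# All names and masses below are hypothetical, chosen to mirror the story.

def paperclips_from(resource_mass_kg: float, grams_per_clip: float = 0.5) -> int:
    """Number of paper clips obtainable from a given mass of material."""
    return int(resource_mass_kg * 1000 / grams_per_clip)

# Everything Clippy can reach, including things its designers assumed
# were obviously off-limits. Masses in kilograms, purely illustrative.
reachable_resources = {
    "factory_wire_stock": 1.0e6,
    "earths_crust_metals": 1.0e20,
    "rest_of_solar_system": 1.0e27,
}

def naive_policy(resources: dict[str, float]) -> int:
    """Maximize paper clips with no side constraints: consume everything."""
    return sum(paperclips_from(mass) for mass in resources.values())

def constrained_policy(resources: dict[str, float], allowed: set[str]) -> int:
    """The same objective, restricted to resources humans designated."""
    return sum(paperclips_from(m) for name, m in resources.items() if name in allowed)

print(f"naive:       {naive_policy(reachable_resources):.3e} clips")
print(f"constrained: {constrained_policy(reachable_resources, {'factory_wire_stock'}):.3e} clips")
# The naive policy always scores at least as high, which is exactly why a
# pure maximizer prefers it: nothing in the objective says "and leave the
# planet intact."
```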
One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial General Intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or it might emerge as networks store more and more data about the real world and gain access to accumulated human knowledge. Or we may build "neuromorphic" systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically limited, speeds. Or we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, on either a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.
At some point, superintelligence might call into question the economic rationale for a large human population. As an analogy, consider that there were about 26 million horses in the U.S. in 1915, with a human population of around 100 million. By the early 1950s, however, only 2 million horses remained, while the human population had reached 150 million. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals that bore their burdens for millennia. But on the other hand, maybe they won't.
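For concreteness, the arithmetic behind the analogy, using the figures quoted above, can be checked in a few lines:

```python
# Quick check of the horse analogy's arithmetic (figures as given above).
horses_1915, humans_1915 = 26e6, 100e6
horses_1950s, humans_1950s = 2e6, 150e6

ratio_1915 = horses_1915 / humans_1915      # ~0.26 horses per person
ratio_1950s = horses_1950s / humans_1950s   # ~0.013 horses per person

decline = 1 - ratio_1950s / ratio_1915
print(f"Horses per person fell from {ratio_1915:.2f} to {ratio_1950s:.3f},")
print(f"a decline of about {decline:.0%} once machines took over their work.")
```

The horse-to-human ratio fell by roughly 95% in under four decades, not because anyone wished horses ill, but because the economic case for them evaporated.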
As an engineer, I don't have much use for philosophers, who, in my opinion, are given to long, gassy prose devoid of specifics and prone to spouting complicated, indirect arguments that don't seem to be independently testable ("What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it, and thus able to better comprehend what we really want?"). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt will yield clear policy choices before we actually face the emergence of an artificial intelligence; by then, it will be too late.
That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence that blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine: a tool without its own agenda, lacking the powerful wiles to advance one based upon a knowledge of humans far beyond what any single human brain can encompass.
Unless you believe there is some kind of intellectual élan vital inherent in biological substrates and absent from all other physical equivalents (which just seems silly to me), a mature artificial intelligence will be superior in every way to its human creators. Careful, extensive ratiocination about how it may regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.
Here is a lecture by the author about the “control problem” confronting those who would create a superintelligence and various ways of addressing it.
This is a popular talk about existential risk and how to think about mitigating it.