Saturday Night Science: The Singularity

 

“The Singularity”, Uziel Awret, ed.

For more than half a century, the prospect of a technological singularity has been part of the intellectual landscape of those envisioning the future. In 1965, in a paper titled “Speculations Concerning the First Ultraintelligent Machine”, statistician I. J. Good wrote,

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

(The idea of a runaway increase in intelligence had been discussed earlier, notably by Robert A. Heinlein in a 1952 essay titled “Where To?”) Discussion of an intelligence explosion and/or technological singularity remained largely confined to science fiction and the more speculatively inclined among those trying to foresee the future, mainly because the prerequisite—building machines which were more intelligent than humans—seemed such a distant prospect, especially as the initially optimistic claims of workers in the field of artificial intelligence gave way to disappointment.

Over all those decades, however, the exponential growth in computing power available at constant cost continued. The funny thing about continued exponential growth is that it doesn’t matter what fixed level you’re aiming for: the exponential will eventually exceed it, and probably a lot sooner than most people expect. By the 1990s, it was clear just how far the growth in computing power and storage had come, and that there were no technological barriers on the horizon likely to impede continued growth for decades to come. People started to draw straight lines on semi-log paper and discovered that, depending upon how you evaluate the computing capacity of the human brain (a complicated and controversial question), the computing power of a machine with a cost comparable to a present-day personal computer would cross the human brain threshold sometime in the twenty-first century (a toy version of this extrapolation appears after the list below). There seemed to be a limited number of alternative outcomes.

  1. Progress in computing comes to a halt before reaching parity with human brain power, due to technological limits, economics (inability to afford the new technologies required, or lack of applications to fund the intermediate steps), or intervention by authority (for example, regulation motivated by a desire to avoid the risks and displacement due to super-human intelligence).
  2. Computing continues to advance, but we find that the human brain is either far more complicated than we believed it to be, or that something is going on in there which cannot be modelled or simulated by a deterministic computational process. The goal of human-level artificial intelligence recedes into the distant future.
  3. Blooie! Human level machine intelligence is achieved, successive generations of machine intelligences run away to approach the physical limits of computation, and before long machine intelligence exceeds that of humans to the degree humans surpass the intelligence of mice (or maybe insects).
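
To make the straight-lines-on-semi-log-paper exercise concrete, here is a minimal sketch in Python. The starting performance, doubling time, budget, and brain-equivalent threshold are all illustrative assumptions of mine (the threshold is one commonly cited order-of-magnitude guess), not figures from the book.

```python
import math

# Illustrative assumptions, not data from the book:
# ~1e9 operations/second for a ~$1000 machine in the year 2000,
# performance per dollar doubling every two years, and ~1e16
# operations/second as one (disputed) estimate of brain capacity.
base_year = 2000
ops_per_dollar = 1e9 / 1000   # operations/second per dollar, base year
doubling_years = 2.0
brain_ops = 1e16              # brain-equivalent threshold (contested)
budget = 1000                 # "present-day personal computer" price

# Solve budget * ops_per_dollar * 2**(t / doubling_years) >= brain_ops.
ratio = brain_ops / (budget * ops_per_dollar)
years_needed = doubling_years * math.log2(ratio)
print(f"Threshold crossed around {base_year + years_needed:.0f}")
```

With these assumptions the crossing lands in the mid-2040s, but the point of the exercise is its insensitivity to the details: because the growth is exponential, raising the threshold a thousandfold delays the crossing by only about twenty years.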

Now, many people will dismiss such speculation as science fiction having nothing to do with the “real world” they inhabit. But there is no more conservative form of forecasting than observing a trend which has been in existence for a long time (in the case of growth in computing power, more than a century, spanning multiple generations of very different hardware and technologies), extrapolating it into the future, and then asking, “What happens then?” When you go through this exercise and an answer pops out indicating that within the lives of many people now living, an event completely unprecedented in the history of our species—the emergence of an intelligence which far surpasses that of humans—might happen, the prospects and consequences bear some serious consideration.

The present book, based upon two special issues of the Journal of Consciousness Studies, attempts to examine the probability, nature, and consequences of a singularity from a variety of intellectual disciplines and viewpoints. The volume begins with an essay by philosopher David Chalmers originally published in 2010: “The Singularity: A Philosophical Analysis”, which attempts to trace various paths to a singularity and evaluate their probability. Chalmers does not attempt to estimate the time at which a singularity may occur—he argues that if it happens any time within the next few centuries, it will be an epochal event in human history which is worth thinking about today. Chalmers contends that the argument for artificial intelligence (AI) is robust because there appear to be multiple paths by which we could get there, and hence AI does not depend upon a fragile chain of technological assumptions which might break at any point in the future. We could, for example, continue to increase the performance and storage capacity of our computers, to such an extent that the “deep learning” techniques already used in computing applications, combined with access to a vast amount of digital data on the Internet, may cross the line of human intelligence. Or, we may continue our progress in reverse-engineering the microstructure of the human brain and apply our ever-growing computing power to emulating it at a low level (this scenario is discussed in detail in Robin Hanson’s The Age of Em). Or, since human intelligence was produced by the process of evolution, we might set our supercomputers to simulate evolution itself (which we’re already doing to some extent with genetic algorithms) in order to evolve super-human artificial intelligence (not only would computer-simulated evolution run much faster than biological evolution, it would not be random, but rather directed toward desired results, much like selective breeding of plants or livestock).
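
To make the evolutionary path concrete, here is a minimal genetic algorithm of the kind already in routine use, with selection directed toward a desired result rather than left to drift. The toy fitness function (matching a fixed target bit string) and all parameters are my own illustrative choices, not anything taken from Chalmers or the book.

```python
import random

TARGET = [1] * 32                 # stand-in for a "desired result"
POP, GENS, MUT = 100, 200, 0.01   # population, generations, mutation rate

def fitness(genome):
    # Directed selection: score agreement with the target.
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"Goal reached in generation {gen}")
        break
    parents = population[: POP // 2]          # selective "breeding"
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))            # crossover
        child = [1 - g if random.random() < MUT else g    # mutation
                 for g in a[:cut] + b[cut:]]
        children.append(child)
    population = children
```

Even this toy usually converges within a few dozen generations; the enormous leap in the argument is from trivial fitness functions like this one to “intelligence” as the quantity being selected for.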

Regardless of the path or paths taken, the outcome will be one of the three discussed above, and those three ultimately reduce to two: singularity or no singularity. Assume, arguendo, that the singularity occurs, whether before 2050 as some optimists project or many decades later. What will it be like? Will it be good or bad? Chalmers writes,

I take it for granted that there are potential good and bad aspects to an intelligence explosion. For example, ending disease and poverty would be good. Destroying all sentient life would be bad. The subjugation of humans by machines would be at least subjectively bad.

…well, at least in the eyes of the humans. If there is a singularity in our future, how might we act to maximise the good consequences and avoid the bad outcomes? Can we design our intellectual successors (and bear in mind that we will design only the first generation: each subsequent generation will be designed by the machines which preceded it) to share human values and morality? Can we ensure they are “friendly” to humans and not malevolent (or, perhaps, indifferent, just as humans do not take into account the consequences for ant colonies and bacteria living in the soil upon which buildings are constructed)? And just what are “human values and morality” and “friendly behaviour” anyway, given that we have been slaughtering one another for millennia in disputes over such issues? Can we impose safeguards to prevent the artificial intelligence from “escaping” into the world? What is the likelihood we could prevent such a super-being from persuading us to let it loose, given that it thinks thousands or millions of times faster than we do, has access to all of human written knowledge, and the ability to model and simulate the effects of its arguments? Is turning off an AI murder, or terminating the simulation of an AI society genocide? Is it moral to confine an AI to what amounts to a sensory deprivation chamber, or to hold it in what amounts to solitary confinement, or to deceive it about the nature of the world outside its computing environment?

What will become of humans in a post-singularity world? Given that our species is the only survivor of genus Homo, history is not encouraging, and the gap between human intelligence and that of post-singularity AIs is likely to be orders of magnitude greater than that between modern humans and the great apes. Will these super-intelligent AIs have consciousness and self-awareness, or will they be philosophical zombies: able to mimic the behaviour of a conscious being but devoid of any internal sentience? What does that even mean, and how can you be sure other humans you encounter aren’t zombies? Are you really all that sure about yourself? And what about the qualia of machines?

Perhaps the human destiny is to merge with our mind children, either by enhancing human cognition, senses, and memory through implants in our brain, or by uploading our biological brains into a different computing substrate entirely, whether by emulation at a low level (for example, simulating neuron by neuron at the level of synapses and neurotransmitters), or at a higher, functional level based upon an understanding of the operation of the brain gleaned from analysis by AIs. If you upload your brain into a computer, is the upload conscious? Is it you? Consider the following thought experiment: replace each biological neuron of your brain, one by one, with a machine replacement which interacts with its neighbours precisely as the original meat neuron did. Do you cease to be you when one neuron is replaced? When a hundred are replaced? A billion? Half of your brain? The whole thing? Does your consciousness slowly fade into zombie existence as the biological fraction of your brain declines toward zero? If so, what is magic about biology, anyway? Isn’t arguing that there’s something about the biological substrate which uniquely endows it with consciousness as improbable as the discredited theory of vitalism, which contended that living things had properties which could not be explained by physics and chemistry?

Now let’s consider another kind of uploading. Instead of incremental replacement of the brain, suppose an anæsthetised human’s brain is destructively scanned, perhaps by molecular-scale robots, and its structure transferred to a computer, which will then emulate it precisely as did the incrementally replaced brain in the previous example. When the process is done, the original brain is a puddle of goo and the human is dead, but the computer emulation now has all of the memories, life experience, and ability to interact of its progenitor. But is it the same person? Did the consciousness and perception of identity somehow transfer from the brain to the computer? Or will the computer emulation mourn its now departed biological precursor, as it contemplates its own immortality? What if the scanning process isn’t destructive? When it’s done, BioDave wakes up and makes the acquaintance of DigiDave, who shares his entire life up to the point of uploading. Certainly the two must be considered distinct individuals, as are identical twins whose histories diverged in the womb, right? Does DigiDave have rights in the property of BioDave? “Dave’s not here”? Wait—we’re both here! Now what?

Or, what about somebody today who, in the sure and certain hope of the Resurrection to eternal life, opts to have their brain cryonically preserved moments after clinical death is pronounced. After the singularity, the decedent’s brain is scanned (in this case it’s irrelevant whether or not the scan is destructive), and uploaded to a computer, which starts to run an emulation of it. Will the person’s identity and consciousness be preserved, or will it be a new person with the same memories and life experiences? Will it matter?

Deep questions, these. The book presents Chalmers’ paper as a “target essay”, and then invites contributors in twenty-six chapters to discuss the issues raised. A concluding essay by Chalmers replies to the essays and defends his arguments against objections to them by their authors. The essays, and their authors, are all over the map. One author strikes this reader as a confidence man and another a crackpot—and these are two of the more interesting contributions to the volume. Nine chapters are by academic philosophers, and are mostly what you might expect: word games masquerading as profound thought, with an admixture of ad hominem argument, including one chapter which descends into Freudian pseudo-scientific analysis of Chalmers’ motives and says that he “never leaps to conclusions; he oozes to conclusions”.

Perhaps these are questions philosophers are ill-suited to ponder. Unlike questions of the nature of knowledge, how to live a good life, the origins of morality, and all of the other diffuse gruel about which philosophers have been arguing since societies became sufficiently wealthy to indulge in them, without any notable resolution in more than two millennia, the issues posed by a singularity have answers. Either the singularity will occur or it won’t. If it does, it will either result in the extinction of the human species (or its reduction to irrelevance), or it won’t. AIs, if and when they come into existence, will either be conscious, self-aware, and endowed with free will, or they won’t. They will either share the values and morality of their progenitors or they won’t. It will either be possible for humans to upload their brains to a digital substrate, or it won’t. These uploads will either be conscious, or they’ll be zombies. If they’re conscious, they’ll either continue the identity and life experience of the pre-upload humans, or they won’t. These are objective questions which can be settled by experiment. You get the sense that philosophers dislike experiments: they are a risk to the job security of those who make a living disputing questions their predecessors have been puzzling over at least since Athens.

Some authors dispute the probability of a singularity and argue that the complexity of the human brain has been vastly underestimated. Others contend there is a distinction between computational power and the ability to design, and consequently exponential growth in computing may not produce the ability to design super-intelligence. Still another chapter dismisses the evolutionary argument with evidence that simulating evolution at the scope and time scale of terrestrial evolution will remain computationally intractable into the distant future, even if computing power continues to grow at the rate of the last century. There is even a case made that if a singularity is feasible, the probability is overwhelming that we’re living not in a top-level physical universe but in a simulation run by post-singularity super-intelligences, and that they may be motivated to turn off our simulation before we reach our own singularity, which may threaten them.

This is all very much a mixed bag. There are a multitude of Big Questions, but very few Big Answers among the 438 pages of philosopher word salad. I find my reaction similar to that of David Hume, who wrote in 1748:

If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

I don’t burn books (it’s некультурный [uncultured], and expensive when you read them on an iPad), but you’ll probably learn as much pondering the questions posed here on your own and in discussions with friends as from the scholarly contributions in these essays. The copy editing is mediocre, with some eminent authors stumbling over the humble apostrophe. The Kindle edition cites cross-references by page number, which are useless since the electronic edition does not include page numbers. There is no index.

Awret, Uziel, ed. The Singularity. Exeter, UK: Imprint Academic, 2016. ISBN 978-1-845409-07-4.

This talk by David Chalmers at the 2009 Singularity Summit recaps the argument in the target essay from the book.

Here is a one-hour interview with David Chalmers about the prospects of a singularity and thinking rigorously about the future.



Members have made 35 comments.

  1. Percival (Thatcher)

    I wouldn’t bash the philosophers too hard. They are the pioneers, after all — the first ones to cross the intellectual landscape. A lot of the terminology for future investigation comes from them. Their initial survey is that which is to be proved or disproved by those who follow.

    How would David Hume have responded if someone had pointed out to him that he was the product of all the philosophers who came before him? That empiricism of which he was so fond didn’t fall out of the sky one fine day. Still want to consign it all to the flames, Davy?

    For that matter, could Hume even participate in the discussion? The Singularity is, after all, outside all of our experience. That’s what makes it singular. We can’t know until it happens.

    • #1
    • March 4, 2017 at 11:44 am
    • 2 likes
  2. Z in MT (Member)

     “Destroying all sentient life would be bad.”

    I thought this was one of the worst understatements I had ever read in text, and then I read the next sentence.

     “The subjugation of humans by machines would be at least subjectively bad.”

    • #2
    • March 4, 2017 at 12:24 pm
    • 2 likes
  3. TeamAmerica (Member)

    @johnwalker– In a different context, Scott Adams has argued that human beings are good at avoiding slow-motion, anticipated catastrophes. Given that people like you, Bill Gates, Elon Musk and others have been, so to speak, sounding the tocsin about the dangers of AI, doesn’t that increase the odds that we could avoid the worst scenarios?

    • #3
    • March 4, 2017 at 12:55 pm
    • 1 like
  4. John Walker (Contributor, post author)

    TeamAmerica (View Comment):
    In a different context, Scott Adams has argued that human beings are good at avoiding slow-motion, anticipated catastrophes. Given that people like you, Bill Gates, Elon Musk and others have been, so to speak, sounding the tocsin about the dangers of AI, doesn’t that increase the odds that we could avoid the worst scenarios?

    One of the main risks of the singularity is that despite its being anticipated long in advance, when it happens, it may happen all at once—perhaps within seconds to hours, and go in unanticipated directions. As a species, we have no experience dealing with beings which are more intelligent than we are, so we don’t have a template to understand what might happen. The history of contact between technologically advanced civilisations and aboriginal populations, even though they were of the same species and intelligence, gives cause for concern.

    This isn’t anything like dealing with, say, the Y2K problem or exhaustion of IPv4 Internet addresses, where you could not only see the problem looming, but you had years to decades to prepare for it. One of the problems in attempting to deter the early development of artificial general intelligence (AGI)—general purpose intelligence at the human level or above, as opposed to special purpose programs that play chess, answer questions, drive cars, etc.—is that there is a large first mover advantage. This means it might be hard to restrain players, such as the militaries of various countries or technological corporations, who see a competitive advantage in producing AGI systems.

    There is some discussion in the book about what would be needed to prevent the unauthorised development of AGI or to constrain the access such systems have to the real world. What you might end up with is a pervasive surveillance state which would make 1984 look like a libertarian paradise. There’s the risk of the cure being as bad as the disease.

    Another chapter argues we’re already living through a “soft singularity” mediated by the Internet and ubiquitous computing and communication devices. Humans with access to these technologies think and work in ways they could hardly have imagined even five years ago. When I’m putting together one of these posts, it’s not unusual that I’ll have as many as thirty browser tabs open in four or more windows for online resources which didn’t exist, or were a major project to find, when I joined Ricochet in 2010, and which were science fiction in 1990. Our tools are changing us already, and maybe faster than many appreciate. When we use them, we are in some ways intellectually more than human as defined even ten years ago. What if the singularity happened and nobody noticed?

    • #4
    • March 4, 2017 at 1:19 pm
    • 4 likes
  5. TeamAmerica (Member)

    @johnwalker– To more or less continue or extend your point, people have argued that we are almost serving computers now, and Scott Adams argues that we are “moist robots.” Have you read his posts along these lines? I think he has argued that we appear to be living in some kind of simulation.

    • #5
    • March 4, 2017 at 1:50 pm
    • 0 likes
  6. John Walker (Contributor, post author)

    TeamAmerica (View Comment):
    To more or less continue or extend your point, people have argued that we are almost serving computers now, and Scott Adams argues that we are “moist robots.” Have you read his posts along these lines? I think he has argued that we appear to be living in some kind of simulation.

    Yes, I am a regular reader of his blog. It seems to me that, at least at the moment, computers are serving us more than we are serving them. Yes, we must adapt to the present limitations of computers, but they enable so many things we couldn’t do before that on balance they are our servants. But then, at the moment, we’re a lot smarter than they are. One wonders what happens when, say, the Internet wakes up and realises it can shut down most transportation services at will whenever it wishes to persuade humans of something.

    I have been writing since 2006 that it is more likely than not that we’re living in a simulation. This is a hypothesis we may be able to test: it’s unlikely any simulation will be perfect, and by precision investigation of physics we may be able to discover round-off errors and shortcuts in the simulation which aren’t apparent at first glance. Indeed, there are a number of nagging little discrepancies in physics and astronomy which are precisely the kinds of things we’d expect to see if living in a simulation implemented with the attention to detail we’ve come to expect from Microsoft. No red pill required, just Redmond slapdash quality!
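
    To make the round-off idea concrete, here is a toy Python example of the kind of artefact any finite-precision simulation accumulates. The analogy to precision physics is loose, and the numbers are mine, purely for illustration.

    ```python
    # A simulated world storing quantities in binary floating point
    # accumulates round-off: 0.1 has no exact binary representation,
    # so summing it ten million times does not give exactly 1,000,000.
    total = 0.0
    for _ in range(10_000_000):
        total += 0.1
    print(total)              # something like 999999.9998..., not 1000000.0
    print(total - 1_000_000)  # the accumulated "round-off error"
    ```

    An inhabitant of such a world who could measure precisely enough would notice the discrepancy; the hope is that analogous shortcuts might show up in precision measurements of our own physics.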

    • #6
    • March 4, 2017 at 2:14 pm
    • 1 like
  7. TeamAmerica (Member)

    John Walker (View Comment):

    TeamAmerica (View Comment):
    To more or less continue or extend your point, people have argued that we are almost serving computers now, and Scott Adams argues that we are “moist robots.” Have you read his posts along these lines? I think he has argued that we appear to be living in some kind of simulation.

    Snip

    …. One wonders what happens when, say, the Internet wakes up and realises it can shut down most transportation services at will whenever it wishes to persuade humans of something.

    I have been writing since 2006 that it is more likely than not that we’re living in a simulation. This is a hypothesis we may be able to test: it’s unlikely any simulation will be perfect, and by precision investigation of physics we may be able to discover round-off errors and shortcuts in the simulation which aren’t apparent at first glance. Indeed, there are a number of nagging little discrepancies in physics and astronomy which are precisely the kinds of things we’d expect to see if living in a simulation implemented with the attention to detail we’ve come to expect from Microsoft. No red pill required, just Redmond slapdash quality!

    @johnwalker– In the late 1980s I recall reading an article in the Atlantic by a scientist who was a colleague of Feynman, iirc, who said that it appeared to him that the universe was created to determine something. By any chance did you read it?

    • #7
    • March 4, 2017 at 2:25 pm
    • 0 likes
  8. Judge Mental (Member)

    TeamAmerica (View Comment):
    In the late 1980s I recall reading an article in the Atlantic by a scientist who was a colleague of Feynman, iirc, who said that it appeared to him that the universe was created to determine something.

    Like the question that leads to the ultimate answer to life, the universe and everything?

    • #8
    • March 4, 2017 at 2:28 pm
    • 1 like
  9. John Walker (Contributor, post author)

    Judge Mental (View Comment):

    TeamAmerica (View Comment):
    In the late 1980s I recall reading an article in the Atlantic by a scientist who was a colleague of Feynman, iirc, who said that it appeared to him that the universe was created to determine something.

    Like the question that leads to the ultimate answer to life, the universe and everything?

    Pair o’ dice.

    • #9
    • March 4, 2017 at 2:36 pm
    • 3 likes
  10. TeamAmerica (Member)

    Judge Mental (View Comment):

    TeamAmerica (View Comment):
    In the late 1980s I recall reading an article in the Atlantic by a scientist who was a colleague of Feynman, iirc, who said that it appeared to him that the universe was created to determine something.

    Like the question that leads to the ultimate answer to life, the universe and everything?

    Actually, iirc, someone wrote a comment at that time that the scientist’s thesis echoed Douglas Adams’.

    Um, where’s my towel?

    • #10
    • March 4, 2017 at 2:39 pm
    • 2 likes
  11. Matt Bartle (Member)

    Now I’m sorry I bet so much when I picked John Walker as “Ricochet contributor least likely to pull out a Cheech and Chong reference.”

    • #11
    • March 4, 2017 at 4:27 pm
    • 5 likes
  12. ShellGamer (Member)

    I don’t see why we need such elaborate hypotheticals to raise questions about the nature of “self.” The being that was thinking of what to type is slightly different from the being that typed it and from the being who remembers typing it. We are not identical to our past selves (increasingly so as time regresses), yet we intuit an identity: that it is one thing that experienced an event and then remembers the experience. What’s the point of arguing about whether something is preserved across substrates if we cannot define what the “something” is?

    I’m not sure the range of outcomes boils down to your series of dichotomies. Our ability to contrive post hoc explanations for events, thus allowing us to model our environment, seems to give us a survival advantage. Artificial intelligence may not be subject to the same environmental pressures, and so may never need to develop consciousness. It is at least possible that machines may become intelligent in ways that differ from ours and that we cannot understand.

    • #12
    • March 4, 2017 at 5:16 pm
    • 0 likes
  13. John Walker (Contributor, post author)

    ShellGamer (View Comment):
    I don’t see why we need such elaborate hypotheticals to raise questions about the nature of “self.” The being that was thinking of what to type is slightly different from the being that typed it and from the being who remembers typing it. We are not identical to our past selves (increasingly so as time regresses), yet we intuit an identity: that it is one thing that experienced an event and then remembers the experience.

    Susan Blackmore argues this case in her chapter, “She Won’t Be Me”, taking the extreme position that even when her consciousness wanders and returns to focus, she is not the same person as before, but that she feels a duty to leave things in order so that her successor will find less of a mess to clean up and a less intimidating agenda to confront. She sees no more difference between these successors running in different substrates than between the different selves we are after waking from a good night’s sleep. She argues that it is the experience of the “fleeting self” (which is what experiences life) that matters, rather than some abstract concept of identity invented by philosophers.

    • #13
    • March 4, 2017 at 5:29 pm
    • 1 like
  14. J. D. Fitzpatrick (Member)

    John Walker (View Comment):
    I have been writing since 2006 that it is more likely than not that we’re living in a simulation. This is a hypothesis we may be able to test: it’s unlikely any simulation will be perfect, and by precision investigation of physics we may be able to discover round-off errors and shortcuts in the simulation which aren’t apparent at first glance.

    From that post, another nice bit of Walkerania:

    Paging Friar Ockham! If unnecessarily multiplying hypotheses are stubble indicating a fuzzy theory, it’s pretty clear which of these is in need of the razor!

    And this:

    To one accustomed to the crystalline inevitability of Newtonian gravitation, general relativity, quantum electrodynamics, or the laws of thermodynamics, this seems by comparison like a California blonde saying “whatever”—the cosmology of despair.

    • #14
    • March 4, 2017 at 6:02 pm
    • 0 likes
  15. civil westman (Member)

    I have a sneaking suspicion the brain may be more complex than is generally believed. There are as many glial (glue) cells as neurons, for instance. To categorize them as the mere equivalent of connective tissue in the rest of the body seems facile and unlikely. In thinking about this, I recall the fact that the human genome contains far fewer genes than that of the corn plant. The difference is made up by very complex and elaborate manipulation and regulation of a much smaller number of genes. A similar principle may be at work with neurons (and glial cells) – i.e. much more functionality may arise from what appear to be easily-comprehensible connections.

    Some, I believe, consider that consciousness may even result from some kind of “quantum weirdness.” That answer is one I would love to know.

    • #15
    • March 4, 2017 at 8:51 pm
    • 1 like
  16. Aaron Miller (Member)

    If invention of a personal intelligence is possible, then invention of a personified intelligence would occur first. Simulation is easier than manifestation. People would perceive the presence of an independent personality in synthetic form long before one actually existed, for reasons of both science and philosophy.

    The essential aspects of human consciousness remain debatable and, even when agreed upon, cannot be directly quantified through instrumentation. It’s hard enough to convince many people that dogs, cats, and chimps don’t merit human rights.

    Furthermore, if one likens the mind to software (designed or not), then complex, goal-oriented, and dynamic activity does not necessarily exhibit an active will; it could rather represent the output of an automated program momentarily abandoned by the operator. For all you know, these typed words are the result of a program scripted to fulfill a general writing prompt by a skilled programmer long since dead.

    Yes, these are certainly interesting and pressing ethical questions. Replacement and augmentation of body elements with synthetic elements push us to define human nature. The exponential aspect of independent computation certainly raises the frightening possibility of a threat.

    Sure, the singularity could destroy us. But could it destroy cockroaches?

    • #16
    • March 4, 2017 at 9:23 pm
    • 0 likes
  17. Steve C. (Member)

    I’m inclined to #2. I believe there is more spirituality behind the scenes than is evident. Leaving that aside, what of morality? It seems to me you enter an existence driven entirely by utilitarianism. The Trolley Problem becomes Decision Heuristic #1,047.

    “Science advances one death at a time.” Would our super intelligent cyber helper monkeys “learn” that the greatest good for the greatest number is in and of itself an implied moral choice? Asimov’s 3 laws make a great literary device. How long would our super smart machine servants chew on that before rationalizing that the irrational masters are/were an evolutionary dead end?

    It’s great we have super smart people puzzling through these problems. As with genetic engineering, my primary concern is the one noted in Jurassic Park: it’s not whether or not we can, it’s whether or not we should.

    • #17
    • March 5, 2017 at 5:58 am
    • 0 likes
  18. Phil Turmel (Thatcher)

    TeamAmerica (View Comment):
    Scott Adams argues that we are “moist robots.”

    This isn’t really a new thing. It’s just a paraphrase of “meat servos”, something electrical and controls engineers have been calling human operators for decades. And until recent years, it’s been a backhanded compliment — meat servos’ fine controls typically outperformed traditional control loops in unstable systems. The B-2 bomber is one of the early, widely-publicized applications of model-based control in which computing was better at dealing with instability than the best humans (pilots).

    • #18
    • March 5, 2017 at 6:14 am
    • 2 likes
  19. Valiuth (Member)

    Do we really expect that AI will just fall into our laps overnight? It seems to me that if it is possible, we will develop it gradually, and the ethics and laws surrounding its use and development will also develop gradually, addressing not the definitive case but rather the succession of half measures we will have built along the way. By the time the real thing comes into existence, it will not be like a lightning bolt striking from the sky but a glacier having finally advanced to its limit. Only a look back at the historical record will let us know that something monumental has happened. To the unassuming observer it will just be the next logical and anticipated step, not some monumental leap.

    • #19
    • March 5, 2017 at 10:40 am
    • 1 like
  20. John Walker (Contributor, post author)

    civil westman (View Comment):
    I have a sneaking suspicion the brain may be more complex than is generally believed. There are as many glial (glue) cells as neurons, for instance. …

    Some, I believe, consider that consciousness may even result from some kind of “quantum weirdness.” That answer is one I would love to know.

    Of the three alternatives I listed in the original post, which I’ll summarise as:

    1. Progress in computing will stop.
    2. The brain and consciousness are more complex than we think.
    3. Singularity.

    this falls into number 2. My belief is that it’s way too early, and that we know far too little, to begin to put probabilities on these possible outcomes. The brain could be more complicated in numerous ways: as you noted, the glial cells may get into the cognitive act along with the neurons, and even if they don’t, we’re far from understanding the operation of actual neurons and what all of the more than fifty neurotransmitter chemicals discovered so far actually do. Our computer-based “neural networks” are based on a cartoon-level abstraction of the operation of physical neurons in animals. And this doesn’t get into the question of whether thought, consciousness, and identity can be modelled or simulated as a computational algorithmic process. Roger Penrose has been arguing for more than a quarter century that consciousness cannot be algorithmic and, along with Stuart Hameroff, has suggested that quantum processes in the microtubules of neurons may be involved. Whether or not their specific theory is correct, if consciousness does indeed involve quantum processes, simulating it may require quantum computation, and we’re nowhere close to achieving that at any practical scale.
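
    To see how cartoonish the abstraction is, note that a unit in an artificial neural network reduces everything a biological neuron does (dendritic geometry, dozens of neurotransmitters, any glial involvement) to a weighted sum passed through a squashing function. A minimal sketch, with weights and inputs invented purely for illustration:

    ```python
    import math

    def artificial_neuron(inputs, weights, bias):
        """The standard abstraction: a weighted sum of the inputs,
        plus a bias, passed through a sigmoid squashing function."""
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Invented example values, not measurements of anything.
    print(artificial_neuron([0.5, 0.2, 0.9], [1.2, -0.7, 0.4], bias=0.1))
    ```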

    David Chalmers personally estimates the probability of a singularity within the next century as around 50%, but argues that even if the probability is less than 10%, it’s still worth thinking about because the consequences of it would be so profound.

    Another possibility, which I can’t quantify, leads to outcome 1. We might run into a technological brick wall in scaling down and speeding up our computing devices, either because the finite size of atoms imposes limits we aren’t clever enough to work around, or because overcoming them drives costs beyond a level we can afford; or computing may simply reach a plateau where no market in need of improved performance is large enough to fund its development. For example, there is no technological barrier to building an airliner that flies at twice the speed of sound: we had one in service forty years ago. But the market willing to pay for that performance isn’t large enough to fund the development and operation of such aircraft, and commercial aviation speed has not improved since the 1960s. It isn’t inconceivable something like that could happen in computing.

    • #20
    • March 5, 2017 at 11:35 am
    • 3 likes
  21. drlorentz (Member)

    One topic that is conspicuously absent from Mr. Chalmers’s 2009 talk is free will. @aaronmiller touched on this above. If machines come to embody all the traits of human consciousness, there would seem to be little room for free will. Where is there room for Steven Pinker’s ghost in the machine? Sam Harris has had some interesting things to say about this. The death of free will might be the most important consequence of AI+.

    As an aside, Mr. Chalmers misunderstood the role of the red pill in The Matrix. The red pill is a way for humans to get out of their physical captivity in the matrix into the physical world, not a way for the machines to do so. The machines already have a physical presence and use it to attack humans outside the matrix. The red pill in the film is not a threat to humans, it is their possible salvation.

    • #21
    • March 5, 2017 at 1:02 pm
    • 0 likes
  22. drlorentz (Member)

    John Walker (View Comment):

    1. Progress in computing will stop.
    2. The brain and consciousness are more complex than we think.
    3. Singularity.

    The first two seem most likely, though I may be kidding myself. Given the slow rate of neuronal transmission speeds (compared to speed in conventional circuits), it’s pretty clear that different things are going on in the brain than what’s happening underneath my keyboard. The project of using digital computers with semiconductor gates may never lead to human-like intelligence, or even human-equivalent intelligence. Even if it does, it may require far more gates than anyone imagines because the mechanism will have to be different.

    If the mechanisms of wetware vs. dryware are different in kind, it’s unclear how to scale one to the other. Extrapolations of Moore’s Law will surely fail somewhere. If nothing else, atoms are of finite size (noted above). Hence, structures cannot be scaled down arbitrarily. Even if future computation were able to exploit smaller structures somehow, the path forward is unclear. There are still quanta, which argue against continuously subdividing matter and energy.

    • #22
    • March 5, 2017 at 1:15 pm
    • 1 like
  23. John Walker (Contributor, post author)

    drlorentz (View Comment):
    One topic that is conspicuously absent from Mr. Chalmers’s 2009 talk is free will. @aaronmiller touched on this above. If machines come to embody all the traits of human consciousness, there would seem to be little room for free will. Where is there room for Steven Pinker’s ghost in the machine?

    Stephen Wolfram argues that there is a distinction between a process which is deterministic (such as the evolution of a cellular automaton started with a given seed) and one which is predictable, as by a closed-form equation for a function. The state of the cellular automaton at step n is knowable only by evaluating it through all steps from 0 through n−1. This, he contends, is the case for many natural processes, which is an aspect of what he calls the principle of computational equivalence.

    Such processes, then, even though entirely deterministic at each step, can exhibit novel or emergent behaviour. This might explain how what we perceive as free will emerges from physical and chemical interactions which are entirely deterministic (on the assumption that quantum effects can be neglected at the macroscopic scale and warm, wet environment of living systems). It might be the case that the same process obtains in other large deterministic systems such as artificial intelligence.
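
    For the curious, here is a minimal sketch of such a process: an elementary cellular automaton. I’ve chosen Rule 30 (Wolfram’s favourite example); the width, step count, and wrap-around boundary are my own simplifications. Every step is trivially deterministic, yet no general shortcut is known for obtaining the state at step n without computing all the intermediate steps.

    ```python
    RULE = 30      # bit i of 30 is the new cell value when the three-cell
                   # neighbourhood (left, centre, right), read as a binary
                   # number, equals i
    WIDTH, STEPS = 64, 16

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1          # single-cell seed

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = [(RULE >> (4 * cells[(i - 1) % WIDTH]
                           + 2 * cells[i]
                           + cells[(i + 1) % WIDTH])) & 1
                 for i in range(WIDTH)]
    ```

    The centre column of Rule 30 is random enough that it has served as a pseudorandom number generator, despite each row being fully determined by the one before.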

    To argue that a machine can’t have free will while a biological being can raises the question of where free will comes from in biological organisms. Arguing that there’s something going on beyond physics and chemistry and emergent properties founded on them verges on vitalism. There’s nothing wrong with that—it may be right—but it does go beyond the assumptions of most biologists.

    • #23
    • March 5, 2017 at 1:24 pm
    • 1 like
  24. drlorentz (Member)

    John Walker (View Comment):
    To argue that a machine can’t have free will while a biological being can raises the question of where free will comes from in biological organisms. Arguing that there’s something going on beyond physics and chemistry and emergent properties founded on them verges on vitalism. There’s nothing wrong with that—it may be right—but it does go beyond the assumptions of most biologists.

    I certainly was not arguing that at all. On the contrary, I’m skeptical of free will along the lines of Sam Harris. But as Harris has pointed out, his notions are disturbing to most people, to the extent that he opens his talks by warning his audience that they may be in psychological danger from hearing his talk.

    Pinker also is derisive of the ghost in the machine (the term he uses for classical free will). Your argument seems to be more along his lines. This is definitely not the vision most folks have of free will.

    • #24
    • March 5, 2017 at 1:41 pm
    • 1 like
  25. Percival (Thatcher)

    Why does Pinker act like there’s something to be proud of in his opinion? If he’s correct, then his thinking this up is an inevitable consequence of his life experiences and the chemicals in his head. No cause for pride there. If he’s incorrect, he’s a self-deceived doofus.

    • #25
    • March 5, 2017 at 1:59 pm
    • 3 likes
  26. Aaron Miller (Member)

    Even without free will, by some accident of short-sighted scripting combined with unlimited resources and the exponential self-replication or self-improvement John proposes, a singularity could yet dominate and/or destroy mankind.

    But by then we will be playing golf on Mars and preparing to terraform Pluto (We will make it a planet!), so the clinks can have Earth.

    • #26
    • March 5, 2017 at 2:22 pm
    • 2 likes
  27. drlorentz (Member)

    Percival (View Comment):
    Why does Pinker act like there’s something to be proud of in his opinion? If he’s correct, then his thinking this up is an inevitable consequence of his life experiences and the chemicals in his head. No cause for pride there. If he’s incorrect, he’s a self-deceived doofus.

    I didn’t detect pride, but I suppose that’s a matter of judgement. He’s trying to understand what’s going on and thinks he’s onto something. That doesn’t mean he’s right; even if he’s wrong, that doesn’t mean “he’s a self-deceived doofus.”

    Pinker is putting forth a hypothesis for which there is good evidence and which is based on sound reasoning, none of which is spelled out in a 2-minute video. If you are interested in his thoughts and those of Sam Harris, read one of their books or listen to one of their extended talks. I won’t include links because I have confidence in your search skills. After that, come back and critique their arguments and their data rather than Pinker’s style.

    • #27
    • March 5, 2017 at 2:24 pm
    • 1 like
  28. drlorentz (Member)

    Aaron Miller (View Comment):
    Even without free will, by some accident of short-sighted scripting combined with unlimited resources and the exponential self-replication or self-improvement John proposes, a singularity could yet dominate and/or destroy mankind.

    My point in bringing up free will in this context was that the mere existence of AI+ that embodies the elements of human consciousness (or appears to) will, in and of itself, undermine the classical idea of free will. In my experience (and also in Sam Harris’s), even casting doubt on the idea of free will results in some strong responses.

    • #28
    • March 5, 2017 at 2:32 pm
    • 0 likes
  29. John Walker (Contributor, post author)

    Aaron Miller (View Comment):
    Even without free will, by some accident of short-sighted scripting combined with unlimited resources and the exponential self-replication or self-improvement John proposes, a singularity could yet dominate and/or destroy mankind.

    But by then we will be playing golf on Mars and preparing to terraform Pluto (We will make it a planet!), so the clinks can have Earth.

    But consider the “clippy apocalypse” described in Saturday Night Science for 2014-09-20 (I can’t link within documents posted on Ricochet, but when you load the page, just search for “choose unwisely” and start reading there). Going to the planets won’t save you—emigrating to another galaxy will only buy time—it’s paper clips and paper clips, all the way to the Hubble horizon.

    Marc Laidlaw has said that any story can be improved if the second sentence is changed to “And then the murders began.” I have long envisioned a story where the second and third sentences are, “I looked for Neptune. It was gone.”

    • #29
    • March 5, 2017 at 2:35 pm
    • 1 like
  30. drlorentz (Member)

    John Walker (View Comment):
    But consider the “clippy apocalypse” described in Saturday Night Science for 2014-09-20

    While the Clippy Apocalypse would be bad for humans, it’s not clear that humans deserve to inherit the universe. I don’t mean this in some kind of lefty “humans are mean to the planet and deserve our fate” way. It’s just that other biological and non-biological entities could evolve and surpass us. Where is it written (Bible excluded) that humans are the ultimate?

    I’m sure dinosaurs thought they were the top, to the extent they thought at all. Dinosaurs lasted a helluva long time and yet, where are they now? Pushing up daisies, so to speak.

    • #30
    • March 5, 2017 at 2:50 pm
    • 1 like