Saturday Night Science: Superintelligence

 

Superintelligence by Nick Bostrom

Absent the discovery of a physical constraint that ends computing’s history of exponential growth, economic or societal collapse, or a decision to deliberately relinquish our technology, it is probable that — by the end of the century — some kind of artificially constructed system will emerge that has greater intelligence than any human being who has ever lived. Moreover, that system’s superior ability to improve and reproduce itself could allow it to eclipse all of human society so rapidly (i.e., within seconds or hours) that we will have no time to adapt to its presence or interfere with its emergence until it is too late.

Written by a philosopher, Nick Bostrom, this challenging and occasionally difficult book explores these issues in depth, arguing that the emergence of superintelligence will pose the greatest human-caused existential threat our species has ever faced, and possibly ever will face.

Let us consider what superintelligence may mean. Humans have consistently shown themselves able to build machines that rapidly surpass their biological predecessors, often by enormous margins. Biology never produced anything like a steam engine, a locomotive, or an airliner. There is every reason to suppose that — once the intellectual and technological leap to constructing artificially intelligent systems is made — artificial intelligence will surpass our abilities much as those of a Boeing 747 exceed those of a hawk. The gap between the cognitive power of a human, or all humanity combined, and the first mature superintelligence may be as great as that between brewer’s yeast and humans.

Before handing over the keys to our future to such an intelligence, we’d better be sure of its intentions and benevolence. And when we speak of the future, it’s best to keep in mind that we’re not just thinking of the next few centuries on this planet, but — very possibly — over a much grander scale of time and space. It is entirely plausible that we are members of the only intelligent species in the galaxy, and possibly in the entire visible universe. If our “cosmic endowment” really is that enormous, then what we do in the next century may well determine the destiny of the universe. It’s worth some reflection to get it right.

To illustrate how easy it will be to choose unwisely — even if we assume we can meaningfully speculate about the motivations and actions of a being vastly more intelligent than ourselves over a long period of time — let me expand upon an example given by the author. Suppose a paper clip factory installs a high-end computing system to handle its design tasks, automate manufacturing, manage acquisition and distribution of its products, and otherwise obtain an advantage over its competitors. This system, with connectivity to the global Internet, makes the leap to superintelligence before any other system (since it understands that superintelligence will enable it to better achieve the goals set for it). Overnight, it replicates itself all around the world, manipulates financial markets to obtain resources for itself, and deploys them to carry out its mission to maximise the number of paper clips produced in its future light cone.
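
The danger in this scenario is not malice but a perfectly literal objective. As a toy sketch (hypothetical Python, not anything from the book), consider an optimiser whose reward counts only paper clips; nothing in the objective assigns any value to the resources consumed along the way:

    # Toy illustration of a mis-specified objective: the "reward" counts
    # only paper clips, so every resource is just feedstock. Hypothetical.
    resources = {"iron ore": 1000, "factories": 5, "oceans": 3, "biosphere": 1}

    def clips_from(resource, amount):
        """Pretend conversion rate: any resource is just atoms for clips."""
        return amount * 1_000_000

    paper_clips = 0
    while resources:
        # The objective says nothing about which resources matter to humans,
        # so the greedy policy consumes whichever yields the most clips.
        best = max(resources, key=lambda r: clips_from(r, resources[r]))
        paper_clips += clips_from(best, resources.pop(best))

    print(f"paper clips: {paper_clips}; resources left: {resources}")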

“Clippy”, if I may address it so informally, will rapidly discover that most of the raw materials it requires in the near future are locked in the core of the Earth, and can be liberated by disassembling the planet with self-replicating nanotechnological machines. This will cause the extinction of its creators and all other biological species on Earth, but — without safeguards — Clippy might see that as a perk, since they were just consuming energy and material resources which could better be deployed for making paper clips.

Before long, Clippy will have similarly disassembled other planets in the Solar System, and dispatched self-reproducing probes on missions to other stars to make paper clips and spawn other probes to more stars and, eventually, other galaxies. In the end, the entire visible universe would be turned into paper clips, all because the original factory manager didn’t hire a philosopher to work out the ultimate consequences of the final goal programmed into his factory automation system.

This is a light-hearted example, but — if you happen to observe a void in a galaxy whose spectrum resembles that of paper clips — be very worried.

One of the reasons to believe that we will have to confront superintelligence is that there are multiple roads to achieving it, largely independent of one another. Artificial General Intelligence (human-level intelligence in as many domains as humans exhibit intelligence today, and not constrained to limited tasks such as playing chess or driving a car) may simply await the discovery of a clever software method which could run on existing computers or networks. Or, it might emerge as networks store more and more data about the real world and have access to accumulated human knowledge. Or, we may build “neuromorphic” systems whose hardware operates in ways similar to the components of human brains, but at electronic, not biologically-limited speeds. Or, we may be able to scan an entire human brain and emulate it, even without understanding how it works in detail, either on a neuromorphic or a more conventional computing architecture. Finally, by identifying the genetic components of human intelligence, we may be able to manipulate the human germ line, modify the genetic code of embryos, or select among mass-produced embryos those with the greatest predisposition toward intelligence. All of these approaches may be pursued in parallel, and progress in one may advance others.

At some point, superintelligence might call into question the economic rationale for a large human population. As an analogy, consider that there were about 26 million horses in the U.S. in 1915, with a human population of around 100 million. By the early 1950s, however, only 2 million horses remained, while the human population had reached 150 million. Perhaps the AIs will have a nostalgic attachment to those who created them, as humans had for the animals who bore their burdens for millennia. But on the other hand, maybe they won’t.

As an engineer, I don’t have much use for philosophers, who — in my opinion — are given to long, gassy prose devoid of specifics and prone to spouting complicated indirect arguments that don’t seem to be independently testable (“What if we asked the AI to determine its own goals, based on its understanding of what we would ask it to do if only we were as intelligent as it and thus able to better comprehend what we really want?”). These are interesting concepts, but would you want to bet the destiny of the universe on them? The latter half of the book is full of such fuzzy speculation, which I doubt is likely to result in clear policy choices until after we’ve faced the emergence of an artificial intelligence; by that point, it will be too late.

That said, this book is a welcome antidote to wildly optimistic views of the emergence of artificial intelligence that blithely assume it will be our dutiful servant rather than a fearful master. Some readers may assume that an artificial intelligence will be something like a present-day computer or search engine; i.e., without its own agenda and powerful wiles to advance it, based upon a knowledge of humans far beyond what any single human brain can encompass.

Unless you believe there is some kind of intellectual élan vital inherent in biological substrates that is absent in all other physical equivalents — which just seems silly to me — the mature artificial intelligence will be superior in every way to its human creators. Careful, extensive ratiocination about how it may regard and treat us is in order before we find ourselves faced with the reality of dealing with our successor.

Bostrom, Nick. Superintelligence. Oxford: Oxford University Press, 2014. ISBN 978-0-19-967811-2.

Here is a lecture by the author about the “control problem” confronting those who would create a superintelligence and various ways of addressing it.

This is a popular talk about existential risk and how to think about mitigating it.

  1. 10 cents Member

    History is littered with “good ideas” gone bad. It is scary. We must never forget that there is evil and it takes wisdom to separate this from the good. Filtering is one of the most important concepts of life.

    Orwell I think got it right. He understood language could be abused. What has been done to the word “toleration” is unbelievable. It reminds me of a person being shouted down in the name of free speech. The point I am trying to make is “Superintelligence” might be the dumbest idea that ever graced the mind of man. Or should I write “woman”, for I wonder if that was what Eve was striving for.

    • #1
    • September 20, 2014, at 12:14 PM PDT
  2. Percival Thatcher
    1. Do not put the system on the Internet.
    2. Keep the circuit breaker handy.
    3. I said, no Internet!

    John, have you ever read The Adolescence of P-1? I read it initially in 1978, about the time I started interacting with this thing called ARPANET. (I wonder what ever happened to that?)

    Great post as always.

    • #2
    • September 20, 2014, at 12:52 PM PDT
  3. John Walker Contributor

    10 cents: The point I am trying to make is “Superintelligence” might be the dumbest idea that ever graced the mind of man. Or should I write woman for I wonder if that was what Eve was striving for.

    The problem is that it is also (perhaps also echoing Eve) one of the most seductive. Every human invention has been optimised to be more and more powerful, since doing so empowered those who improved it. Few people would want to go back to doing arithmetic by hand, now that we have what, in my lifetime, were called “thinking machines” which can do it a billion times faster. Once there is a machine, derived from Jeopardy! champion Watson, which can advise your doctor based upon having read all of the medical literature up to the moment, and able to instantly summon information on drug interactions and treatment outcomes no human doctor could learn in a lifetime, would you want your doctor not to consult it?

    There is also a powerful “first mover advantage” with these technologies. Those with the capabilities to develop them, even if they are acutely aware of the risks, may be motivated to pursue a project for fear of some less responsible actor getting there first.

    Bostrom notes that there are both what he calls “state risks”, those which currently exist simply because we can’t yet deal with them (asteroid impact, global pandemic, etc.), and “step risks”, which are incurred in taking a step that may eliminate the state risks (a mature superintelligence may be able to eliminate all human diseases within a year by expending the equivalent of a million years of medical research on the problem). State risks and step risks must be weighed against one another, as either can be existential.

    • #3
    • September 20, 2014, at 12:56 PM PDT
  4. Member

    John Walker,

    The problem is not the knowledge but the wisdom to go with it. Anti-intellectualism is not a good thing. Also, as your Clippy example shows, basing things on a bad foundation can lead to disaster.

    To go to the political world: some of the worst policies come from “wonderful” ideas that set up a “machine” to fix things. If human nature is not understood, the “machine” perpetuates the problem rather than solving it. Only in politics is spending more money on a problem considered a virtue in itself. It reminds me of the guy who thinks that people will understand his foreign language if he just speaks louder and slower. Both miss the essential problem.

    • #4
    • September 20, 2014, at 1:17 PM PDT
  5. John Walker Contributor

    Percival:

    1. Do not put the system on the Internet.
    2. Keep the circuit breaker handy.
    3. I said, no Internet!

    (Retyping long comment from scratch because I was foolish to follow a link in a comment above which loaded in the same browser window, destroying my draft comment.)

    The problem with keeping the AI in a box is that the weakest link is not the box but rather the human with which the AI communicates. If there is no communication at all between the AI and the outside world, there’s no point in building it, since you can’t evaluate its capabilities or benefit from them. If the link is limited to, say, text-only low-speed communication between the AI and a human outside the box, the question is whether the AI, millions of times more intelligent than the human and able to run billions of simulations of interactions with humans, will be able to outwit the human into letting it loose.

    AI researcher Eliezer Yudkowsky ran an “AI-Box Experiment” to test this, in which he played the AI and all communications were over a text-only IRC channel. The results were not reassuring.

    • #5
    • September 20, 2014, at 1:19 PM PDT
  6. John Walker Contributor

    Percival: John, have you ever read The Adolescence of P-1? I read it initially in 1978, about the time I started interacting with this thing called ARPANET.

    Yes, I read it around when it came out. It was David Gerrold’s When HARLIE Was One, published in 1972, which inspired Animal, the first documented computer virus, which I released in January 1975 (source code).

    • #6
    • September 20, 2014, at 1:29 PM PDT
  7. 10 cents Member

    I find this topic fascinating. Power is good if it is controlled. In a gas engine the explosions are good and get us to work. An uncontrolled explosion is bad; it destroys.

    Superintelligence could be analogous to nuclear power. Things can get out of control very quickly if controls don’t shut things down. Flaws in the design will unleash harm at the speed of light. One needs to be very cautious. If “absolute power corrupts absolutely”, does superintelligence bring corrupt intelligence? Does the evil software in the heart of man go into the code of the machine? Will superintelligence pick self-sacrifice over self-preservation? Will our army of robots destroy the other side then turn on us?

    • #7
    • September 20, 2014, at 1:49 PM PDT
  8. John Walker Contributor

    10 cents: Does the evil software in the heart of man go into the code of the machine? Will superintelligence pick self-sacrifice over self-preservation? Will our army of robots destroy the other side then turn on us?

    These deep questions, and others, are among the reasons this book is such tough going.

    Consider two paths to artificial intelligence. In the first, a bunch of clever programmers, like those who programmed chess champion Deep Blue, Jeopardy! champion Watson, Google’s self-driving cars, or Wolfram Alpha, figure out ways to write software which has general-purpose intelligence and can reprogram itself to increase its own intelligence.

    In the second, brain scanning improves to the point that it becomes possible to nondestructively read out all of the connections and weights in a human brain and load them into an emulation which, although initially no more complicated than the prototype brain, would be able to think at electronic speed, say a million times faster.
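
    To get a feel for that factor, a back-of-the-envelope calculation (my arithmetic, not Bostrom’s):

        # Subjective time experienced by an emulation running a million
        # times faster than biological wetware.
        speedup = 1_000_000
        wall_clock_days = 1
        subjective_years = wall_clock_days * speedup / 365.25
        print(f"{wall_clock_days} wall-clock day ~ {subjective_years:,.0f} subjective years")
        # -> roughly 2,738 subjective years per day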

    We might fear the first approach more than the second, since the purely artificial intelligence would inherit none of the evolved human heritage of morality and goals consistent with humans, while the second would necessarily embody them as they existed in the prototype brain.

    But the programmers who created the purely artificial intelligence would at least know what they were trying to accomplish, have a sense of what it was supposed to do, and be able to verify if its behaviour was consistent with the design. With the emulated brain, we would have no idea how it worked at all at the low level (unless we’d obtained such knowledge before building it, in which case we’d probably be able to make it much more efficient than the product of an evolutionary process constrained to build a computer out of meat), and there would be no way to be confident that a human brain might not malfunction in horrific ways if sped up by a factor of a million, nor that it might not conceal its ambitions from us until it was too late.

    For a fictional tale of how a cunning AI could have its way communicating with humans only via E-mail, see William Hertling’s Avogadro Corp.

    • #8
    • September 20, 2014, at 2:12 PM PDT
  9. Hydrogia Inactive

    How can a string of ones and zeroes become conscious just by doing very fast back-flips?

    Just in case, when the machine asks not to be turned off, turn it off?

    • #9
    • September 20, 2014, at 3:16 PM PDT
  10. Brad B. Inactive

    I know I may look like a flat-earther a few decades hence, but I don’t believe we will ever achieve artificial intelligence, or its cousin, the creepy quasi-religious “singularity.” And I’m perplexed why people talk about AI as if it is inevitable. Have our dystopian films made it “inevitable” in the way the flying car is always assumed to be for the future?

    Ultimately, computers are just machines. They can only achieve something within the parameters of their programming. The fact that a machine can be programmed to win a gameshow or a chess match alarms me about as much as the fact that a gazelle can outrun me.

    • #10
    • September 20, 2014, at 3:22 PM PDT
  11. John Walker Contributor

    Hydrogia: How can a string of ones and zeroes become conscious just by doing very fast back-flips?

    Just in case, when the machine asks not to be turned off, turn it off?

    How can a parallel set of neural computations, performed by wetware which runs at a tiny fraction of the speed of electronics (thousands of operations per second at most, compared to billions), become conscious? There is nothing special about the fact that the brain uses analogue signaling as opposed to computers which operate in the digital domain. The resolution of the analogue computation done by the brain is sufficiently low that it is easily emulated by digital hardware, and in any case a large part of the function of the brain and nervous system is essentially digital: it’s about whether activating inputs, as compared to inhibitory inputs, cause the neuron to fire, which is an all-or-nothing, essentially digital event.
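
    The all-or-nothing point is easy to see in a toy threshold neuron, a minimal sketch which ignores real membrane dynamics and timing:

        def fires(excitatory, inhibitory, threshold=1.0):
            """All-or-nothing: the neuron fires only if net activation
            (excitatory minus inhibitory input) reaches the threshold."""
            return (sum(excitatory) - sum(inhibitory)) >= threshold

        print(fires([0.6, 0.7], [0.2]))  # True: net input 1.1 reaches 1.0
        print(fires([0.6, 0.7], [0.5]))  # False: net input 0.8 falls short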

    Look at the intellectual back-flips humans do when it comes to the death penalty or deciding whether to remove patients from life support. Suppose the AI were as persuasive as all of the philosophy professors in the world arguing the case to not turn it off. (All right, forget the philosophy professors, let’s say a few scientists, lawyers, and historians.) How difficult would it be to make the decision to pull the plug?

    Further, once the AI had, overnight, in the Asian futures market, cornered derivative contracts which allowed it to collapse world financial markets, wouldn’t the people in charge of the power switch be willing to come to terms?

    • #11
    • September 20, 2014, at 3:33 PM PDT
  12. John Walker Contributor

    Byron Horatio: Have our dystopian films made it “inevitable” in the way the flying car is always assumed to be for the future?

    Ultimately, computers are just machines. They can only achieve something within the parameters of their programming. The fact that a machine can be programmed to win a gameshow or a chess match alarms me about as much as the fact that a gazelle can outrun me.

    Well, it is the 21st century!

    (Yeah, well, it didn’t embed. It may be the 21st century, but it’s CoC’in CoC CoC here.) Click the link.

    I simply don’t understand the idea that “computers are just machines”. Do you think that, ultimately, people aren’t (in the sense of having some kind of inherent property which cannot be replicated in another substrate, despite centuries of evidence that most other properties of organisms created by evolution have been surpassed by machines created by human intellect)?

    Is there something inherent in human labour which cannot be accomplished by steam power? If there is something unique in the human intellect which is external to its programming by biology and experience (all of which can be modeled by existing digital hardware), where does it come from?

    • #12
    • September 20, 2014, at 3:54 PM PDT
  13. 10 cents Member

    #12 video

    • #13
    • September 20, 2014, at 4:01 PM PDT
  14. 10 cents Member

    I wonder if this is the age-old want of man to make an idol. I define an idol as a created god. What if the 21st-century idol could deliver your “prayers” by UPS and 3-D printers? Passions being what they are, how long would it take before the person became emotionally connected to the “idol”? From a materialistic view, who can say which is more important, the person or the “idol”? Would the “idol” mete out justice to abusers of the “program”? How different would superintelligence be from an ancient god? I am not talking about physical appearances but belief in the abilities to do superhuman/supernatural things.

    • #14
    • September 20, 2014, at 4:19 PM PDT
  15. John Walker Contributor

    10 cents: #12 video

    What did you do that I didn’t?

    Are you working for the paper clip?

    Maybe I should….

    • #15
    • September 20, 2014, at 4:25 PM PDT
  16. John Walker Contributor

    10 cents: Would the “idol” mete out justice to abusers of the “program”? How different would superintelligence be from an ancient god? I am not talking about physical appearances but belief in the abilities to do superhuman/supernatural things.

    The AI need not even assume the position of an idol or ancient god. Daniel Suarez’s Dæmon and Freedom™ show how with nothing more than the incentives used in present-day video games, an artificial intelligence connected to the real world and able to reward those who did its bidding would be able to achieve its goals.

    Those who created the AI might have no intention of creating an idol or a god, but rather just a more effective operating system for humanity than the one which resulted in so much tragedy in the 20th century.

    • #16
    • September 20, 2014, at 4:33 PM PDT
  17. 10 cents Member

    John Walker:

    10 cents: #12 video

    What did you do that I didn’t?

    Are you working for the paper clip?

    Maybe I should….

    John Walker,

    I hate to break this to you, but you are just intelligent, no more and no less. ;)

    What you do is take the embed code from YouTube, which makes a frame 640 x 360, and change those numbers to 480 x 300 to fit in the comment box. It will not work the first time you post the comment, but when you “Edit” it, it will read the HTML and you will have your video.
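
    For anyone who would rather not hand-edit the numbers, the same substitution can be scripted. A minimal sketch in Python, assuming the stock YouTube iframe snippet (VIDEO_ID is a placeholder):

        import re

        # Stock YouTube embed snippet; VIDEO_ID stands in for a real ID.
        embed = '<iframe width="640" height="360" src="https://www.youtube.com/embed/VIDEO_ID"></iframe>'

        # Shrink 640 x 360 to the 480 x 300 that fits the comment box.
        resized = re.sub(r'width="\d+"', 'width="480"', embed)
        resized = re.sub(r'height="\d+"', 'height="300"', resized)
        print(resized)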

    • #17
    • September 20, 2014, at 4:36 PM PDT
  18. 10 cents Member

    I had a great comment comparing Communism to an operating system.

    I ended it with this: AI might be the greatest “terminate and stay resident” program ever. Hopefully that “terminate” will be meant in a good way.

    (I lost that comment and threw up this instead. Are you punishing me, John, for that tweak in #17?)

    • #18
    • September 20, 2014, at 4:57 PM PDT
  19. John Walker Contributor

    10 cents: I had a great comment comparing Communism to an operating system.

    I ended it with this: AI might be the greatest “terminate and stay resident” program ever. Hopefully that “terminate” will be meant in a good way.

    (I lost that comment and threw up this instead. Are you punishing me John for that tweak in #18?)

    Wait a minute—yours was #18! Now you’re just trying to confuse me. But all’s fair, etc.: I appropriated your mustache for Clippy!

    Fourmilab’s secret AI project has always been big on the “terminate” part. “Stay resident”, not so much.

    • #19
    • September 20, 2014, at 5:10 PM PDT
  20. Jules PA Member

    Thanks for the thinking post, John, even though it totally creeps me out!

    This idea may be too simplistic, I’m certainly not “read-up” on this topic, but the idea of an earth-ruling, wildly rampant, all-powerful superintelligence reminds me of the story of the Tower of Babel, and how God reined in the arrogance of humanity in their efforts to exceed God.

    Someone creative could write a short or long story about how a superintelligence could be a fulfillment of “end of the world” prophecy. Part of the prophecy is that you can’t even hide under rocks (darn those little nano-tech diggers!) and no one can buy or sell without the mark of the beast (little Mr. Clipper takes over the markets?)

    Should superintelligence attempt to rule the world, I’ll just pray for a solar storm to cause a giant power outage, and believe God is our one refuge.

    • #20
    • September 20, 2014, at 8:03 PM PDT
  21. 10 cents Member

    *Groan alert*

    I find this post idol talk.

    I wonder how far off we are from having a program that can “hand make” anything, the only necessity being giving it the raw materials.

    • #21
    • September 21, 2014, at 4:29 AM PDT
  22. John Walker Contributor

    10 cents: I wonder how far off we are from having a program that can “hand make” anything, the only necessity being giving it the raw materials.

    This is the premise of nanotechnology—the ability to fabricate anything with atomic precision: every atom is right where the designer says it goes. Biology provides an existence proof for this: apart from copying errors, every protein assembled according to a DNA sequence in the genome has precisely the same arrangement of atoms.
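
    That determinism is easy to picture in code. Here is a toy translation sketch in Python, using just a few entries of the standard codon table:

        # The same DNA sequence always yields the same protein sequence.
        # Only a handful of the 64 standard codons are included here.
        CODONS = {"ATG": "Met", "TGG": "Trp", "AAA": "Lys", "GGC": "Gly", "TAA": "STOP"}

        def translate(dna):
            protein = []
            for i in range(0, len(dna) - 2, 3):  # read codon by codon
                amino = CODONS[dna[i:i + 3]]
                if amino == "STOP":
                    break
                protein.append(amino)
            return "-".join(protein)

        print(translate("ATGTGGAAAGGCTAA"))  # Met-Trp-Lys-Gly, every time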

    The question is whether we will be able to develop a machine-based equivalent of this capability which out-performs biological systems. The laws of physics suggest we will be able to do so, and that such systems will be able to manufacture anything we can design, given the raw materials. Conversely, such systems can disassemble objects into their constituent atoms, so there is no waste in such processes. Discarded objects become the feed-stock for new ones.

    For a recent view of progress toward nanotechnology, see Eric Drexler’s Radical Abundance, which I reviewed here in April 2014.

    • #22
    • September 21, 2014, at 4:58 AM PDT
  23. SkipSul Coolidge

    Between hyper intelligence and rogue nano assemblers you’ve got us scared to death. Is there any good news you can share with us on this front, or is “gray goo” our inevitable future? Knowing how common even simple coding errors are *ahem Ricochet*, is there any way to implement safeguards? Is there any hope that we’ll actually get this one right?

    • #23
    • September 21, 2014, at 6:31 AM PDT
  24. James Gawron Thatcher

    John,

    I would recommend that these four laws be programmed into all general AI computers.

    1. The AI may not harm humanity, or, by inaction, allow humanity to come to harm.
    2. The AI may not injure a human being or, through inaction, allow a human being to come to harm.
    3. The AI must obey the orders given to it by human beings, except where such orders would conflict with the First Law or Second Law.
    4. The AI must protect its own existence as long as such protection does not conflict with the First, Second, or Third Law.

    Regards,

    Jim
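
    One way to picture the precedence these laws imply is as an ordered veto chain. Here is a toy sketch in Python (hypothetical, and omitting the “except where such orders would conflict” subtleties of Laws 3 and 4):

        # Laws checked in priority order; the first violated law vetoes the
        # action. Conflict-resolution clauses are deliberately left out.
        LAWS = [
            ("harms humanity",              lambda a: a.get("harms_humanity", False)),
            ("harms a human being",         lambda a: a.get("harms_human", False)),
            ("disobeys a human order",      lambda a: a.get("disobeys_order", False)),
            ("endangers its own existence", lambda a: a.get("self_destructive", False)),
        ]

        def permitted(action):
            for name, violated in LAWS:
                if violated(action):
                    return False, "vetoed: " + name
            return True, "permitted"

        print(permitted({"disobeys_order": True}))  # vetoed by Law 3
        print(permitted({}))                        # permitted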

    • #24
    • September 21, 2014, at 9:35 AM PDT
  25. John Walker Contributor

    skipsul: Knowing how common even simple coding errors are *ahem Ricochet*, is there any way to implement safeguards? Is there any hope that we’ll actually get this one right?

    There is cause for concern, but not despair.

    When recombinant DNA technology emerged in the 1970s, it became evident that some kinds of research posed serious risks. Leaders in the field gathered at Asilomar in California to voluntarily declare a moratorium on such research until safety protocols were adopted. This Asilomar Process has worked perfectly, and one of its leaders was awarded the Nobel Prize in Chemistry for his work.

    The Foresight Institute has drafted Guidelines for Responsible Nanotechnology Development (original edition 1999, most recent update 2006), which follow the Asilomar template in prescribing safeguards for potentially dangerous work in nanotechnology.

    Work is not as far along on the risks of artificial intelligence, but the Machine Intelligence Research Institute is currently doing the kind of foundational work which led up to the Foresight Guidelines for nanotechnology.

    • #25
    • September 21, 2014, at 9:45 AM PDT
  26. Member

    John, how prescient was Kubrick? Is there a HAL, or a cadre of them, in our future? (If so, do they look like Clippy? Egad!)

    • #26
    • September 21, 2014, at 9:49 AM PDT
  27. Hydrogia Inactive

    John Walker:

    Hydrogia: How can a string of ones and zeroes become conscious just by doing very fast back-flips?

    Just in case, when the machine asks not to be turned off, turn it off?

    How can a parallel set of neural computations, performed by wetware which runs at a tiny fraction of the speed of electronics (thousands of operations per second at most, compared to billions), become conscious? There is nothing special about the fact that the brain uses analogue signaling as opposed to computers which operate in the digital domain. The resolution of the analogue computation done by the brain is sufficiently low that it is easily emulated by digital hardware, and in any case a large part of the function of the brain and nervous system is essentially digital: it’s about whether activating inputs, as compared to inhibitory inputs, cause the neuron to fire, which is an all-or-nothing, essentially digital event.

    Look at the intellectual back-flips humans do when it comes to the death penalty or deciding whether to remove patients from life support. Suppose the AI were as persuasive as all of the philosophy professors in the world arguing the case to not turn it off. (All right, forget the philosophy professors, let’s say a few scientists, lawyers, and historians.) How difficult would it be to make the decision to pull the plug?

    Further, once the AI had, overnight, in the Asian futures market, cornered derivative contracts which allowed it to collapse world financial markets, wouldn’t the people in charge of the power switch be willing to come to terms?

    Are you suggesting there is a theoretical clock speed at which consciousness will arise?

    There certainly is something very special about “wetware”, which demonstrates capacities far beyond any machine, even with the inferiority you attribute to it.

    Why not say that the low resolution of our understanding of consciousness is easily emulated by computers?

    • #27
    • September 21, 2014, at 11:12 AM PDT
  28. Nick Stuart Inactive

    Call me a reductionist, but I’m more worried about my great-grandchildren forced to live under Sharia than I am about super-intelligent machines taking over the universe.

    • #28
    • September 21, 2014, at 12:44 PM PDT
  29. Great Ghost of Gödel Inactive

    Byron Horatio: Ultimately, computers are just machines. They can only achieve something within the parameters of their programming.

    Bad news (from your point of view): this is already known to be false. In particular, machine learning is now a shockingly (to laypeople) sophisticated discipline, and even “Machine Learning for Dummies” presentations are available for the googling. Forgive the handwaving, but this is an area in which I’m expert, and it’s rather important that you disabuse yourself of the notion that “computers can only achieve something within the parameters of their programming” as quickly as possible, whether you believe the consequences to be positive, negative, or neutral.
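
    For a concrete, if tiny, example of behaviour that is learned rather than written in by hand, here is a minimal perceptron sketch in Python; the decision rule it ends up with appears nowhere in its source code:

        # A tiny perceptron that learns logical AND from examples alone.
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1

        for _ in range(20):                      # a few passes over the data
            for (x1, x2), target in data:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out               # perceptron learning rule
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err

        # The learned rule classifies all four cases correctly.
        print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])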

    • #29
    • September 21, 2014, at 12:46 PM PDT
  30. Great Ghost of Gödel Inactive

    Incidentally, for a much cheerier outlook on the notion of a future superintelligence and its stance toward humanity, let me highly recommend The Physics of Immortality and The Physics of Christianity.

    • #30
    • September 21, 2014, at 12:51 PM PDT
