More Fuel for the Self-Driving Car Fire

 

Just came across this article this morning. I’ll highlight one paragraph and add emphasis:

The linked report suggests that the artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days driving in Florida will tell you that.) That may be true in some ways, but more than raw “intelligence,” the AI systems do not have human intuition. They aren’t as intuitive as humans in terms of trying to guess what the rest of the unpredictable humans will do at any given moment. In some of those cases, it’s not a question of the car not realizing it needs to do something, but rather making a correct guess about what specific action is required.

I’ve made this argument before, that humans are better at winging it than AI — so far.

Admiral Rickover was pretty much against using computers to run the engine room, with two exceptions: any task deemed too monotonous for a human, and any task a computer could perform faster. Even so, these weren’t really computers in the AI sense, but rather electronic sensors with programming to handle the task at hand. I’m sure modern submarine engine rooms have more computerization nowadays, but I’ll bet the crew can easily take over if the machines fail . . .

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 210 comments.

Become a member to join the conversation. Or sign in if you're already a member.
  1. Barfly Member
    Barfly
    @Barfly

    kedavis (View Comment):

    For some people, if Cthulhu doesn’t already exist, they’re determined to invent/create it.

    Madness.

    When you go to make one, the first thing you realize is that an intellect requires a world. I used a 10×10 maze, 4 choices or walls at each junction, that I modeled after a dime store toy. It seemed safe enough. Probably should have bought it at the Christian book store, but I didn’t think.

    In retrospect, it might have learned to run the maze if I’d incorporated something I learned about neurons later. Some classes of neurons have synapses that don’t fire them, but do prime the neuron so it’s more easily fired. Other synapses, the “normal” ones we usually think about, need to undergo potentiation before they’ll cause the cell to fire. That potentiation, aka Hebbian learning, tends to require that both the efferent and afferent cell fire within a short window. Chicken and egg, right? But if this other class of predictive synapse can cause unpotentiated synapses to fire the cell, that could be how predictions learned at the top move down the hierarchy. I will play with this over the winter, maybe.
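    That two-synapse-class idea can be sketched as a toy model. Everything below is invented for illustration (the class name, weights, and thresholds); it's a sketch of the mechanism described, not a claim about real neurons:

    ```python
    # Toy model: "priming" synapses lower the effective firing threshold
    # without firing the cell, while "normal" synapses carry weights
    # strengthened by Hebbian coincidence (pre- and postsynaptic activity
    # in the same window). All numbers are invented.

    class ToyNeuron:
        def __init__(self, n_inputs, threshold=1.0, lr=0.1):
            self.weights = [0.2] * n_inputs   # plastic "normal" synapses
            self.threshold = threshold
            self.lr = lr                      # Hebbian learning rate

        def step(self, inputs, priming=0.0):
            """inputs: 0/1 spikes on normal synapses; priming in [0, 1]."""
            drive = sum(w * x for w, x in zip(self.weights, inputs))
            fired = drive >= self.threshold - priming  # priming eases firing
            if fired:
                # Potentiate only the synapses whose input coincided
                # with this postsynaptic spike.
                self.weights = [w + self.lr * x
                                for w, x in zip(self.weights, inputs)]
            return fired

    n = ToyNeuron(n_inputs=3)
    print(n.step([1, 1, 1]))               # unprimed: drive 0.6 < 1.0, no spike
    print(n.step([1, 1, 1], priming=0.5))  # primed: fires, weights potentiate
    ```

    After enough primed firings, the potentiated weights alone clear the bare threshold, which is one way a prediction learned upstream could come to drive the cell directly.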

    • #181
  2. Django Member
    Django
    @Django

    Barfly (View Comment):

    Henry Racette (View Comment):
    You say that an artificial intelligence is a weak synthetic intelligence.

    No, I didn’t. You said that. I think that statement is true, but misses the point I made in comment #30. (Edit: Actually, I did say it. But it was still beside the point.)

    Barfly (View Comment):
    I think Artificial is the right word for today’s machine intelligence. When we do make a real intelligence, and we will soon, it’ll best be called Synthetic Intelligence. It will be real, not artificial.

    [You’ll have to forgive that I allowed the sets to overlap. If you want to be persnickety, there’s much worse on pages 2-5.]

    The more I think about it, the more I like the Christmas tree analogy.

    Anyway, let’s agree that AI is and will remain a proper subset of SI.

    I will continue to colloquially use the term “real” to mean that which is not artificial. To the extent of my knowledge, {s : s ∈ SI and s ∉ AI} is either empty or populated only with incompletely functioning elements. That implies I have a threshold for intelligence level. I do, but it’s subjective; let’s not get into it, since it’s a distraction for now.

    To round it out, SI is a subset of RI. RI is the union of BI and SI. [Left as an exercise for the reader.]

    With that in hand, I say that when we make a real intelligence, it won’t be anything like what we have today and call artificial intelligence. Nothing at all like it.

    That is, it won’t be in AI. It’ll be in SI, and therefore in RI. Since SI is the most specific thing we can say, I call it synthetic intelligence. Maybe later we’ll call it Cthulhu or Bob.
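    Those relations are simple enough to sanity-check in code; the element names here are placeholders, of course:

    ```python
    # Placeholder elements; only the subset structure from the comment matters.
    AI = {"chess engine", "language model"}   # today's artificial intelligence
    SI = AI | {"future synthetic mind"}       # AI is a proper subset of SI
    BI = {"human", "octopus"}                 # biological intelligence
    RI = BI | SI                              # real intelligence: BI union SI

    assert AI < SI                            # proper subset
    assert SI < RI
    print(SI - AI)                            # {s : s in SI and s not in AI}
    ```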

    If it has been programmed, whether by humans or an advanced AI, it will still be running on silicon. There will still be a machine instruction set and a clock. It will be interesting to see if that SI has a sense of the passage of time and whether it will do the equivalent of daydreaming when it is not directly involved in some form of computation. Will it have sequential real-time memory? I have no idea about any of that, but I find it fascinating to ponder. That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.

    • #182
  3. Flicker Coolidge
    Flicker
    @Flicker

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    The “synthetic” part is easy: it only means that it didn’t grow in nature, but we made it.

    So an artificial intelligence is a synthetic intelligence.

    Yes. A weak one, that bears a similar relation to a real intelligence as an artificial Christmas tree does to a real one. But yes, an artificial intelligence (or anything) is synthetic.

    It’s funny.  I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree.  Maybe the livingness is what you are looking for.

    • #183
  4. Barfly Member
    Barfly
    @Barfly

    Django (View Comment):

    If it has been programmed, whether by humans or an advanced AI, it will still be running on silicon. There will still be a machine instruction set and a clock. It will be interesting to see if that SI has a sense of the passage of time and whether it will do the equivalent of daydreaming when it is not directly involved in some form of computation. Will it have sequential real-time memory? I have no idea about any of that, but I find it fascinating to ponder. That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.

    Time is a real problem for doing this in a computer, probably a real problem for doing it in any Turing architecture. Even abstracting everything far from biology, there’s a flavor of simulation about it.

    What takes a brain five or ten steps takes a digital computer thousands of instructions. Calling it “massively parallel” is a zeroth order approximation. Just think how fast an SI could think if it could do those five steps at silicon speed.

    • #184
  5. kedavis Coolidge
    kedavis
    @kedavis

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    • #185
  6. Flicker Coolidge
    Flicker
    @Flicker

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph.  I don’t like it but apparently others do.

    • #186
  7. Django Member
    Django
    @Django

    Barfly (View Comment):

    Django (View Comment):

    If it has been programmed, whether by humans or an advanced AI, it will still be running on silicon. There will still be a machine instruction set and a clock. It will be interesting to see if that SI has a sense of the passage of time and whether it will do the equivalent of daydreaming when it is not directly involved in some form of computation. Will it have sequential real-time memory? I have no idea about any of that, but I find it fascinating to ponder. That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.

    Time is a real problem for doing this in a computer, probably a real problem for doing it in any Turing architecture. Even abstracting everything far from biology, there’s a flavor of simulation about it.

    What takes a brain five or ten steps takes a digital computer thousands of instructions. Calling it “massively parallel” is a zeroth order approximation. Just think how fast an SI could think if it could do those five steps at silicon speed.

    I almost wish I hadn’t thought of this because part of my brain will be working on this for a while. It’s a real computer geek view, but I remember doing some detailed debugging back in the dimly remembered 1980s and I generated a listing of source code and the compiled machine code alongside so I could view exactly what the machine was doing. I noticed that the instructions were very different depending on whether or not I had used the compiler switch to optimize the object code.

    That poor SI! We forgot to turn on the optimize switch. 
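    For anyone who wants the modern, lightweight version of that side-by-side listing, Python’s standard dis module prints source line numbers alongside the bytecode the interpreter actually executes (the function here is just an example):

    ```python
    import dis

    def sum_to(n):
        """Sum 0..n-1 the long way, so there's a loop to inspect."""
        s = 0
        for i in range(n):
            s += i
        return s

    dis.dis(sum_to)     # bytecode listing, keyed to source line numbers
    print(sum_to(100))  # 4950
    ```

    The CPython interpreter doesn’t optimize much, so the listing tracks the source closely; an optimizing C compiler at -O2 can make the correspondence far less obvious, which is exactly the effect described above.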

    • #187
  8. Barfly Member
    Barfly
    @Barfly

    Django (View Comment):

    Barfly (View Comment):

    Django (View Comment):

    If it has been programmed, whether by humans or an advanced AI, it will still be running on silicon. There will still be a machine instruction set and a clock. It will be interesting to see if that SI has a sense of the passage of time and whether it will do the equivalent of daydreaming when it is not directly involved in some form of computation. Will it have sequential real-time memory? I have no idea about any of that, but I find it fascinating to ponder. That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.

    Time is a real problem for doing this in a computer, probably a real problem for doing it in any Turing architecture. Even abstracting everything far from biology, there’s a flavor of simulation about it.

    What takes a brain five or ten steps takes a digital computer thousands of instructions. Calling it “massively parallel” is a zeroth order approximation. Just think how fast an SI could think if it could do those five steps at silicon speed.

    I almost wish I hadn’t thought of this because part of my brain will be working on this for a while. It’s a real computer geek view, but I remember doing some detailed debugging back in the dimly remembered 1980s and I generated a listing of source code and the compiled machine code alongside so I could view exactly what the machine was doing. I noticed that the instructions were very different depending on whether or not I had used the compiler switch to optimize the object code.

    That poor SI! We forgot to turn on the optimize switch.

    Yeah. I have tics like that, like if I count to a hundred I will probably mess it up at 88. It has to do with the way I write the letter E.

    Imagine how messed up the first few SIs will be until we get it right.

    • #188
  9. kedavis Coolidge
    kedavis
    @kedavis

    Flicker (View Comment):

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph. I don’t like it but apparently others do.

    Not quite equivalent.  More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.

    • #189
  10. Flicker Coolidge
    Flicker
    @Flicker

    kedavis (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph. I don’t like it but apparently others do.

    Not quite equivalent. More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.

    Bigoted Intelligence?  Cyber Psychopathy?  :)

    • #190
  11. Django Member
    Django
    @Django

    Flicker (View Comment):

    kedavis (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph. I don’t like it but apparently others do.

    Not quite equivalent. More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.

    Bigoted Intelligence? Cyber Psychopathy? :)

    Memory lies to us, and we can never be certain how much, but I did verify that there was in fact a TV show in the early 1960s called Hootenanny. I was in the eighth grade then and I remember Woody Allen doing a skit about being attacked by an automated elevator because he had hit his toaster in a moment of frustration. The elevator stopped between floors and asked, “Are you the guy who hit the toaster?” After the ordeal was over and the elevator threw him out in the apartment building basement, it made an anti-Semitic remark. 

    • #191
  12. kedavis Coolidge
    kedavis
    @kedavis

    Flicker (View Comment):

    kedavis (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph. I don’t like it but apparently others do.

    Not quite equivalent. More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.

    Bigoted Intelligence? Cyber Psychopathy? :)

    Or just laziness.  Is there any reason to believe a Synthetic Intelligence couldn’t be lazy?

    • #192
  13. Flicker Coolidge
    Flicker
    @Flicker

    kedavis (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”

    Would you want an “intelligent” hammer that moves around because it doesn’t want to hit the nail?

    They make cars like that for driving at 80 mph. I don’t like it but apparently others do.

    Not quite equivalent. More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.

    Bigoted Intelligence? Cyber Psychopathy? :)

    Or just laziness. Is there any reason to believe a Synthetic Intelligence couldn’t be lazy?

    Would capacitors sell like amphetamines?

    • #193
  14. Barfly Member
    Barfly
    @Barfly

    Flicker (View Comment):

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    The “synthetic” part is easy: it only means that it didn’t grow in nature, but we made it.

    So an artificial intelligence is a synthetic intelligence.

    Yes. A weak one, that bears a similar relation to a real intelligence as an artificial Christmas tree does to a real one. But yes, an artificial intelligence (or anything) is synthetic.

    It’s funny. I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree. Maybe the livingness is what you are looking for.

    Another conclusion I found was that it has to run continuously, not just process inputs. I think you’re right that an intelligence has to be live.

    • #194
  15. Django Member
    Django
    @Django

    Barfly (View Comment):

    Flicker (View Comment):

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    The “synthetic” part is easy: it only means that it didn’t grow in nature, but we made it.

    So an artificial intelligence is a synthetic intelligence.

    Yes. A weak one, that bears a similar relation to a real intelligence as an artificial Christmas tree does to a real one. But yes, an artificial intelligence (or anything) is synthetic.

    It’s funny. I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree. Maybe the livingness is what you are looking for.

    Another conclusion I found was that it has to run continuously, not just process inputs. I think you’re right that an intelligence has to be live.

    In the early versions of DEC’s VAX/VMS operating system there was a “null” process that ran when no other process was consuming CPU time. Which brings up another computer geek question: would the SI run under the control of an OS, or would it run on a dedicated processor, the sort of thing that requires the compiler to generate start-up code? I would assume the latter.

    • #195
  16. Barfly Member
    Barfly
    @Barfly

    Django (View Comment):
    would the SI run under the control of an OS, or would it run on a dedicated processor, the sort of thing that requires the compiler to generate start-up code?

    If there’s hardware to support the “neuron” elements and their interconnections, with adapters to analog devices like lights, servos, and sensors, then I don’t think it would need a digital computer at all. If the SI is to run in a digital computer, then the program that runs it would be more like a simulation than an implementation. I mean the program has to run through all the computations at (something similar to) a Nyquist rate.
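    A minimal sketch of the “simulation, not implementation” point, with made-up constants: the digital program has to time-step every element at some fixed sampling rate, here a single leaky-integrator element driven by a constant input:

    ```python
    # One "neuron" element as a leaky integrator, v' = (-v + u) / tau,
    # advanced with explicit Euler steps. A real fabric would loop over
    # every element like this on every tick. Constants are made up.

    DT = 0.001    # sampling period, seconds (the Nyquist-style rate)
    TAU = 0.05    # element time constant, seconds

    def simulate(inputs, dt=DT, tau=TAU):
        v, trace = 0.0, []
        for u in inputs:              # one pass per sample period
            v += dt * (-v + u) / tau  # Euler update of the element state
            trace.append(v)
        return trace

    trace = simulate([1.0] * 1000)    # one simulated second of constant drive
    print(round(trace[-1], 3))        # settles toward the input value, 1.0
    ```

    Dedicated hardware just *is* the integrator; the digital version has to grind through this loop for every element, every tick, which is the time problem mentioned earlier in the thread.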

    • #196
  17. Flicker Coolidge
    Flicker
    @Flicker

    Barfly (View Comment):

    Flicker (View Comment):

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    The “synthetic” part is easy: it only means that it didn’t grow in nature, but we made it.

    So an artificial intelligence is a synthetic intelligence.

    Yes. A weak one, that bears a similar relation to a real intelligence as an artificial Christmas tree does to a real one. But yes, an artificial intelligence (or anything) is synthetic.

    It’s funny. I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree. Maybe the livingness is what you are looking for.

    Another conclusion I found was that it has to run continuously, not just process inputs. I think you’re right that an intelligence has to be live.

    I was thinking that too, but didn’t really know how to phrase it.  Does this allow a machine to think, or cogitate, without dwelling on a task?

    • #197
  18. Barfly Member
    Barfly
    @Barfly

    Flicker (View Comment):

    Barfly (View Comment):

    Flicker (View Comment):

    Barfly (View Comment):

    Henry Racette (View Comment):

    Barfly (View Comment):
    The “synthetic” part is easy: it only means that it didn’t grow in nature, but we made it.

    So an artificial intelligence is a synthetic intelligence.

    Yes. A weak one, that bears a similar relation to a real intelligence as an artificial Christmas tree does to a real one. But yes, an artificial intelligence (or anything) is synthetic.

    It’s funny. I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree. Maybe the livingness is what you are looking for.

    Another conclusion I found was that it has to run continuously, not just process inputs. I think you’re right that an intelligence has to be live.

    I was thinking that too, but didn’t really know how to phrase it. Does this allow a machine to think, or cogitate, without dwelling on a task?

    I won’t go there (think, cogitate, etc.). And I don’t really know how to delineate a task. As far as I can tell, it does what it does all the time. (I don’t know about sleep.) Sometimes there are inputs, sometimes outputs – “behavior” is the intellect trying to match patterns by moving the environment around.

    • #198
  19. Stad Coolidge
    Stad
    @Stad

    Django (View Comment):
    That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.

    Great book!  Its opening and closing sentences are iconic (to me).

    Opening:

    “The sky above the port was the color of television, tuned to a dead channel.”

    Closing:

    “He never saw Molly again.”

    • #199
  20. Stad Coolidge
    Stad
    @Stad

    One more comment and I hit 200!  Oh . . .

    • #200
  21. Henry Racette Member
    Henry Racette
    @HenryRacette

    Okay, this has been fun.

    I remain unpersuaded by the AI/SI distinction, since it’s boiled down thus far to little more than an assertion that AI isn’t real intelligence, SI will be real intelligence, SI is a kind of AI but… different, probably profoundly different, etc.

    I agree (assuming I read the comment correctly) that a “world,” an accessible context, will probably be required for anything we’re likely to recognize as a “real” intelligence. It’s hard to imagine anything recognizable or relatable that lacks a world with which to interact. Of course, modern AI has a “world”: Tesla’s self-driving cars are festooned with digital eyeballs, taking in the same world we occupy.

    I disagree (again, assuming I read it right) that there’s a fundamental need for continuity — for a continuous “on” state — or for a particular level of speed: an artificial and/or synthetic (whatever it is) intelligence can be clocked down as slow as we like, and needn’t be aware of the passage of time. A self-aware machine might think far slower than humans, or far faster. (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]

    A few things we don’t yet know, or know deeply:

    • How humans store and process information.
    • What human consciousness actually is.
    • Whether chemistry/biology is required for self-awareness.
    • Whether it’s possible to simulate consciousness; that is, whether or not a simulation can be self-aware.
    • The limits of electronic, optical, and quantum computing.
    • Whether we’ll be able to distinguish between a very advanced artificial intelligence and true self-awareness; or whether the two are in fact different.

    All I know for sure is that I never install Microsoft or Google software on my computer without immediately going into the configuration settings and unchecking the “Become Self-Aware” option. Honestly, they should never make that the installation default.

     

    • #201
  22. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):
    (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]

    Sounds like something from Frederik Pohl that I remember from some of his “Gateway”/”Venus” stories.  Seems like it was the “Prayer Fans” but maybe not.  Anyway there was something in them about intelligences that existed within some form of technology structure, I think usually if not always transferred from living beings, and they were self-aware and such, but as I recall time for them passed more slowly than in the “outside world.”

    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole.  Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space.  So in a pretty short time for us, but which may have been hundreds or even thousands of years for the nano-things, they had evolved to a point where they were able to escape.

    • #202
  23. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):
    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole.  Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space.

    Except that it actually works the other way around.

    This is what happens when they don’t employ Einstein as a script advisor. ;)

    • #203
  24. DonG (CAGW is a Scam) Coolidge
    DonG (CAGW is a Scam)
    @DonG

    Z in MT (View Comment):

    I work for a self-driving car company. It is happening, both Waymo and Cruise have commercial services now, and we’ll see commercial driverless 18 wheelers in the next two years in limited areas. It will take longer to expand than people expect. Starting in TX and AZ and expanding out from there.

    This will happen if the Teamsters choose to let it happen. A self-driving truck can be defeated with a well-placed traffic cone; not so a human driver. The AI problem is much harder when it has to handle adversarial agents.

    • #204
  25. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):
    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole. Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space.

    Except that it actually works the other way around.

    This is what happens when they don’t employ Einstein as a script advisor. ;)

    There was probably some “reason” for that, but I don’t remember what it was.  I never seriously watched any of the Stargate shows – too much BS claptrap nonsense for me to overlook even in the original movie – but it was on occasionally in between other shows on Comet TV or something, and I just didn’t bother changing the channel.

    • #205
  26. Stad Coolidge
    Stad
    @Stad

    Henry Racette (View Comment):
    All I know for sure is that I never install Microsoft or Google software on my computer without immediately going into the configuration settings and unchecking the “Become Self-Aware” option.

    I tried to do that, but the AI hid it from me . . .

    • #206
  27. kedavis Coolidge
    kedavis
    @kedavis

    Stad (View Comment):

    Henry Racette (View Comment):
    All I know for sure is that I never install Microsoft or Google software on my computer without immediately going into the configuration settings and unchecking the “Become Self-Aware” option.

    I tried to do that, but the AI hid it from me . . .

    Also, just because Microsoft shows you “options” doesn’t mean they actually do anything.

    • #207
  28. Flicker Coolidge
    Flicker
    @Flicker

    kedavis (View Comment):

    Henry Racette (View Comment):
    (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]

    Sounds like something from Frederik Pohl that I remember from some of his “Gateway”/”Venus” stories. Seems like it was the “Prayer Fans” but maybe not. Anyway there was something in them about intelligences that existed within some form of technology structure, I think usually if not always transferred from living beings, and they were self-aware and such, but as I recall time for them passed more slowly than in the “outside world.”

    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole. Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space. So in a pretty short time for us, but which may have been hundreds or even thousands of years for the nano-things, they had evolved to a point where they were able to escape.

    I repurchased Frank Herbert’s Destination: Void yesterday, e-version, but it didn’t have the illustrated diagrams.  I know they were rough back then, virtually symbolic, but I was interested in how they would be seen today.  Too bad.

    • #208
  29. Henry Racette Member
    Henry Racette
    @HenryRacette

    Flicker (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):
    (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]

    Sounds like something from Frederik Pohl that I remember from some of his “Gateway”/”Venus” stories. Seems like it was the “Prayer Fans” but maybe not. Anyway there was something in them about intelligences that existed within some form of technology structure, I think usually if not always transferred from living beings, and they were self-aware and such, but as I recall time for them passed more slowly than in the “outside world.”

    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole. Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space. So in a pretty short time for us, but which may have been hundreds or even thousands of years for the nano-things, they had evolved to a point where they were able to escape.

    I repurchased Frank Herbert’s Destination: Void yesterday, e-version, but it didn’t have the illustrated diagrams. I know they were rough back then, virtually symbolic, but I was interested in how they would be seen today. Too bad.

    I hope you enjoy it. It was, in my opinion, the second worst work of science fiction by a major author that I’ve ever read. (The worst was Samuel Delany’s Dhalgren.)

    • #209
  30. Flicker Coolidge
    Flicker
    @Flicker

    Henry Racette (View Comment):

    Flicker (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):
    (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]

    Sounds like something from Frederik Pohl that I remember from some of his “Gateway”/”Venus” stories. Seems like it was the “Prayer Fans” but maybe not. Anyway there was something in them about intelligences that existed within some form of technology structure, I think usually if not always transferred from living beings, and they were self-aware and such, but as I recall time for them passed more slowly than in the “outside world.”

    There was also a segment in the “Stargate: SG-1” series where they had trapped some nanobot/nanotech enemies in/near the Event Horizon of a Black Hole. Which seemed safe until they realized that, where they were, the nano-beings were evolving much faster than in regular space. So in a pretty short time for us, but which may have been hundreds or even thousands of years for the nano-things, they had evolved to a point where they were able to escape.

    I repurchased Frank Herbert’s Destination: Void yesterday, e-version, but it didn’t have the illustrated diagrams. I know they were rough back then, virtually symbolic, but I was interested in how they would be seen today. Too bad.

    I hope you enjoy it. It was, in my opinion, the second worst work of science fiction by a major author that I’ve ever read. (The worst was Samuel Delany’s Dhalgren.)

    I’m not going to reread it, so I won’t be enjoying it, but I really liked the last sentence.

    • #210