More Fuel for the Self-Driving Car Fire
Just came across this article this morning. I’ll highlight one paragraph and add emphasis:
The linked report suggests that the artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days driving in Florida will tell you that.) That may be true in some ways, but more than raw “intelligence,” the AI systems do not have human intuition. They aren’t as intuitive as humans in terms of trying to guess what the rest of the unpredictable humans will do at any given moment. In some of those cases, it’s not a question of the car not realizing it needs to do something, but rather making a correct guess about what specific action is required.
I’ve made this argument before, that humans are better at winging it than AI — so far.
Admiral Rickover was pretty much against using computers to run the engine room, with a couple of exceptions: any task deemed too monotonous, and any task a computer could perform far more quickly. Even so, these weren't really computers in the AI sense, but rather electronic sensors with programming to handle the task at hand. I'm sure submarine engine rooms have more computerization nowadays, but I'll bet the crew can easily take over if the machines fail . . .
Published in Technology
When you go to make one, the first thing you realize is that an intellect requires a world. I used a 10×10 maze, 4 choices or walls at each junction, that I modeled after a dime store toy. It seemed safe enough. Probably should have bought it at the Christian book store, but I didn’t think.
In retrospect, it might have learned to run the maze if I'd incorporated something I learned about neurons later. Some classes of neurons have synapses that don't fire them but do prime the neuron so it's more easily fired. Other synapses, the "normal" ones we usually think about, need to undergo potentiation before they'll cause the cell to fire. That potentiation, aka Hebbian learning, tends to require that both the presynaptic and postsynaptic cells fire within a short window. Chicken and egg, right? But if this other class of predictive synapse can cause unpotentiated synapses to fire the cell, that could be how predictions learned at the top move down the hierarchy. I will play with this over the winter, maybe.
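The bootstrap described above can be sketched in toy form. Everything here is made up for illustration (the weight values, threshold, and priming strength are arbitrary): an active priming input lowers the cell's effective threshold so even unpotentiated synapses can fire it, and each co-firing then strengthens those synapses Hebbian-style until they can fire the cell on their own.

```python
import numpy as np

ETA = 0.25        # Hebbian learning rate (arbitrary)
THRESHOLD = 1.0   # baseline firing threshold
PRIME_DROP = 0.9  # how far an active priming synapse lowers the threshold
BASELINE = 0.05   # small efficacy even for unpotentiated synapses

w = np.zeros(4)   # four driving synapses, all starting unpotentiated

def step(pre, primed):
    """One update. `pre` is a 0/1 vector of driving inputs;
    `primed` says whether the predictive (priming) synapse is active."""
    # priming lowers the effective threshold, letting weak,
    # unpotentiated synapses fire the cell anyway
    eff = THRESHOLD - (PRIME_DROP if primed else 0.0)
    fired = (w + BASELINE) @ pre >= eff
    if fired:
        # Hebb's rule: strengthen exactly the synapses whose
        # presynaptic cell fired in the same window as the cell itself
        w[:] += ETA * pre
    return bool(fired)
```

With all four inputs active, the cell can't fire unassisted at first; after a single primed co-firing it has potentiated enough to fire on its own, which is the "predictions moving down the hierarchy" story in miniature.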
If it has been programmed, whether by humans or an advanced AI, it will still be running on silicon. There will still be a machine instruction set and a clock. It will be interesting to see if that SI has a sense of the passage of time and whether it will do the equivalent of daydreaming when it is not directly involved in some form of computation. Will it have sequential real-time memory? I have no idea about any of that, but I find it fascinating to ponder. That last thought was inspired by the Dixie Flatline, a ROM-construct in the sci-fi novel Neuromancer.
It’s funny. I compare artificial/synthetic intelligence to a living mind, and you analogize synthetic intelligence by comparing an artificial tree to a living tree. Maybe the livingness is what you are looking for.
Time is a real problem for doing this in a computer, probably a real problem for doing it in any Turing architecture. Even abstracting everything far from biology, there’s a flavor of simulation about it.
What takes a brain five or ten steps takes a digital computer thousands of instructions. Calling it “massively parallel” is a zeroth order approximation. Just think how fast an SI could think if it could do those five steps at silicon speed.
I maintain that one of the biggest reasons for computers being USEFUL is that they AREN’T “intelligent.”
Would you want an "intelligent" hammer that moves around because it doesn't want to hit the nail?
They make cars like that for driving at 80 mph. I don’t like it but apparently others do.
I almost wish I hadn't thought of this, because part of my brain will be working on it for a while. It's a real computer-geek view, but I remember doing some detailed debugging back in the dimly remembered 1980s: I generated a listing of the source code with the compiled machine code alongside, so I could see exactly what the machine was doing. I noticed that the instructions were very different depending on whether or not I had used the compiler switch to optimize the object code.
That poor SI! We forgot to turn on the optimize switch.
Yeah. I have tics like that, like if I count to a hundred I will probably mess it up at 88. It has to do with the way I write the letter E.
Imagine how messed up the first few SI’s will be until we get it right.
Not quite equivalent. More equivalent would be a car that tosses you out and takes off on its own because you aren’t worthy of transportation.
Bigoted Intelligence? Cyber Psychopathy? :)
Memory lies to us, and we can never be certain how much, but I did verify that there was in fact a TV show in the early 1960s called Hootenanny. I was in the eighth grade then and I remember Woody Allen doing a skit about being attacked by an automated elevator because he had hit his toaster in a moment of frustration. The elevator stopped between floors and asked, “Are you the guy who hit the toaster?” After the ordeal was over and the elevator threw him out in the apartment building basement, it made an anti-Semitic remark.
Or just laziness. Is there any reason to believe a Synthetic Intelligence couldn’t be lazy?
Would capacitors sell like amphetamines?
Another conclusion I found was that it has to run continuously, not just process inputs. I think you’re right that an intelligence has to be live.
In early versions of VMS on the DEC VAX, there was a "null" process that ran when no other process was consuming CPU time. Which brings up another computer-geek question: would the SI run under the control of an OS, or would it run on a dedicated processor, the sort of thing that requires the compiler to generate its own start-up code? I would assume the latter.
If there’s hardware to support the “neuron” elements and their interconnections, with adapters to analog devices like lights, servos, and sensors, then I don’t think it would need a digital computer at all. If the SI is to run in a digital computer, then the program that runs it would be more like a simulation than an implementation. I mean the program has to run thru all the computations at (something similar to) a Nyquist rate.
I was thinking that too, but didn’t really know how to phrase it. Does this allow a machine to think, or cogitate, without dwelling on a task?
I won't go there (think, cogitate, etc.), and I don't really know how to delineate a task. As far as I can tell, it does what it does all the time. (I don't know about sleep.) Sometimes there are inputs, sometimes outputs; "behavior" is the intellect trying to match patterns by moving the environment around.
Great book! It has two iconic (to me) opening and closing sentences.
Opening:
“The sky above the port was the color of television, tuned to a dead channel.”
Closing:
“He never saw Molly again.”
One more comment and I hit 200! Oh . . .
Okay, this has been fun.
I remain unpersuaded by the AI/SI distinction, since it’s boiled down thus far to little more than an assertion that AI isn’t real intelligence, SI will be real intelligence, SI is a kind of AI but… different, probably profoundly different, etc.
I agree (assuming I read the comment correctly) that a “world,” an accessible context, will probably be required for anything we’re likely to recognize as a “real” intelligence. It’s hard to imagine anything recognizable or relatable that lacks a world with which to interact. Of course, modern AI has a “world”: Tesla’s self-driving cars are festooned with digital eyeballs, taking in the same world we occupy.
I disagree (again, assuming I read it right) that there’s a fundamental need for continuity — for a continuous “on” state — or for a particular level of speed: an artificial and/or synthetic (whatever it is) intelligence can be clocked down as slow as we like, and needn’t be aware of the passage of time. A self-aware machine might think far slower than humans, or far faster. (That would be an amusing short story idea: we create a machine of dizzying intellect, able to solve the most difficult questions, endowed with a haughty and malignant personality, and everyone is understandably terrified — until they discover that it thinks so slowly that its sinister schemes, while no doubt brilliant, take centuries to formulate.) [Yes, there’s a reason I write software and not fiction.]
A few things we don’t yet know, or know deeply:
All I know for sure is that I never install Microsoft or Google software on my computer without immediately going into the configuration settings and unchecking the “Become Self-Aware” option. Honestly, they should never make that the installation default.
Sounds like something from Frederik Pohl that I remember from some of his "Gateway"/"Venus" stories. Seems like it was the "Prayer Fans," but maybe not. Anyway, there was something in them about intelligences that existed within some form of technology structure, usually if not always transferred from living beings. They were self-aware and such, but as I recall, time for them passed more slowly than in the "outside world."
There was also a segment in the Stargate SG-1 series where they had trapped some nanobot/nanotech enemies in or near the event horizon of a black hole. That seemed safe until they realized that, where the nano-beings were, they were evolving much faster than in regular space. So in what was a pretty short time for us, but may have been hundreds or even thousands of years for the nano-things, they evolved to the point where they were able to escape.
Except that it actually works the other way around.
This is what happens when they don’t employ Einstein as a script advisor. ;)
This will happen, if the Teamsters choose to let it happen. A self-driving truck can be defeated with a well-placed traffic cone; a human driver cannot. The AI problem is much harder when it has to contend with adversarial agents.
There was probably some “reason” for that, but I don’t remember what it was. I never seriously watched any of the Stargate shows – too much BS claptrap nonsense for me to overlook even in the original movie – but it was on occasionally in between other shows on Comet TV or something, and I just didn’t bother changing the channel.
I tried to do that, but the AI hid it from me . . .
Also, just because Microsoft shows you “options” doesn’t mean they actually do anything.
I repurchased Frank Herbert’s Destination: Void yesterday, e-version, but it didn’t have the illustrated diagrams. I know they were rough back then, virtually symbolic, but I was interested in how they would be seen today. Too bad.
I hope you enjoy it. It was, in my opinion, the second worst work of science fiction by a major author that I’ve ever read. (The worst was Samuel Delany’s Dhalgren.)
I’m not going to reread it, so I won’t be enjoying it, but I really liked the last sentence.