More Fuel for the Self-Driving Car Fire
Just came across this article this morning. I’ll highlight one paragraph and add emphasis:
The linked report suggests that artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days driving in Florida will tell you that.) That may be true in some ways, but beyond raw “intelligence,” AI systems lack human intuition. They aren’t as good as humans at guessing what the rest of the unpredictable humans will do at any given moment. In some of those cases, it’s not a question of the car failing to realize it needs to do something, but of failing to guess correctly which specific action is required.
I’ve made this argument before, that humans are better at winging it than AI — so far.
Admiral Rickover was pretty much against using computers to run the engine room, with a couple of exceptions: any task deemed too monotonous for a human, and any task a computer could perform faster. Even so, these weren’t really computers in the AI sense, but rather electronic sensors programmed to handle the task at hand. I’m sure submarine engine rooms have more computerization nowadays, but I’ll bet the crew can easily take over if the machines fail . . .
Published in Technology
It is worth noting that, despite the driver having a brake, the trains usually hit and kill/destroy whatever is in front of them anyway.
Honestly, the human in the cab is mostly there to demonstrate that someone ‘cares’. And to keep the therapy industry going of course.
I don’t know how we’d characterize the difference between AI and SI. Does the latter imply self-awareness?
I’m pretty confident that we’ll get to some incredible level of pattern recognition, and produce systems that fly through the old Turing Test without a hitch. (We’re probably there already, depending on who is administering the test.)
But self-awareness? I don’t think we know where that comes from, yet, and I’m not at all sure that we’ll know it when we see it in a machine — or know when it’s real, versus something the machine is trying to convince us is real.
At this point, neither great success nor great failure would surprise me.
Open the pod bay door, HAL.
The manual for my newish car is 736 pages long. What does “allow themselves to be held culpable” mean? Someone is always at fault. If you don’t believe me, ask any trial lawyer.
So are you saying “Adios, middle class”?
Let’s wait 30 years to start the experiment. That way I won’t have to live through the global chaos.
The point is, computers are nowhere near the ability that’s needed. If a Hefty bag blows in front of you on a highway where your vehicle is doing 55 mph, do you really want the AI system to slam on the brakes? With the added possibility that the car behind you isn’t expecting yours to stop.
The AI on these vehicles was so bad that people in Arizona suburban neighborhoods were running out and disabling them, because too many people were getting into accidents with the test vehicles.
Self-driving cars, smart meters, GPS, cellphones and refrigerators with wifi. It’s a spider web.
AI operates on probabilities: it estimates the chance of what will happen, then decides on the best course of action. While it can be super sophisticated, sometimes no action is actually needed, and acting anyway produces what looks like a random act.
We humans can, in a flash, judge a situation, hold potential maneuvers in mind without acting on them, and then pick the right course, even if that’s no action at all or a low-probability one. The computer has to jump through more hoops to conclude that the lower-probability action is the best course, or that no action is needed after all. For instance, say there’s an 80% chance that a child on the side of the road will run in front of my vehicle or step out where they shouldn’t. I don’t act on that probability by swerving to miss a kid who hasn’t yet stepped into the street. Rather, I slow down to give myself more reaction time. But that 80% figure is still hanging over the AI’s head. What do you do with it? How should the AI handle it? Even if it did slow down, what is it supposed to do with that probability if not act on it?
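To make that 80% example concrete, here’s a toy expected-cost sketch. Every number in it is invented purely for illustration; the point is just that a system built this way doesn’t act on the raw probability directly, it weighs the expected cost of each action, and slowing down can beat both swerving and doing nothing:

```python
def choose_action(p_child_steps_out):
    """Pick the action with the lowest expected cost.

    The cost numbers below are made up for illustration only:
    swerving risks a crash whether or not the child steps out,
    slowing down costs a little time but shrinks the harm if they do,
    and doing nothing is cheap only when the child stays put.
    """
    p = p_child_steps_out
    expected_cost = {
        "swerve_now": p * 5 + (1 - p) * 50,    # risky either way
        "slow_down":  p * 10 + (1 - p) * 1,    # small cost, big hedge
        "do_nothing": p * 100 + (1 - p) * 0,   # cheap only if p is tiny
    }
    return min(expected_cost, key=expected_cost.get)
```

With these invented costs, `choose_action(0.8)` returns `"slow_down"` rather than `"swerve_now"`, matching the human instinct described above, while `choose_action(0.0)` returns `"do_nothing"`. The hard part, of course, is that real systems have to invent those cost numbers somehow.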
Part of the problem is we don’t really understand how WE make the decisions we make. Attempts to study those actions and decisions by, say, airline pilots still result in a less intelligent AI. There’s still an ingredient missing.
I go back to the guy who put his ladder on a soggy pile of manure, tipped, and hurt himself. The ladder company settled or lost, and put another sticker on the ladder warning about soggy manure. After that, the company shouldn’t be at fault; it’s the operator’s.
Any jury can award anything, but warning stickers serve a legal purpose.
But we already have railroad transportation…
I’m sometimes reminded of this scene:
If you do succeed, remember to change your name to Miles Dyson so we know who to blame.
It was just a thought. Forget it if you want.
But if we already have railroads, why do we still have trucks? They must fulfill some need.
It’s the “last mile” stuff. Getting things from the train to the store where they get sold. Not long-haul stuff, and less suitable for AI.
We still have long-haul truckers. I don’t know the percentage, but they still exist. There is a reason they do. But as I implied, I have no dog in this fight.
Yes they do exist, but more could be shifted to rail if there were less regulation etc.
Give me another 30 years and Pete Buttigieg and his cronies can run the highways like this clip.
I think this would require roads designed to be driven on by autonomous vehicles. Modern cars and modern roads are designed for human drivers. Unless and until we have cars and roads designed for autonomous vehicles, it will be very difficult for them to operate on existing infrastructure.
AI is artificial. SI will be real intelligence. I think that wraps up the difference.
Exactly. I’m always testing the “lane keeping assist” function. It gets confused by bad paint jobs and those tar-patch lines. Sunlight and dirt can affect the sensors. This is a silly pursuit when the money could be used on technology that actually helps.
I will never have a car without the Subaru Eyesight system or the equivalent. It’s less fatiguing and less stress. I prefer having the warning sounds. The adaptive cruise control is wonderful. I’m rarely dissatisfied with how it functions.
As an assistant, I love it. I’m still the driver.
I don’t understand the distinction you’re making. Can you expand on that? What would be a capability or attribute of synthetic intelligence that artificial intelligence would not possess, or vice versa?
I hope not. I will need to find a new line of work. Lol.
Modern locomotives have a “dead man’s switch”: unless the driver is making constant inputs, the train will stop.
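The logic behind a dead man’s switch is simple enough to sketch. This is a minimal illustration of the idea only; the class name, timeout value, and methods here are invented, not taken from any real locomotive control system:

```python
class DeadMansSwitch:
    """Toy model: apply the brakes if the driver goes quiet too long."""

    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s   # illustrative value, not a real spec
        self.last_input = 0.0        # timestamp of last driver activity
        self.braking = False

    def driver_input(self, now):
        # Any control touch (throttle, horn, acknowledge button)
        # resets the countdown.
        self.last_input = now

    def tick(self, now):
        # Called periodically by the control loop; once the driver has
        # been silent longer than the timeout, brake and stay braked.
        if now - self.last_input > self.timeout_s:
            self.braking = True
        return self.braking
```

For example, if the driver acknowledges at t=0 and the loop ticks at t=10, nothing happens; at t=31 (past the 30-second timeout) the brakes latch on and stay on.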
Will the self-driving cars be named ‘Hal’?
I’ve read an article recommending automated trucks for the highway driving between cities, letting humans handle the trickier bits at the beginning and end. A bit like autopilot on airplanes, although I think some of the better systems can even handle takeoff and landing.
I used to be more bullish on self-driving cars, as I was sympathetic to the argument that they don’t have to be perfect but rather they just have to be better than humans, and humans are really bad at driving.
However, one factor that has made me less bullish is the fact that self-driving cars can be pretty susceptible to intentional attacks.
For example, if some prankster stands at the side of the road holding a stop sign, human drivers can recognize that it’s a prank and ignore the guy, but self-driving cars will stop.
Given the large number of highly-innovative pranksters in society, this problem becomes very difficult to solve.
I’d like to try adaptive cruise control. It sounds wonderful. Months ago it came up on a thread here and someone commented that he’d driven around Chicago, during rush hour, and never had to touch the pedals.