Artificial Intelligence and the Brilliant Idiot

 

My phone buzzed and my watch thumped my wrist. I was in a meeting, so I glanced down surreptitiously. My wife was calling, and I declined the call, knowing that if it was urgent she would either leave a message or send me a text. The text came through a few minutes later, asking if I wanted to join her for lunch. I waited until the meeting had ended, and until I had taken care of other business that had piled up, before finally messaging her back about when I would be free. We had our lunch date, but as we were leaving I pulled out my phone to check my work emails, and there on the lock screen was a “Siri Suggestion” that I return my wife’s call from an hour and a half before. Siri is a brilliant idiot. Brilliant enough to guess that I should probably call my wife back, and to put that suggestion right on the lock screen, but idiotic enough not to know that the suggestion was unwelcome and unnecessary.

Over the last couple of iterations of Apple’s iOS (the operating system used in its mobile devices), Apple has layered assorted habit-gathering machine-learning routines into Siri, its smooth-voiced “Digital Personal Assistant.” The latest iteration, iOS 12, has extended these habit-watching routines to the point where, by default, they constantly monitor what you do and where you do it, then attempt to build macros of commands to automate and guide those habits. The suggestion that I return my wife’s call was based on the phone having observed that I usually do return her calls, but had not yet done so in this case.

This was just a small foretaste of what Siri could do, if I let it, but Apple explains its concept rather more thoroughly:

Siri can now intelligently pair your daily routines with third-party apps to suggest convenient shortcuts right when you need them. So if you typically pick up a coffee on the way to work, Siri will learn your routine and suggest when to place your order from the Lock screen. You can also run shortcuts with your voice or create your own with the Shortcuts app.

And in addition:

A quicker way to do the things you do most often. As Siri learns your routines, you’ll get suggested shortcuts for just what you need, at just the right time, on the Lock screen or in search.

If one digs deep enough into the examples of commonly used macros (“Workflows” is the jargon term currently in vogue), one can find people who have their phones run an entire morning routine, from a wakeup alarm to a tooth-brushing timer, a workout prompter with curated workout playlist, a morning commute planner, and a news podcast playlist for the drive. Where Siri comes in with its macros is in chaining and coordinating all of these different activities through their different phone applications – the user simply needs to allow Siri to monitor enough typical mornings, and after enough iterations the phone will learn the pattern and do the rest on its own, with just the occasional prompt for confirmation from the user.

Put another way, you can now have your phone attempt to build a model of you – your habits, preferences, and life – then feed that model back to you as a daily organizer, concierge, or (to my thinking) drill sergeant. The usefulness of Siri’s Suggestions depends entirely on the accuracy of the model it makes of you. And it is important to bear in mind that it is ultimately only ever a model. The model is limited by the granularity of what it can measure, and indeed what it is allowed to measure. In the few days I had been running iOS 12, Siri had already created a model of me that said I usually return my wife’s calls within an hour, and used that model to prompt me to return her call. Siri simply lacked the information that my wife and I had rendezvoused, rendering the call-back unnecessary, and it lacked that information because neither my wife nor I have enabled the Siris on our phones to see our locations. Siri’s model was thus inaccurate because I had deliberately withheld a more precise tool by which to model me.
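To make the failure concrete, here is a toy sketch of how such a habit model might misfire. This is entirely illustrative – nothing like Apple’s actual code, and every function and parameter name in it is invented:

```python
# Illustrative only: a toy "habit model" that suggests returning a call
# when your history says you usually do, but that has no visibility into
# context it is not permitted to measure (e.g., that we met for lunch).

from datetime import datetime, timedelta

def usually_returns_calls(history, within=timedelta(hours=1)):
    """history: list of (missed_at, returned_at-or-None) pairs."""
    returned = [r for m, r in history if r is not None and r - m <= within]
    return len(returned) / max(len(history), 1) > 0.5

def suggest_callback(history, missed_at, now, met_in_person=None):
    # The model only sees what it is allowed to measure. With location
    # sharing off, `met_in_person` is unknowable and defaults to None.
    if met_in_person:  # information Siri never had
        return None
    if usually_returns_calls(history) and now - missed_at > timedelta(hours=1):
        return "Return call to Wife?"
    return None
```

With location sharing withheld, `met_in_person` can never be anything but unknown, so the suggestion fires anyway – the model is exactly as good as the inputs it is permitted to see.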

AI in Iteration

Some months ago, The Atlantic ran an article by Henry Kissinger wherein he explored his own discoveries and concerns with machine learning and artificial intelligence.  He describes his first real awakening here (emphasis mine):

The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.

The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.

As with Siri, the program AlphaGo learned by iteration – in this case game play that it could actively compute on its own, instead of passively monitoring human behavior – to create a model of how Go could be played under any circumstance. AlphaGo’s technique, however, like all predictions built from iteration, is ultimately a sophisticated form of brute force – not an exhaustive search (Go’s state space is far too large for that), but millions of self-played games. Go has a limited rule set, which both gives it hard boundaries and makes it possible for that rule set to be programmed in toto into an iterative learning system. Siri is ultimately bounded too, not by game rules, but by what you allow it to monitor and what you have installed on your phone. It’s not going to suddenly suggest you play Angry Birds on your bathroom break if you don’t actually have that game on your phone in the first place (though if your bathroom breaks are routine enough, I do wonder if it might try to learn those, and suggest more fiber when your schedule deviates).
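Whatever the sophistication of the real system (AlphaGo pairs neural networks with Monte Carlo tree search), the underlying point – that a closed rule set is what makes iterative self-solving possible at all – can be seen in miniature with a game small enough to actually solve. A sketch of my own, not DeepMind’s code:

```python
# A game with a closed rule set can be solved outright by iterating over
# every legal continuation. Tic-tac-toe is small enough to do it
# exhaustively; Go merely makes the same idea astronomically expensive.

from functools import lru_cache

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if `player` (to move) can force a win, 0 a draw, -1 a loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    # Best outcome for `player` is the worst outcome for `other`
    # after each of `player`'s legal moves.
    return max(-value(board[:i] + player + board[i+1:], other)
               for i, cell in enumerate(board) if cell == ".")
```

Run over the empty board, `value("." * 9, "X")` proves that perfect play ends in a draw. Go differs only in scale, which is why AlphaGo must approximate the search by self-play rather than completing it.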

Returning to Kissinger’s article, he notes how other habit-learning algorithms already function:

In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Of course these sorts of things are all just forms of brute data management: building models of us, “helping” by automating things, or solving specific problems. For Google, it’s all to automatically gather data on us so as to sell advertisements, or to sell its politics and worldview. For Siri, it’s all to be useful enough that you keep coming back to Apple. For AlphaGo, it’s just to win a game. These are all narrow challenges, but as shown above they have limits and boundaries, both in the data they can create or gather and, most importantly, in what they are expected to do. Siri may try to predict what you might do next and attempt to assist in that, but it’s not really suggesting you do anything truly new — it may suggest something new to you personally, but if so, that is only because it is drawing on models of other Siri users and guessing that you might be similar enough in your routines.

AI Complications

What of more complicated or open-ended tasks? In the examples above, machine learning is still largely an iterative process of observation and model building to achieve goals that some human being picked – winning a game, timing your toothbrushing, or running your life for you. These all have explicit or implicit boundaries. What happens when we start to turn artificial intelligence loose, or at least looser than we have to date? What if we ourselves do not consider all the boundaries we must set in a system?

Human beings at times have a remarkable ability to find and exploit loopholes in meeting our own goals, or to completely miss what should be obvious and necessary information. Maybe it’s cutting through somebody’s back yard on your way home. Well, that’s fine if you are on foot, a bit rude if you are on your bike, and criminal property destruction if you are in your car. In navigation, the context of the mode of transportation matters when it comes to how much we choose to bend the rules or deviate from a normative route, and this is not something you have to explain to most people. But you do have to explain it to some people, just as I once had to write and illustrate a four-page work instruction on how to fold and tape up a cardboard box.

There is a funny story a friend told me from a US auto factory. He (an engineer) was tasked with finding out why every car coming off the line had a broken electrical harness. It turned out that one lady had one single job – install the washer fluid bottle. The main trunk harness happened to be partly in her way, and rather than just shove it aside long enough to snap the bottle in, she broke the harness right out. When my friend demanded an explanation (for surely breaking the harness was far more work than doing it the right way), she snapped, “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

When one pictures a brilliant idiot of an AI system assembling cars – one where a computer is instructed to iterate out the best way to build a car (as opposed to the direct teaching done today) – it is highly likely we will see many similar breakdowns in logic and common sense, all because the system was not properly bounded. The difference will be that when an automated system runs amok, it can do so thousands or even millions of times before it is halted, and like my poor friend, the one who catches the error needs the authority to stop the system. Such a system run haywire would be a brilliant idiot.
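The harness story can be reduced to a toy planner optimizing an under-specified objective. This is purely illustrative – the actions, costs, and names are all invented – but it shows how “nobody told it not to” plays out in code:

```python
# A toy planner with an under-specified objective: pick the cheapest
# plan that no constraint rejects. With no constraints at all, the
# harness-breaking plan wins, because nobody "told it not to."

def cheapest_plan(actions, constraints=()):
    """Return the lowest-cost action that every constraint permits."""
    allowed = [a for a in actions if all(ok(a) for ok in constraints)]
    return min(allowed, key=lambda a: a["cost"])

actions = [
    {"name": "shove harness aside, snap bottle in", "cost": 9, "breaks": []},
    {"name": "break harness, snap bottle in", "cost": 5, "breaks": ["harness"]},
]

# Unbounded: the planner happily breaks the harness.
print(cheapest_plan(actions)["name"])

# Bounded: one explicit rule changes the answer.
no_damage = lambda a: not a["breaks"]
print(cheapest_plan(actions, [no_damage])["name"])
```

The fix is not a smarter optimizer but an added boundary – which is precisely the thing the programmers have to think of in advance.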

As computers grow in complexity and capability, iterative AI (simulating out a million possibilities to find the best path) will do amazing things… and amazingly horrible things. Like the idiot’s “nobody told me NOT to do it this way,” AI will, time and again, arrive at horrible ends because the programmers, smart people all, simply could not conceive of the brilliant idiot even as they built the monster. I see this all the time with my suppliers and customers. As I said to one recently, “You are why I had to write the rule book!” No matter how narrowly you think you’ve defined the scope of a job, someone will find the loophole or nuance you did not conceive. But AI, poorly bounded, could find it very quickly, and exploit the gap a million times over. And I rather doubt we’re smart enough to program in the sort of generalized boundaries we implicitly rely on every day.

As Kissinger puts it:

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

We may be in danger of building an army of brilliant idiots and unleashing them the world over, all because we do not even know how to define their boundaries.

Or Maybe Not

Science fiction is replete with all manner of doomsday scenarios for AI gone haywire (my favorite is Gray Goo).  And we should be exceedingly careful with how and where we apply any technology.  Many were convinced that we would not survive the first decades of the atomic bomb, and in the wake of WWII that was not an unreasonable fear, but somehow we managed to do so.  Of course, as the investor commercials often warn, past results are no guarantee of future experiences.  Just because humanity has muddled through numerous technological advances so far is no guarantee that we’ll not wipe ourselves out this time.  But that’s no guarantee we will either.

But we do have to closely examine what we’re doing with what we have created. Automation has its place in manufacturing – after all, when you are making 10,000 of something, you do want them to be identical, and identically good. But in our daily lives? Personally, I’m not keen to have my phone trying to guess my daily routines and then “assist” me in completing them. I’m not a checklist kind of person, and I don’t want my phone making one for me either. I’ve turned all of those assistants, workflows, and macros off — I don’t want them, and I won’t use them. But maybe they’re of use to you — maybe you like having your morning routine sorted out at the push of a button, with your coffee already ordered and waiting for you as you drive to your AI-curated commute playlist. Me? Many mornings I keep the radio off, and I’m making my own coffee at home anyway. I’m not an automaton, and I do not want my life automated.

The ongoing attempts to automate my life, whether through task macros or music-learning services like Pandora, have so far produced nothing but brilliant idiots, time and again – in trying to model and measure me, they miss the nuances and contours, and more importantly they miss that I really do not want to be ordered about by a machine-learning routine in the first place. Right now these routines are overt and crude, and I can turn them off. What happens when some developer decides I should not be allowed to turn them off anymore? When I need to be nudged and nagged into some routine as if I were another machine? The boundaries of Siri or Alexa are simple — we can choose not to use them. Fun as it is to worry about Gray Goo, I worry more about when machine learning or AI no longer has any boundaries because we, as human beings, have not figured out how to define those boundaries. If I have to write a work instruction for folding a cardboard box, and it runs to four pages, how many more gaps and errors would we see in an aggressive AI where we try to code in something like ethics?

I am not now proposing anything concrete. Not yet. Save only this: remember that as advanced as your machine learning, AI, digital assistants, and driverless cars will get, they’ll still be brilliant and sophisticated idiots, and you will still be a human being with moral agency — an agency you cannot evade by letting your technology automate your life.

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 46 comments.

  1. The Reticulator Member
    The Reticulator
    @TheReticulator

    Hammer, The (Ryan M) (View Comment):

    She (View Comment):

    Just, brilliant, and full of good info, @skipsul, thanks.

    I took a broom to iOS 12 on my phone the day after it installed and started telling me how many hours I was looking at the screen every day:

    “New tools to empower you to understand and make choices about how much time you spend using apps and websites”

Still trying to figure out why I need a tool to help me (that’s what they mean when they say “empower”) make choices about what I do with my phone. For Pete’s sake.

    I’m not liking iOS 12 at all. Not even the fact that, in a few short weeks, I’ll be able to “FaceTime with 32 people at once!!!” is warming me up to it.

    When the day comes that I can push a button, send Siri and my iPhone out the door to do my day’s work in a reliable fashion, while I roll over and go back to sleep for a few more hours, then I might start to get excited. But not before then.

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    Oh, you are more patient than me. I’d have simply brought my drill from home.

    At the university where I worked, we called it the Midnight Cable Company. We worked while the union slept. Most of the first Ethernet installations were done through that organization. 

    • #31
  2. Hank Rhody, Red Hunter Contributor
    Hank Rhody, Red Hunter
    @HankRhody

Fallout 3 was a pretty great game. One of its highlights was Galaxy News Radio, a radio station playing big band music, which really helped along the 1950’s-two-hundred-years-after-a-nuclear-war theme. When @MattBalzer started a Fallout: Wisconsin role-playing game I started training a Pandora station to be Galaxy News Radio.

For the most part it works extremely well. You get a lot of The Ink Spots, Roy Brown, and Frank Sinatra. Every so often though, never mind how many times I give it the thumbs down, I get a doo-wop song in there. I believe it’s reasoning, “All these things are antecedents of modern rock and roll; he must be interested in other things that lead to that genre. Let’s see if he likes doo-wop.” Which is not at all what I’m going for, but how could it tell otherwise?

Then Fallout 4 came out. Its soundtrack has half a dozen doo-wop songs on there. Well, whaddaya know.

    • #32
  3. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    SkipSul:

    The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.

    Like with Siri, the program AlphaGo learned by iteration – in this case game play that it could actively compute on its own, instead of passively monitoring human behavior – to create a model set for how Go could be played under any circumstance. AlphaGo’s technique is, however, like all predictions based on iterations, really just a brute force attack.

Much toddler problem-solving also appears to be a brute-force attack.

What toddlers are looking for isn’t as clear as the rules of Go, obviously. Toddlers like approval, for example, but will settle on attention (including negative attention) if they can get it. The rules of toddler satisfaction don’t seem cut-and-dried. But to get satisfaction… yeah, they do seem willing to brute-force stuff until they eventually arrive at their heuristics.

    Whether or not machines ever approach human intelligence, the brute-force approach doesn’t strike me as particularly inhuman. Perhaps more like very “young”.

    • #33
  4. SkipSul Inactive
    SkipSul
    @skipsul

    Midget Faded Rattlesnake (View Comment):

    Whether or not machines ever approach human intelligence, the brute-force approach doesn’t strike me as particularly inhuman. Perhaps more like very “young”.

    One of the things that stands out for me is that brute force for a lot of tasks seems to be of the infinite monkeys with infinite typewriters sort.  It approaches a facsimile of intelligence, but only by applying an immense amount of power to do so.  This is rather different than the toddler brute force, or perhaps it is like a very large number of parallel toddlers.

    What intrigues me is the possibility that the facsimiles will eventually be good enough that we’ll have a hard time distinguishing them from actual local intelligence.

    • #34
  5. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    SkipSul (View Comment):
    One of the things that stands out for me is that brute force for a lot of tasks seems to be of the infinite monkeys with infinite typewriters sort. It approaches a facsimile of intelligence, but only by applying an immense amount of power to do so. This is rather different than the toddler brute force, or perhaps it is like a very large number of parallel toddlers.

One stage in child development, I think reached between two and three, is supposed to be “can solve some problems by some means quicker than trying every option until one works.” Which must mean children who haven’t reached that stage are typically expected to problem-solve by trying one option after another until one works — brute force.

    I see this a little with my own Zeke, who’s two. When he’s given a puzzle he’s played with before, he appears to use reason to fit shapes together. A new puzzle, though? It looks a lot more like he’s just randomly shoving stuff around till something fits. To adult eyes, this seems a little strange, but I guess it would make sense if a lot of the spatial reasoning that seems just part of seeing for us was established by brute-forcing objects when we were tots until we stumbled on working routines.

    I can imagine AI being like a very large number of parallel toddlers, though. <Shiver.>

    • #35
  6. Hammer, The (Ryan M) Inactive
    Hammer, The (Ryan M)
    @RyanM

    Dorrk (View Comment):

    This article that I felt strangely compelled to read earlier today looks at one very specific instance of the perils of “smart” technology-aided living:

    My Period Tracker Tells My Boyfriend Everything About My Flow

    And it’s completely sabotaging our relationship in the process

    If he was a good husband he’d know everything about your flow without the damned app… guess that’s why he’s just a boyfriend.

    • #36
  7. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    Hammer, The (Ryan M) (View Comment):
    If he was a good husband he’d know everything about your flow without the damned app…

    Please dear God let’s not live in a world where this sort of knowledge is necessary for being a good husband ;-P

     

    • #37
  8. Boss Mongo Member
    Boss Mongo
    @BossMongo

    Outstanding post.  Thanks, @skipsul.

    Ironically, I’ve been off Rico while devouring Zone War, by John Conroe. 

Imagine 35,000 killer drones released in Manhattan. Population decimated, humans manage to hold the line and isolate the drones on Manhattan. Ten years later, intrepid Zone Hunters make salvage incursions to recover wealth/tech/precious cargo. It was a blast – and topical to this post. It’s got AI, drones, and great fight scenes.

    • #38
  9. kedavis Coolidge
    kedavis
    @kedavis

    Percival (View Comment):

    There was once an expert system that was designed to differentiate between digitized photographs and determine which ones were of interest. The system was fed a set of pictures, some of which contained tanks, and some of which didn’t. The system was informed which of the images contained that which was being sought and which ones did not. After this initial training period, a larger set of images were sent through.

    The system’s accuracy was phenomenal. It was able to get the correct images with far less training than had been predicted. There were smiles all around.

Then a larger set of pictures was scanned and … the accuracy of the system plummeted. Where once even the hint of a vehicle largely obscured by vegetation proved no challenge, the system now couldn’t see a tank sitting in the center of the frame in the middle of an open field — repeatedly.

    The problem was eventually uncovered by some non-artificial intelligence. The training photographs had been taken in the same location on two separate days. On the first day, field maneuvers had been underway. This provided the armored vehicles that were of interest. It had been overcast that day. On the next day after the tanks had cleared out, the sun was shining. The system had picked up on the different amounts of shadows and thought that that was what it was supposed to be looking for.

    Thinking is harder than it looks. Just ask Sheldon Whitehouse.

    Hilarious.  But I don’t know that Mr Whitehouse would even realize it.

    • #39
  10. kedavis Coolidge
    kedavis
    @kedavis

    Judge Mental (View Comment):

    Percival (View Comment):

    The problem was eventually uncovered by some non-artificial intelligence. The training photographs had been taken in the same location on two separate days. On the first day, field maneuvers had been underway. This provided the armored vehicles that were of interest. It had been overcast that day. On the next day after the tanks had cleared out, the sun was shining. The system had picked up on the different amounts of shadows and thought that that was what it was supposed to be looking for.

    Thinking is harder than it looks. Just ask Sheldon Whitehouse.

    I, for one, am in favor of attacking sunshine.

    Good idea, since it’s actually the main cause of “climate change.”

    • #40
  11. kedavis Coolidge
    kedavis
    @kedavis

    Judge Mental (View Comment):

    She (View Comment):

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    “I didn’t say cords, I said cards! Business cards! We’re going to put a little box underneath to catch them. So, start drilling.”

    Business cards are probably considered Public Relations so someone from PR has to drill THOSE holes.

    • #41
  12. kedavis Coolidge
    kedavis
    @kedavis

    Hammer, The (Ryan M) (View Comment):

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    She didn’t suggest you take a stress pill?

    Gold.

    • #42
  13. kedavis Coolidge
    kedavis
    @kedavis

    James Lileks (View Comment):

    Great post. I have all the most recent spiffy hot smarty-pants OS / iOS updates, but haven’t noticed Siri barging into my life at all. I’m just not busy enough to trigger any of this stuff.

    Atlantic mag had a piece – can’t find the link at the moment – about AI as a friend for the lonely and troubled, and What It Means when we confess to our digital assistants, how we’ll inevitably come to see them as characters in our own lives. Emotional versions of sex robots for the lonely, I guess. It’ll be fascinating to see where this goes. I don’t think we’ll ever be able to shake off the knowledge that we can turn these things off, which reminds us of their artificiality. It’s at the heart of all the interactions. Alexa, stop.

As it stands now, I love my watch: it gives me a custom tap on the wrist to say “this message is from your daughter, in Brazil,” and I raise my arm and see a small picture of her walking a dog in a distant city. No matter how good AI gets, Siri will never know why that matters. Only that it has a certain value in a hierarchy of values.

    That’s only until Google/Apple/etc decide to remove the “stop” command.  Or perhaps are ordered to remove it, by President Headroom.

    • #43
  14. kedavis Coolidge
    kedavis
    @kedavis

    Hang On (View Comment):

    And there’s always this problem

     

    Yes, I’m surprised that Google etc haven’t been put out of business due to lawsuits under the Americans With Disabilities Act, etc.  Or just hounded out of business because of their mistreatment of the “differently abled”…

    • #44
  15. kedavis Coolidge
    kedavis
    @kedavis

    This is nothing new to me.  I’ve known for almost 50 years, the aphorism “Computers are machines that let people make mistakes at the speed of light.”

    And I also made up my own:  “One of the main reasons computers are useful is because they’re NOT ‘intelligent.'”

    What happens when your computer decides it needs a vacation, or your self-driving car decides it needs a weekend at the coast WITHOUT YOU?

    • #45
  16. Percival Thatcher
    Percival
    @Percival

    kedavis (View Comment):

    Percival (View Comment):

    Thinking is harder than it looks. Just ask Sheldon Whitehouse.

    Hilarious. But I don’t know that Mr Whitehouse would even realize it.

    He might … somewhere between the 300th and 400th time a constituent walks up to him and says “Hey, Sheldon! Pull my finger.”

    • #46