Artificial Intelligence and the Brilliant Idiot

 

My phone buzzed and my watch thumped my wrist. I was in a meeting, so I stole a surreptitious glance: my wife was calling. I declined the call, knowing that if it was urgent she would either leave a message or send me a text. The text came through a few minutes later, asking if I wanted to join her for lunch. I waited until the meeting had ended, and until I had taken care of other business that had piled up, before finally messaging her back about when I would be free. We had our lunch date, but as we were leaving I pulled out my phone to check my work email, and there on the lock screen was a “Siri Suggestion” that I return my wife’s call from an hour and a half before. Siri is a brilliant idiot.  Brilliant enough to guess that I should probably call my wife back, then put that as a suggestion right on the lock screen, but too idiotic to know that the suggestion was unwelcome and unnecessary.

Over the last couple of iterations of Apple’s iOS (the operating system used in its mobile devices), Apple has layered assorted habit-gathering machine-learning routines into Siri, its smooth-voiced “Digital Personal Assistant”. The latest iteration, iOS 12, has extended these habit-watching routines to the point where, by default, they constantly monitor what you do and where you do it, then attempt to build macros of commands to automate and guide those habits. The suggestion that I return my wife’s call was based on the phone having observed that I usually do return her calls, but had not yet done so in this case.

This was just a small foretaste of what Siri could do, if I let it, but Apple explains its concept rather more thoroughly:

Siri can now intelligently pair your daily routines with third-party apps to suggest convenient shortcuts right when you need them. So if you typically pick up a coffee on the way to work, Siri will learn your routine and suggest when to place your order from the Lock screen. You can also run shortcuts with your voice or create your own with the Shortcuts app.

And in addition:

A quicker way to do the things you do most often. As Siri learns your routines, you’ll get suggested shortcuts for just what you need, at just the right time, on the Lock screen or in search.

If one digs deep enough into the examples of commonly used macros (“Workflows” is the jargon term currently in vogue), one can find people who have their phone run an entire morning routine: a wake-up alarm, a tooth-brushing timer, a workout prompter with curated workout playlist, a morning commute planner, and a news podcast playlist for the drive. Where Siri comes in with its macros is in chaining and coordinating all of these different activities across their different phone applications – the user simply needs to allow Siri to monitor enough typical mornings, and after enough iterations the phone will learn the pattern and do the rest on its own, with just the occasional prompt for confirmation from the user.

Put another way, you can now have your phone attempt to build a model of you, your habits, preferences, and life, then feed that model back to you as a daily organizer, concierge, or (to my thinking) drill sergeant. The usefulness of Siri’s Suggestions depends entirely on the accuracy of the model it makes of you. And it is important to bear in mind that it is ultimately only ever a model, limited by the granularity of what it can measure, and indeed of what it is allowed to measure. In the few days I had been running iOS 12, Siri had already created a model of me that said I usually return my wife’s calls within an hour, and used that model to prompt me to return her call. Siri simply lacked the information that my wife and I had already rendezvoused, rendering the call-back unnecessary, and it lacked that information because neither my wife nor I have enabled the Siris on our phones to see our locations. Siri’s model was thus inaccurate because I had deliberately withheld a more precise tool by which to model me.
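To make the idea concrete, here is a minimal sketch – my own toy illustration, not Apple’s actual implementation – of the kind of habit model at work: record how long I usually take to return a call, then flag a missed call once it has sat longer than my typical delay.

```python
from statistics import median

class CallbackHabit:
    """Toy habit model: learns a typical call-return delay from observation."""

    def __init__(self):
        self.delays = []  # minutes between a missed call and my return call

    def observe(self, delay_minutes):
        self.delays.append(delay_minutes)

    def suggest_callback(self, minutes_elapsed):
        # Not enough history yet: make no suggestion at all.
        if len(self.delays) < 3:
            return False
        # Suggest once the call has waited longer than my typical delay.
        return minutes_elapsed > median(self.delays)

model = CallbackHabit()
for delay in (20, 35, 50):   # past behavior: calls returned within the hour
    model.observe(delay)

print(model.suggest_callback(90))   # an hour and a half overdue -> True
```

The blind spot is built in: the model tracks elapsed time, but nothing in it can represent “we already had lunch,” so the suggestion fires anyway.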

AI in Iteration

Some months ago, The Atlantic ran an article by Henry Kissinger wherein he explored his own discoveries and concerns with machine learning and artificial intelligence.  He describes his first real awakening here (emphasis mine):

The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.

The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.

As with Siri, AlphaGo learned by iteration – in this case game play that it could actively compute on its own, instead of passively monitored human behavior – to create a model set for how Go could be played under any circumstance. AlphaGo’s technique is, however, like all predictions based on iterations, really just a brute-force attack. Go has a limited rule set, which both gives it hard boundaries and makes it possible for that rule set to be programmed in toto into an iterative learning system. Siri is ultimately bounded too, not by game rules, but by what you allow it to monitor and what you have installed on your phone. It’s not going to suddenly suggest you play Angry Birds on your bathroom break if you don’t actually have that game on your phone in the first place (though if your bathroom breaks are routine enough, I do wonder if it might try to learn those, and suggest more fiber when your schedule deviates).
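The point about bounded rule sets can be shown in miniature. The sketch below – my own toy example, far simpler than anything in AlphaGo – solves a one-pile version of Nim (take one to three stones; whoever takes the last stone wins). Because the complete rule set fits in a few lines, every position can be searched exhaustively:

```python
from functools import lru_cache

# Toy illustration of a bounded rule set: one-pile Nim, where each player
# takes 1-3 stones and whoever takes the last stone wins. Because the full
# rules are this small, every position can be solved by exhaustive search.

@lru_cache(maxsize=None)
def winning(stones):
    # A position is winning if some legal move leaves the opponent losing.
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The player to move loses exactly on multiples of 4.
print([s for s in range(1, 13) if not winning(s)])   # -> [4, 8, 12]
```

Go’s rule set is just as hard-bounded; the difference is only that its position space is far too vast to enumerate this way, so the iteration has to be statistical rather than exhaustive.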

Returning to Kissinger’s article, he notes how other habit-learning algorithms already function:

In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Of course these sorts of things are all just forms of brute-force management of data sets: to build models of us, to “help” by automating things, or to solve specific problems. For Google, it’s all to automatically gather data on us so as to sell advertisements to us, or to sell us its politics and worldview. For Siri, it’s all to be useful enough that you keep coming back to Apple. For AlphaGo, it’s just to win a game. These are all narrow challenges, and as shown above they have limits and boundaries, both in the data they can create or gather and, most importantly, in what they are expected to do.  Siri may try to predict what you might do next and attempt to assist, but it’s not really suggesting you do anything truly new. It may suggest something new to you personally, but if so that is only because it is drawing on models of other Siri users and guessing that your routine might be similar enough to theirs.

AI Complications

What of more complicated or open-ended tasks? In the examples above, machine learning is still largely an iterative process of observation and model building to achieve goals that some human being picked – winning a game, timing your tooth-brushing, or running your life for you.  These all have explicit or implicit boundaries.  What happens when we start to turn Artificial Intelligence loose, or at least looser than we have to date?  What if we ourselves do not consider all the boundaries we must set in a system?

Human beings at times have a remarkable ability to find and exploit loopholes in meeting our own goals, or to completely miss what should be obvious and necessary information.  Take cutting through somebody’s back yard on your way home: that’s fine if you are on foot, a bit rude if you are on your bike, and criminal property destruction if you are in your car.  In navigation, the mode of transportation matters when it comes to how much we choose to bend the rules or deviate from a normative route, and this is not something you have to explain to most people.  But you do have to explain it to some people, just as I once had to write and illustrate a four-page work instruction on how to fold and tape up a cardboard box.

There is a funny story a friend told me from a US auto factory. He (an engineer) was tasked with finding out why every car coming off the line had a broken electrical harness. It turned out that one woman had a single job: install the washer fluid bottle. The main trunk harness happened to be partly in her way, and rather than just shove it aside long enough to snap the bottle in, she broke the harness right out. When my friend demanded an explanation (for surely, breaking the harness was far more work than doing it the right way), she snapped, “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

When one pictures an AI system assembling cars, one where a computer is instructed to iterate out the best way to build a car (as opposed to the direct teaching done today), it is highly likely we will see many similar breakdowns in logic and common sense, all because the system was not properly bounded.  The difference will be that when an automated system runs amok, it can do so thousands or even millions of times before it is halted, and, like my poor friend, the one who catches the error needs to have the right authority to stop the system.  Such a system run haywire would be a brilliant idiot.

As computers grow in complexity and capability, iterative AI (simulating out a million possibilities to find the best path) will do amazing things… and amazingly horrible things. Like the idiot’s “Nobody told me NOT to do it this way,” AI will, time and again, arrive at horrible ends because the programmers, smart people all, simply could not conceive of the brilliant idiot even as they built such a monster. I see this all the time with my suppliers and customers. As I said to one of them recently, “You are why I had to write the rule book!” No matter how narrowly you think you’ve defined the scope of a job, someone will find the loophole or nuance that you did not conceive of. But a poorly bounded AI could find it very quickly, and exploit the gap a million times over.  And I rather doubt we’re smart enough to program in the sort of generalized boundaries we implicitly rely on every day.

As Kissinger puts it:

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

We may be in danger of building an army of brilliant idiots, and of unleashing them the world over, all because we do not even know how to define their boundaries.

Or Maybe Not

Science fiction is replete with all manner of doomsday scenarios for AI gone haywire (my favorite is Gray Goo).  And we should be exceedingly careful with how and where we apply any technology.  Many were convinced that we would not survive the first decades of the atomic bomb, and in the wake of WWII that was not an unreasonable fear, but somehow we managed to do so.  Of course, as the investor commercials often warn, past performance is no guarantee of future results.  Just because humanity has muddled through numerous technological advances so far is no guarantee that we’ll not wipe ourselves out this time.  But that’s no guarantee we will, either.

But we do have to closely examine what we’re doing with what we have created.  Automation has its place in manufacturing: after all, when you are making 10,000 of something, you do want them to be identical, and identically good.  But in our daily lives?  Personally, I’m not keen to have my phone guessing my daily routines and then “assisting” me in completing them.  I’m not a checklist kind of person, and I don’t want my phone making one for me either.  I’ve turned all of those assistants, workflows, and macros off.  I don’t want them, and I won’t use them.  But maybe they’re of use to you.  Maybe you like having your morning routine sorted out at the push of a button, with your coffee already ordered and waiting for you as you drive to your AI-curated commute playlist.  Me?  Many mornings I keep the radio off, and I’m making my own coffee at home anyway.  I’m not an automaton, and I do not want my life automated.

The ongoing attempts to automate my life, whether in task macros or music recommendation like Pandora, have so far produced nothing more than brilliant idiots, time and again: in trying to model and measure me they miss the nuances and contours, and more importantly they miss that I really do not want to be ordered about by a machine-learning routine in the first place.  Right now these routines are overt and crude, and I can turn them off.  But what happens when some developer decides I should not be allowed to turn them off anymore?  When I need to be nudged and nagged into some routine as if I were another machine?  The boundaries of Siri or Alexa are simple — we can choose not to use them.  Fun as it is to worry about Gray Goo, I worry more about when machine learning or AI no longer has any boundaries because we, as human beings, have not figured out how to define those boundaries.  If a work instruction for folding a cardboard box runs to four pages, how many more gaps and errors would we see in an aggressive AI where we try to code in something like ethics?

I am not now proposing anything concrete.  Not yet.  Save only this: remember that as advanced as your machine-learning, AI, digital assistants, and driverless cars will get, they’ll still be brilliant and sophisticated idiots, and you will still be a human being with moral agency — an agency you cannot evade by letting your technology automate your life.

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 46 comments.

  1. Gary McVey Contributor
    Gary McVey
    @GaryMcVey

    A fascinating post, SkipSul. 

    25 years ago this fall, I bought one of the first Apple Newtons. It was a much better device than its comedy legend would suggest, and yes, the handwriting recognition was pretty good.

    What sank Newton in general was an ad campaign that vastly overstated what was possible in 1993. “Newton is always looking for ways to help you out. If you’re on a business trip, Newton will suggest Italian restaurants that suit your tastes. When you rent a car, just scrawl ‘Hertz $88’ and Newton will know to fill out a complete expense report for you”. It’s only been the past couple of years that phones acquired the sort of helper apps that would really make that level of assistance possible. 

    • #1
  2. Judge Mental Member
    Judge Mental
    @JudgeMental

    Do you want Skynet?  Because this is how you get Skynet.

    • #2
  3. SkipSul Inactive
    SkipSul
    @skipsul

    Gary McVey (View Comment):

    A fascinating post, SkipSul.

    25 years ago this fall, I bought one of the first Apple Newtons. It was a much better device than its comedy legend would suggest, and yes, the handwriting recognition was pretty good.

    What sank Newton in general was an ad campaign that vastly overstated what was possible in 1993. “Newton is always looking for ways to help you out. If you’re on a business trip, Newton will suggest Italian restaurants that suit your tastes. When you rent a car, just scrawl ‘Hertz $88’ and Newton will know to fill out a complete expense report for you”. It’s only been the past couple of years that phones acquired the sort of helper apps that would really make that level of assistance possible.

25 years ago I would have thought such promises were wonderful, but that was before I fully understood the massive server farms and privacy-surrendering spying required.  It was easier to want such things when we could envision the control being all local, but controlled by companies?  Like Google?  With their technocratic aspirations of control?  I’ll pass.

    • #3
  4. Percival Thatcher
    Percival
    @Percival

    There was once an expert system that was designed to differentiate between digitized photographs and determine which ones were of interest.  The system was fed a set of pictures, some of which contained tanks, and some of which didn’t. The system was informed which of the images contained that which was being sought and which ones did not. After this initial training period, a larger set of images were sent through.

    The system’s accuracy was phenomenal. It was able to get the correct images with far less training than had been predicted. There were smiles all around.

    Then a larger set of pictures were scanned and … the accuracy of the system plummeted. Where once even the hint of a vehicle largely obscured by vegetation proved no challenge, the system now couldn’t see a tank sitting in the center of the frame in the middle of an open field — repeatedly.

    The problem was eventually uncovered by some non-artificial intelligence. The training photographs had been taken in the same location on two separate days. On the first day, field maneuvers had been underway. This provided the armored vehicles that were of interest. It had been overcast that day. On the next day after the tanks had cleared out, the sun was shining. The system had picked up on the different amounts of shadows and thought that that was what it was supposed to be looking for.

    Thinking is harder than it looks. Just ask Sheldon Whitehouse.

    • #4
  5. Judge Mental Member
    Judge Mental
    @JudgeMental

    Percival (View Comment):

    There was once an expert system that was designed to differentiate between digitized photographs and determine which ones were of interest. The system was fed a set of pictures, some of which contained tanks, and some of which didn’t. The system was informed which of the images contained that which was being sought and which ones did not. After this initial training period, a larger set of images were sent through.

    The system’s accuracy was phenomenal. It was able to get the correct images with far less training than had been predicted. There were smiles all around.

    Then a larger set of pictures were scanned and … the accuracy of the system plummeted. Where once even the hint of a vehicle largely obscured by vegetation proved no challenge, the system now couldn’t see a tank sitting in the center of the frame in the middle of an open field — repeatedly.

    The problem was eventually uncovered by some non-artificial intelligence. The training photographs had been taken in the same location on two separate days. On the first day, field maneuvers had been underway. This provided the armored vehicles that were of interest. It had been overcast that day. On the next day after the tanks had cleared out, the sun was shining. The system had picked up on the different amounts of shadows and thought that that was what it was supposed to be looking for.

    Thinking is harder than it looks. Just ask Sheldon Whitehouse.

    I, for one, am in favor of attacking sunshine.

    • #5
  6. SkipSul Inactive
    SkipSul
    @skipsul

There’s another problem: When the Newton suggested an Italian place, why did it pick that one, and not another?  It was easier to trust the suggestion 25 years ago than today, when we know it is gamed.

    It is gamed by paid adverts.

    It is gamed by customer reviews, which could be faked.

    It is gamed by a search AI programmed to hide things its creator does not approve of.

It is gamed by that same AI to “Nudge” us into thinking we want what it wants us to want.  “Eat here, it’s healthier!”

    We have every reason to distrust how helpful it seems to be.

    • #6
  7. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    Yeah, true artificial intelligence is twenty years away, and has been for the last forty.

    • #7
  8. She Member
    She
    @She

    Just, brilliant, and full of good info, @skipsul, thanks.

    I took a broom to iOS 12 on my phone the day after it installed and started telling me how many hours I was looking at the screen every day:

    “New tools to empower you to understand and make choices about how much time you spend using apps and websites”

    Still trying to figure out why I need a tool to help me (that’s what they mean when they say empower), how to make choices about what I do with my phone.  For Pete’s sake.

    I’m not liking iOS 12 at all.   Not even the fact that, in a few short weeks, I’ll be able to “FaceTime with 32 people at once!!!” is warming me up to it.

    When the day comes that I can push a button, send Siri and my iPhone out the door to do my day’s work in a reliable fashion, while I roll over and go back to sleep for a few more hours, then I might start to get excited.  But not before then.

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room.   A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him.  I looked at his drill.  I looked at his 2″ hole drill bit.  “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon.  Now that would be helpful.

    • #8
  9. Gary McVey Contributor
    Gary McVey
    @GaryMcVey

    Phil Turmel (View Comment):

    Yeah, true artificial intelligence is twenty years away, and has been for the last forty.

You’re right. But on the other hand, a lot of the stuff that Hubert Dreyfus and other eternal AI skeptics said was beyond reach finally hit the threshold of practicality. Speech recognition, self-driving cars, language translation–in 1975-2000, all of them “proof” that the inflated hopes of 1957-70 were hokum. 

    • #9
  10. Judge Mental Member
    Judge Mental
    @JudgeMental

    She (View Comment):

    Just, brilliant, and full of good info, @skipsul, thanks.

    I took a broom to iOS 12 on my phone the day after it installed and started telling me how many hours I was looking at the screen every day:

    “New tools to empower you to understand and make choices about how much time you spend using apps and websites”

    Still trying to figure out why I need a tool to help me (that’s what they mean when they say empower), how to make choices about what I do with my phone. For Pete’s sake.

    I’m not liking iOS 12 at all. Not even the fact that, in a few short weeks, I’ll be able to “FaceTime with 32 people at once!!!” is warming me up to it.

    When the day comes that I can push a button, send my Siri and my iPhone out the door to do my day’s work in a reliable fashion, while I roll over and go back to sleep for a few more hours, then I might start to get excited. But not before then.

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    “I didn’t say cords, I said cards!  Business cards!  We’re going to put a little box underneath to catch them.  So, start drilling.”

    • #10
  11. Richard Finlay Inactive
    Richard Finlay
    @RichardFinlay

    SkipSul: brilliant and sophisticated idiots,

    As opposed to people in my circles, who lack brilliance and sophistication. 1 out of 3, though.

    • #11
  12. The Reticulator Member
    The Reticulator
    @TheReticulator

    I find it interesting that people allow machines to interact with their lives in this way. However, it will not be so amusing when I am required to allow it.

    • #12
  13. Hammer, The (Ryan M) Inactive
    Hammer, The (Ryan M)
    @RyanM

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    • #13
  14. Hammer, The (Ryan M) Inactive
    Hammer, The (Ryan M)
    @RyanM

    She (View Comment):

    Just, brilliant, and full of good info, @skipsul, thanks.

    I took a broom to iOS 12 on my phone the day after it installed and started telling me how many hours I was looking at the screen every day:

    “New tools to empower you to understand and make choices about how much time you spend using apps and websites”

    Still trying to figure out why I need a tool to help me (that’s what they mean when they say empower), how to make choices about what I do with my phone. For Pete’s sake.

    I’m not liking iOS 12 at all. Not even the fact that, in a few short weeks, I’ll be able to “FaceTime with 32 people at once!!!” is warming me up to it.

    When the day comes that I can push a button, send Siri and my iPhone out the door to do my day’s work in a reliable fashion, while I roll over and go back to sleep for a few more hours, then I might start to get excited. But not before then.

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    Oh, you are more patient than me. I’d have simply brought my drill from home. 

    • #14
  15. Judge Mental Member
    Judge Mental
    @JudgeMental

    Hammer, The (Ryan M) (View Comment):

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    I’m curious about how much of the tech is in the gizmo, and how much in the cloud.  If it’s just a microphone and modem kind of deal, then every word you speak in your house is going into the cloud and onto their corporate servers for analysis, just so that it can know to respond.

    Why would anyone sign up for that?

    • #15
  16. Dorrk Inactive
    Dorrk
    @Dorrk

    This article that I felt strangely compelled to read earlier today looks at one very specific instance of the perils of “smart” technology-aided living:

    My Period Tracker Tells My Boyfriend Everything About My Flow

    And it’s completely sabotaging our relationship in the process

    • #16
  17. James Lileks Contributor
    James Lileks
    @jameslileks

    Great post. I have all the most recent spiffy hot smarty-pants OS / iOS updates, but haven’t noticed Siri barging into my life at all. I’m just not busy enough to trigger any of this stuff.

Atlantic mag had a piece – can’t find the link at the moment – about AI as a friend for the lonely and troubled, and What It Means when we confess to our digital assistants, how we’ll inevitably come to see them as characters in our own lives. Emotional versions of sex robots for the lonely, I guess. It’ll be fascinating to see where this goes. I don’t think we’ll ever be able to shake off the knowledge that we can turn these things off, which reminds us of their artificiality. It’s at the heart of all the interactions. Alexa, stop. 

As it stands now, I love my watch: it gives me a custom tap on the wrist to say “this message is from your daughter, in Brazil,” and I raise my arm and see a small picture of her walking a dog in a distant city. No matter how good AI gets, Siri will never know why that matters. Only that it has a certain value in a hierarchy of values. 

    • #17
  18. She Member
    She
    @She

    Hammer, The (Ryan M) (View Comment):

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    Oh, you are more patient than me. I’d have simply brought my drill from home.

    I’d have been fired if I did that.

    • #18
  19. Percival Thatcher
    Percival
    @Percival

    She (View Comment):

    Hammer, The (Ryan M) (View Comment):

    SkipSul: “Nobody told me NOT to do it this way!” Further complicating the problem, as my friend was not a foreman, he lacked the authority to tell her not to break the wire harness.

    This made me smile and reminded me of the time I called Maintenance and explained to the secretary that I’d like someone to come and drill several 2″ holes in the counter top in the new PC training room. A few hours later, the carpenter showed up.

    I explained what I would like him to do, and showed him that the holes should be positioned so it was convenient to drop the power cords for the PC and the monitor down through them to the outlets below.

    “Oh, no!” He exclaimed, “I’m not allowed to do that.”

    I looked at him. I looked at his drill. I looked at his 2″ hole drill bit. “Why not?” I asked.

    “The electrician has to drill them, if you’re going to drop wires through them,” he said.

    The union.

    Next morning, with the carpenter’s drill and bit in hand, the electrician showed up.

    Perhaps Apple can find a way to solve that little conundrum soon. Now that would be helpful.

    Oh, you are more patient than me. I’d have simply brought my drill from home.

    I’d have been fired if I did that.

    They’d have to catch you.

    • #19
  20. Bryan G. Stephens Thatcher
    Bryan G. Stephens
    @BryanGStephens

    James Lileks (View Comment):

    Great post. I have all the most recent spiffy hot smarty-pants OS / iOS updates, but haven’t noticed Siri barging into my life at all. I’m just not busy enough to trigger any of this stuff.

    Atlantic mag had a piece – can’t find the link at the moment – about AI as a friend for the lonely and troubled, and What It Means when we confess to our digital assistants, how we’ll inevitably come to see them as characters in our own lives. Emotional versions of sex robots for the lonely, I guess. It’ll be fascinating to see where this goes. I don’t think we’ll ever be able to shake off the knowledge that we can turn these things off, which reminds us of their artificiality. It’s at the heart of all the interactions. Alexa, stop.

    As it stands now, I love my watch: it gives me a custom tap on the wrist to say “this message is from your daughter, in Brazil,” and I raise my arm and see a small picture of her walking a dog in a distant city. No matter how good AI gets, Siri will never know why that matters. Only that it has a certain value in a hierarchy of values.

    AI doesn’t know “why”

    • #20
  21. SkipSul Inactive
    SkipSul
    @skipsul

    Percival (View Comment):
    They’d have to catch you.

    • #21
  22. SkipSul Inactive
    SkipSul
    @skipsul

    Judge Mental (View Comment):

    Hammer, The (Ryan M) (View Comment):

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    I’m curious about how much of the tech is in the gizmo, and how much in the cloud. If it’s just a microphone and modem kind of deal, then every word you speak in your house is going into the cloud and onto their corporate servers for analysis, just so that it can know to respond.

    Why would anyone sign up for that?

    It seems that’s largely where it’s at right now.  For the time being, they’ve given up trying to do much speech parsing locally.

    Anyone here ever use any of the dictation programs of the 90s or early 00s?  They showed a lot of promise, but they were only ever about 90% good enough, and their error rates, even after lots of training and correction, still required a great deal of extra time to correct.  I tried them for a while before concluding that A) I hate talking to computers, and B) Any time saved in dictation was lost in correction. 

    However, they were an absolute boon to those who cannot type easily or at all, and they’re still vital for computers that are not constantly connecting to the internet.
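    That “time saved in dictation was lost in correction” tradeoff is easy to put numbers on. A rough back-of-envelope sketch (every speed and cost below is an illustrative assumption, not a measurement):

```python
# Rough break-even model for dictation vs. typing.
# All numbers here are invented for illustration, not benchmarks.

def net_minutes(words, wpm, error_rate=0.0, fix_seconds_per_error=0.0):
    """Total minutes to produce `words` words, including time spent fixing errors."""
    entry_time = words / wpm
    fix_time = words * error_rate * fix_seconds_per_error / 60
    return entry_time + fix_time

doc = 1000  # words in the document

# A careful typist at 40 wpm with negligible errors:
typing = net_minutes(doc, wpm=40)

# Dictation at 100 wpm, but "only 90% good enough",
# with each error taking ~15 seconds to hunt down and fix:
dictation = net_minutes(doc, wpm=100, error_rate=0.10, fix_seconds_per_error=15)

print(f"typing:    {typing:.1f} min")   # 25.0 min
print(f"dictation: {dictation:.1f} min")  # 35.0 min
```

    On these made-up numbers, dictating at 2.5x typing speed still comes out behind once the corrections are counted, which matches the experience described in the comment.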

    • #22
  23. SkipSul Inactive
    SkipSul
    @skipsul

    James Lileks (View Comment):
    Great post. I have all the most recent spiffy hot smarty-pants OS / iOS updates, but haven’t noticed Siri barging into my life at all. I’m just not busy enough to trigger any of this stuff.

    If you have had it mostly turned off or restricted for a while, then the IOS12 upgrade will leave those things still turned off.  What is turned on by default after the upgrade is the new stuff, and even there it’s only those new features that are not dependent on things you had already blocked.

    But try starting a new device from scratch – no porting over settings from older devices – and see what happens.

    James Lileks (View Comment):
    As it stands now, I love my watch: it gives me a custom tap on the wrist to say “this message is from your daughter, in Brazil,” and I raise my arm and see a small picture of her walking a dog in a distant city. No matter how good AI gets, Siri will never know why that matters. Only that it has a certain value in a hierarchy of values.

    Right.  It’s of a great deal more use and value when we take control of it, and don’t let it try to guess.  Do this here, not there.  What concerns me is that while this is workable for the technically savvy, the developers know that most people are not going to spend the hours customizing their tech to behave, so their research and new features are directed at those who would rather have the tech do the heavy lifting.

    • #23
  24. Percival Thatcher
    Percival
    @Percival

    SkipSul (View Comment):

    Percival (View Comment):
    They’d have to catch you.

    “Why is there a perfectly round hole in this table?”

    “Dunno. Precision termites?”

    • #24
  25. Percival Thatcher
    Percival
    @Percival

    SkipSul (View Comment):

    Judge Mental (View Comment):

    Hammer, The (Ryan M) (View Comment):

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    I’m curious about how much of the tech is in the gizmo, and how much in the cloud. If it’s just a microphone and modem kind of deal, then every word you speak in your house is going into the cloud and onto their corporate servers for analysis, just so that it can know to respond.

    Why would anyone sign up for that?

    It seems that’s largely where it’s at right now. For the time being, they’ve given up trying to do much speech parsing locally.

    Anyone here ever use any of the dictation programs of the 90s or early 00s? They showed a lot of promise, but they were only ever about 90% good enough, and their error rates, even after lots of training and correction, still required a great deal of extra time to correct. I tried them for a while before concluding that A) I hate talking to computers, and B) Any time saved in dictation was lost in correction.

    However, they were an absolute boon to those who cannot type easily or at all, and they’re still vital for computers that are not constantly connecting to the internet.

    We worked on a voice recognition system for tuning radios. One of the voice trainers was from Brooklyn, the other was from Chennai, and the rest of us were far too easily amused.

    • #25
  26. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    SkipSul: What happens when we start to turn Artificial Intelligence loose, or at least looser than what we have done to date? What if we ourselves do not consider all the boundaries we must set in a system?

    We get a bunch of mechanical toddlers?

    SkipSul: We may be in danger of building an army of brilliant idiots, and unleashing them the world over, all because we do not even know how to define their boundaries.

    Or possibly teenagers.

    SkipSul: The ongoing attempts to automate my life, whether in task macros or music learning like Pandora, have so far produced nothing more than brilliant idiots time and again – in trying to model and measure me they miss the nuances and contours, and more importantly they miss that I really do not want to be ordered about by a machine-learning routine in the first place.

    OK, toddlers, definitely toddlers.

    • #26
  27. Midget Faded Rattlesnake Member
    Midget Faded Rattlesnake
    @Midge

    Hammer, The (Ryan M) (View Comment):

    I believe I have Siri turned off on my phone. It worried me at first when she asked “why are you doing this, Ryan?” but she hasn’t bothered me since.

    “You’re an incompetent, Siri.”
    “Searching the web for ‘urine incontinence’…”

    • #27
  28. Hang On Member
    Hang On
    @HangOn

    And there’s always this problem


    • #28
  29. Hugh Inactive
    Hugh
    @Hugh

    This would make a good TED talk.

    • #29
  30. Hang On Member
    Hang On
    @HangOn

    AI is actually a variety of different techniques rather than one thing. Heuristics, neural networks, Markov chains, and natural language processing, among others, are all classified as AI. Within heuristics alone there are at least a half dozen techniques, and natural language processing and neural networks seem to go together, so hybrids arise. Given how nuanced English is, and how fast it changes, good luck with a program that will be 100% correct. But then, humans miscommunicate all the time as well.

    To “solve” a realistic problem, you have to fit the problem into the domain of the solution technique, and problems can arise in that fitting.
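    For what it’s worth, the simplest of those techniques, a Markov chain over observed actions, fits in a dozen lines and shows exactly the “brilliant idiot” failure mode from the original post: it predicts the most frequent next action with no idea why. (The event log and action names below are invented for illustration.)

```python
from collections import Counter, defaultdict

# Toy Markov-chain "habit" model: count which action tends to follow
# which, then suggest the most frequent follow-up. The log is made up.
log = ["decline_call", "return_call", "decline_call", "return_call",
       "decline_call", "text_reply", "decline_call", "return_call"]

transitions = defaultdict(Counter)
for prev, nxt in zip(log, log[1:]):
    transitions[prev][nxt] += 1

def suggest(action):
    """Most common action observed to follow `action`, or None if unseen."""
    followers = transitions[action]
    return followers.most_common(1)[0][0] if followers else None

print(suggest("decline_call"))  # -> "return_call"
```

    The model duly suggests “return_call” after every declined call, whether or not you have already had lunch with the caller. It captures the pattern, never the reason.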

    • #30
Become a member to join the conversation. Or sign in if you're already a member.