I, Robot, Am Not Taking Over Any Time Soon

 

In “Conservatives are Too Quick to Dismiss the Rise of the Robots,” James Pethokoukis worries that, whereas in the past technology has given rise to new jobs to replace those lost to innovation, this time it may be different.

James provides us with an excellent specimen of the kind of thinking that constantly causes macroeconomists, politicians, and other self-styled high-level thinkers to make serious errors when analyzing changes to economies and human societies. I’m not picking on James, who’s an otherwise excellent analyst, but on this error, which is so common that it really needs to be discussed.

This phrase, in particular, jumped out at me:

Just think about the progress made in autonomous vehicles and the fact that the most common job in most states is that of truck driver.

His statement displays a top-down approach to analyzing the problem. Truck drivers drive vehicles. Soon, vehicles will be autonomous. So, no more truck drivers. But this kind of analysis doesn’t reflect the true complexity of truck driving.

And this isn’t just about truck drivers. To explain the problem and why it matters so much, I need to digress. The economy is a complex system. Society itself is a complex system. Workers in a society have to be productive within this complexity. These systems are very different from, say, a complicated machine. A complicated machine can be understood in a reductionist way: Take apart a motor; understand the various parts and what they do; and you can understand what the motor does. If you have full understanding of the motor, you can treat it like a black box, with inputs and outputs, and ignore the complicated workings inside. Engineers and scientists use this type of analysis to break down complicated problems and organize them into simpler ones.

Complex systems turn this upside down. A complex system looks simple on the surface, but becomes increasingly complex as you drill down. Complex systems are more than just a collection of parts – their behavior is governed not just by what each part does, but by the interactions among the parts. For example, you can’t understand a brain just by learning what neurons do. You must also — at least — understand the web of billions of neurons and the interactions among them.

The other problem with analyzing complex systems from the top down is that these systems function by means of feedback from the bottom, which causes constant iteration. If the price of steel goes up, that information changes the behavior of steel producers and consumers. That, in turn, causes its price to change again. The new price may create more consumers or more producers, or cause manufacturers to substitute other materials, which in turn causes the prices of those materials to change, and so on, ad infinitum.

You can think of such systems as a kind of self-programming computer: They constantly take in data, process it, and in response change the output. This process of feedback and constant change makes these systems very sensitive to initial conditions; as a result, seen from on high, they are opaque, and behave unpredictably. Hence the most common word in a macroeconomist’s vocabulary is apt to be “unexpectedly.”
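
To make “sensitive to initial conditions” a little more concrete, here is a toy sketch in Python (my own illustration, not an economic model): a crude feedback rule in which each period’s price responds to the gap between demand and supply. The rule, the “gain” parameter, and the starting prices are all invented; the only point is how quickly two nearly identical starting points stop resembling each other.

    # Toy feedback loop: each period, price responds to the gap between
    # demand and supply, and demand responds to price. A caricature of a
    # feedback process, not a model of any real market.

    def simulate(initial_price, periods=60, gain=3.9):
        """Iterate a crude price-feedback rule and return the price path."""
        price = initial_price              # price as a fraction of some ceiling, 0..1
        path = []
        for _ in range(periods):
            demand = 1.0 - price           # toy assumption: higher price, lower demand
            price = gain * price * demand  # market overreacts to excess demand
            path.append(price)
        return path

    a = simulate(0.400000)
    b = simulate(0.400001)                 # starting price differs by one part in 400,000

    for t in range(0, 60, 10):
        print(f"period {t:2d}: {a[t]:.4f} vs {b[t]:.4f}")
    # After a few dozen iterations the two paths typically bear no resemblance
    # to each other, even though they started essentially identical. Now imagine
    # trying to steer such a system from the top using a handful of aggregates.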

Conservatives tend to understand this, because the thinkers we tend to read and follow understood it. Adam Smith’s phrase “the invisible hand” suggests how well he understood the way emergent properties drive complex systems. Hayek’s opposition to “scientism” and to the pretense of knowledge was an evocation of complex systems theory. In fact, Hayek is considered one of the early contributors to that field.

Statists believe that the economy and society can be treated like that complicated machine. If they can find the levers that control the economy, the smart people at the top can push and pull on them and drive the ship of state. Social scientists want to be mathematical and scientific, just like the engineers and physicists, so they go through contortions to create models decorated with a few numbers and formulas into which those numbers can be plugged. They use these to justify applying “scientific” techniques to managing people and the interactions between them. This is what Hayek called “scientism,” not science.

When macroeconomists reduce the economy to aggregate variables like GDP, employment, capital, inflation, or the consumer price index, they’re abstracting away everything that really matters in an economy in favor of a few numbers that are amenable to mathematical modeling. These numbers may indeed be useful when trying to understand the state of an economy, but the variables can’t be tweaked by central planners in sure confidence that the outcomes will be predictable. Attempts to do so lead to unintended consequences and to the destruction of the feedback forces the system needs to remain healthy.

If you’ve never read it, I highly recommend the classic essay I, Pencil by Leonard Read. It’s a perfect description of the way complex systems deceive people who look at them only from a very high level. If you ask someone how hard it is to build a pencil, they might think about it and say, “Oh, not hard. You need a wooden dowel, a hole drilled in it, and some lead or graphite to fill the hole. Glue it in, and you’re done.” But as Read’s pencil explains in the first person, “Simple? Yet, not a single person on the face of this earth knows how to make me.”

The essay drills down into the construction of the pencil. You need some wood. Fine. Where do you get it? Will any wood do? Or are there special characteristics? And how do you get this wood? Chopping down trees? How do you do that? With an axe or a saw? How do you make an axe or a saw? Oh, you need a steel axe head. How do you make steel?  And so on, and so on. Spoiler alert: By the time you walk down just a couple of steps of production, you find efforts that require thousands of people, each with specialized knowledge the others do not share. It’s an incredibly complex endeavor, and the amount of economic and physical coordination required to make pencils is astounding.

The reality of complex systems is the reason conservatives oppose central planning. Hayek knew this, and it formed the core of his arguments against an overweening state, the supposed superiority of macroeconomic modeling, and decision-making by central authorities.

This failure to see hidden complexity is not limited to politicians and economists. Most engineering projects that run over budget, or that fail completely, do so because of a failure to take into account hidden complexity lurking in the details. Software engineering has moved away from top-down design and toward bottom-up, iterative development cycles precisely because it better matches the real world. The largest, most carefully thought-out architecture developed by people in the head office generally doesn’t survive contact with the real world, which is why that type of development isn’t done much any more.

Now back to the truck driver. Can a robot drive a truck? Maybe, on a well-documented road, and under unexceptional circumstances. From the high-level view, that answers the question. But if you ask a truck driver what he does, you might find that he also loads and unloads cargo. And if you dig into that activity, you might find that he needs to rely on years of experience  to know how to do that safely and efficiently with the load properly balanced and secured. He may be required to act as an agent for the company, collecting payment and verifying that the shipment matches the manifest. The truck driver is also the early warning system for vehicle problems. He has the knowledge and judgment to be able to tell if something is wrong. A rattling sound on a road full of debris might not be a problem. The same rattle heard on a smooth road? Might be a problem.

The truck driver is the coordinator of on-road repairs. His presence protects the cargo from theft or tampering. He deals with many different end-customers, many of whom are still using old-fashioned paper manifests and invoices a computer can’t deal with. He may use his judgment to determine if a check can be accepted for delivery. Each customer’s loading dock may have hazards and unique maneuvering difficulties. Then there are the ancillary benefits of human truck drivers – they cement relationships with customers. They spot opportunities. They report traffic accidents or crime to the police. They notice damaged goods in a shipment. Sleeping in the truck protects it from theft.

These are just the things off the top of my head, and I’m not even a truck driver. I’ll bet if you asked Dave Carter what he does, he could go into much greater detail. And if you asked other people in the chain, they’d have their own set of complexities that are part of the entire work process called “truck driving.”

Robots don’t do complexity well. They are excellent at repetitive tasks, or tasks that can be extremely well defined and which have a fixed set of parameters and boundary conditions. A robot on an assembly line knows exactly what it has to do, and the list of potential failures (parts out of alignment, defects in materials, etc.) is well known. Even a self-driving robot car needs to know what the road looks like — Google’s cars use pre-mapped road data — and it can’t deal with situations that are very far outside the norm.

We are making strides here, and Google’s robot cars have a surprising amount of autonomous decision-making capability when it comes to things like cars stopping in front of them suddenly and obstacles on the road. But that’s a far cry from the kind of generalized human judgment required in most occupations — which is why the robots won’t be taking over any time soon.

I can’t say the same for the central planners. We seem to be stuck with them.

There are 71 comments.

  1. Casey (@Casey)

    Dan Hanson: we can have millions of cars running around autonomously and communicating through networks without a lot of very strange and potentially bad things happening.

    I’m sure bad things will happen but I think fewer bad things will happen.  It seems to me that crashes related to vehicle problems and to road conditions will likely be a wash.  Technical difficulties will be a new add-in.  But the idiot people problem will disappear entirely.

    No teen drivers, no drunk drivers, no cell phone chatters, no make-up fixing, no coffee spills.  No confusion, no blind spots, no “Oh crud!  The exit!” crossing three lanes in a panic.

    The more driving gets automated, the less there will be to react to.  And things will be safer and smoother.

    • #61
  2. Dan Hanson (@DanHanson)

anonymous said:

I shall again post Ray Kurzweil’s cartoon from How to Create a Mind.

In my opinion, that cartoon shows how NOT to think about this problem. Being human is not a matter of having a number of individual tasks you need to be able to solve. It’s not like there’s some big list of activities that can be checked off to determine ‘human-ness’.

    Take a couple of items still on his wall.  Can a robot play baseball?  Why not?  That seems to me to be a problem limited mostly by engineering,  and the decisions required can be solved in the same way that a computer can learn to play poker.  Sure.   But one of his items on the wall is “only a human can review a movie”, as if this is a problem of the same order.  But what makes a good movie?  Sure, a  computer can figure out if it’s under-exposed,  it can probably compare scenes to databases of ‘good’ scenes and look for composition errors and the like.   But it will be a long, long time before a computer can watch a movie and tell you if it’s likely to make you laugh or cry.  Even longer to tell you if a movie makes an important, novel statement about the human condition.

Another thing is that almost none of the ‘solved’ problems have been solved the way humans solve them, and the solutions aren’t generalizable. Computers can play chess better than humans, but they don’t play chess like humans do. They substitute speed, database lookups, and a few heuristics for real intelligence and judgement. Take the best chess-playing computer in the world, add no new code to it, and see how well it does playing poker. Or reviewing a movie.
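
    To make that concrete, here is a rough sketch (my own toy illustration, not code from any real engine) of the shape of the computation a chess program performs: enumerate future positions, score them, and back the scores up the tree. I’ve written it for tic-tac-toe because chess would take pages; real engines add enormous speed, opening and endgame databases, and tuned heuristics to cut the search off early, but the character of the computation is the same, and none of it transfers to poker or to reviewing a movie.

        # Minimax look-ahead on tic-tac-toe (a toy illustration). Chess engines
        # cannot search to the end of the game the way this does, so they stop
        # early and score positions with heuristics (material count and so on),
        # but the basic shape (enumerate futures, score, back the scores up) is the same.

        WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

        def winner(board):
            for a, b, c in WIN_LINES:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score, move) for the side to move; X maximizes, O minimizes."""
            w = winner(board)
            if w == 'X':
                return 1, None
            if w == 'O':
                return -1, None
            moves = [i for i, square in enumerate(board) if square == ' ']
            if not moves:
                return 0, None                  # draw
            best = None
            for m in moves:
                child = board[:m] + player + board[m+1:]
                score, _ = minimax(child, 'O' if player == 'X' else 'X')
                if best is None:
                    best = (score, m)
                elif (player == 'X' and score > best[0]) or (player == 'O' and score < best[0]):
                    best = (score, m)
            return best

        # With perfect play from both sides, the best X can force from an empty board
        # is a draw (score 0).
        print(minimax(' ' * 9, 'X'))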

The problem with predicting human-like intelligence in machines is that we still don’t know what produces it in humans. The human brain is not at all like a computer – not even like a computerized neural network. There have been humans who have had severe brain injuries that have reduced their neuron counts below those of a gorilla or a dolphin, with no apparent impairment to their cognition. The brain is a mixture of digital and analog signals existing within multiple complex systems – the limbic system, the nervous system, the immune system, a complex system of bacteria that regulates much more than we thought, etc. It lives in a chemical bath that changes constantly and affects things in ways we don’t understand.

    I believe artificial intelligence is possible,  but I don’t believe we can ‘design’ it.  I think it will emerge as a property of an increasingly complex system, and we may not even recognize it as an intelligence at first.   Heck,  it might already be here in the form of emergent properties of our increasingly networked world that behave in ways that seem ‘intelligent’ but which no one ever planned.

    • #62
  3. Dan Hanson (@DanHanson)

    anonymous said:

    Computing power available at constant cost has been growing exponentially for more than a century (I include mechanical/electromechanical calculators and tabulating machinery in this category as well as electronic computers).  In the future, there are only two possible outcomes:

    1. Exponential growth will continue.

    2. Exponential growth will cease.

You are talking about computer hardware. As you know, software is not like that at all. Computer hardware has followed Moore’s law because, as Richard Feynman said, there is plenty of room at the bottom. By shrinking circuitry we make it faster and cheaper.

But software has not progressed like that. There is no exponential growth in software capability, other than that which was facilitated through advances in hardware and networking. Artificial intelligence gains are not exponential. There have been breakthroughs in some areas due to the availability of things such as huge datasets of photos that can be used to train photo-matching algorithms. Bipedal robots and such are as much the result of improvements in sensors and motors as of software. And even those are in their infancy – the last DARPA challenge saw the best robots in the world failing at tasks a toddler does without thinking.

In fact, I think software is starting to hit some limitations. In my own industry I’m seeing increasingly troubled or failed software projects because the environment they are required to work in is increasingly complex. Writing an inventory management system for a company with 10 suppliers and a 30-day inventory supply is much simpler than writing one for a company with 500 suppliers operating on a ‘just-in-time’ basis, using electronic interchange messages to coordinate everything. Software requirements are growing in complexity, and this is challenging the way we plan and manage those projects. We have a long way to go.

    I would happily take your 2030 bet.  Assuming Ricochet is still here in 2030, maybe we’ll have the chance to revisit this and see who’s right.

    • #63
  4. Dan Hanson (@DanHanson)

    iWe said:

    I guess I would say it is related to the question  of how deterministic the future really can be. My answer is that complex systems are actually unmodellable – as per the reference I brought up above. Better computers have not led to better near or long-term weather prediction because the weather is far more complex and non-linear than any model can capture.

    Absolutely correct.  The economy can’t be predicted.  Innovation can’t be predicted.  The climate system can’t be predicted.  Stock movements can’t be predicted.   And this isn’t just because we haven’t figured out how – it’s because they are phenomena that are fundamentally unpredictable due to their nature.

    Notice that in politics such systems appear to be increasingly the refuge of the left.   One nice thing about mucking about with complex systems is that there are so many interactions that you can always find an excuse for why your prediction or intervention failed.  There are no control economies or control climates you can compare your results to.   This is why applying scientific reductionism and other scientific methods in an attempt to model, predict, and control these systems is best thought of as ‘scientism’, and not science.  It’s modern-day phrenology.

    • #64
  5. Dan Hanson (@DanHanson)

    anonymous said:

So, it isn’t just the biological substrate which is magic, but there is something in the human brain, absent from those of other animals (from which, as a moment’s glance at the comparative physiology of the brains of various species shows, it clearly evolved), which endows it with this mystic power of abstraction. Well then, where is it? Can the philosopher identify any structure in the human brain which is absent from the brains of other primates?

There’s nothing special about the human brain compared to other animals’ brains – there is something special about every animal brain that makes it different from every other animal’s, and makes species behave very differently even if they fall roughly into the same brain-complexity bin.

Tell me – if all that matters is the size of the neocortex, do you believe that once we build a digital one with the complexity of, say, a dolphin’s, it will magically begin to exhibit dolphin-like intelligence and dolphin-like traits? And if we go a few more generations down the line and build a neocortex the size of a gorilla’s, will we start seeing gorilla behavior? And if so, how do you explain the apparent intelligence of many birds, which do not have a neocortex at all? African Gray parrots have shown signs of intelligence that exceeds that of the great apes, yet they have very alien brain structures with no cortex and fewer neurons than other, much dumber animals.

How do you explain the fact that we can perform a hemispherectomy on a human, literally removing half the brain, with no apparent loss of cognitive function? Even small infants who have had half their brains removed to stop seizures have been found to grow up normally without learning impairment. And yet these brains have fewer neurons than those of gorillas and other primates, which we have never taught to think at anywhere near the capacity of a human.

    There is so much we do not understand about how the brain works that I believe it’s folly to assume that predicting artificial intelligence is as simple as counting transistors and the number of connections between them.

    • #65
  6. Dan Hanson (@DanHanson)

Tuck said:

    I rented a Chevy a while ago that had an automatic braking system.  I was driving down a street and someone on the right side opened his car door.

    The car interpreted this as a possible collision, and slammed on the brakes.  It scared the daylights out of everyone who was in the car looking out the windshield, because we’d all seen the same thing and determined it was obviously not a threat, as the door was out of the car’s path of travel.

    Collision avoidance is a nice feature, potentially.  When they get it to work as well as a human brain…

    I know what you mean.  I have a 2015 Ford Escape that has all the collision-avoidance stuff built into it.  State of the art stuff:  Cross-path detection, front and rear distance sensors,  blind-spot warning,  you name it.

I paid over $1000 for those options, thinking they’d be helpful. I have since turned them all off. The number of false positives and false negatives is maddening. The car is constantly bleeping and blooping at you, which causes you to learn to ignore the cacophony, or at worst distracts you and makes you less safe. But even worse, sometimes it fails to warn of a real threat, meaning you can’t rely on it, which means you always have to check manually anyway. This makes it a gimmick and totally useless. It’s just far too dumb to understand the complex environment around it.

    As a simple example,  it utterly fails in parking lots because it can’t tell that a car creeping along waiting to turn into an empty stall is not a threat to me backing out of a stall five cars away.   It can’t tell that the car driving in the next section over is not a threat to me backing out of my stall.   But on the other hand,  it’s not smart enough to know that the car directly behind me that’s not yet moving but which has its reverse lights on IS a threat to me and I’d better not pull out behind him until I know he sees me.

    • #66
  7. Dan Hanson (@DanHanson)

    anonymous: You’re on!  Bottle of Johnnie Walker Black as the stake?  What shall be the specific criterion?  I’d suggest a machine passing the original text-based Turing test, but perhaps you’d prefer machines routinely writing book and movie reviews for mass market publications, just as they are now writing sports copy.

    I’ll take the bet.  However… It seems to me that defining ‘human intelligence’ isn’t a trivial task.   For me,  for an AI to be seen as truly having human-level intelligence it’s not enough for it to excel at any one given task, no matter how difficult it seems today.  We’re pretty good at coming up with ways to get computers to solve very specific problems.  If you’ve seen what Google’s pattern-matching networks are achieving, and how they are doing it,  it’s pretty astounding.

    DARPA has a better way of determining this stuff – you bring your robot to a competition which has a bunch of challenges – some you know of beforehand,  and some which are sprung on you and you can’t anticipate.   If your robot can figure out the ones that you didn’t know about,  that’s pretty cool.

    So give me a robot that can play poker,  but which might have to switch to another game of my choosing and still play well – even if it’s never heard of the game (so long as I give it the rules).  And while we’re playing,  I’ll chat with it and ask it questions like,  “Hey,  what do you think of this painting?”  Or maybe, “watch this movie with me,  then tell me what you liked and didn’t like.”   But the key is you can’t know exactly what I’m going to ask it to do,  other than that it’s something I would expect any educated human to be able to do.   It doesn’t have to have physical abilities – no opening of jars or anything like that.  Purely mental questions.  Maybe I’ll tell it a joke and see if it laughs.  Maybe I’ll say something sarcastically and see if it can tell.   Maybe I’ll  ask it to write a sonnet.   But you can’t know in advance.

    I guess this is just a more detailed statement of the Turing test.  Do you believe we’ll have computers before 2030 capable of this?

    • #67
  8. Casey (@Casey)

    I don’t care who wins the bet so long as you share.

    • #68
  9. Dan Hanson (@DanHanson)

    anonymous:

    Dan Hanson: African Gray parrots have shown signs of intelligence that exceeds that of the great apes, yet they have very alien brain structures with no cortex and fewer neurons than other, much dumber animals.

    There is evidence that there is a structure in the avian brain which is homologous to the neocortex in mammals. There is also evidence for the presence of mirror neurons in birds. Mirror neurons appear to be important in imitative behaviour and learning, and their presence (or analogues) in birds may explain their ability to learn birdsongs and imitate sounds.

    There’s a lot more going on in avian brains than just mimicry.  Alex the African Gray Parrot had a meaningful vocabulary of over 100 words (i.e. he actually understood them).  He understood the concepts of bigger, smaller, same, and different.  He could identify seven different colors by name.  He could identify objects by their shape and/or color.  He seems to have understood object permanence.   And the list goes on.  He succeeded in cognitive tests no other non-human animal has ever passed.  I owned an African Grey Parrot for a few years,  and can attest to how smart they are.  But their brains are not constructed anything like mammalian brains.

    • #69
  10. Whiskey Sam (@WhiskeySam)

    anonymous:

    Dan Hanson:I would happily take your 2030 bet. Assuming Ricochet is still here in 2030, maybe we’ll have the chance to revisit this and see who’s right.

    You’re on! Bottle of Johnnie Walker Black as the stake? What shall be the specific criterion? I’d suggest a machine passing the original text-based Turing test, but perhaps you’d prefer machines routinely writing book and movie reviews for mass market publications, just as they are now writing sports copy.

    That explains a lot of CBS Sports’ articles lately.

    • #70
  11. Frank Soto (@FrankSoto)

    anonymous:

    Dan Hanson: There’s a lot more going on in avian brains than just mimicry. Alex the African Gray Parrot had a meaningful vocabulary of over 100 words (i.e. he actually understood them). He understood the concepts of bigger, smaller, same, and different. He could identify seven different colors by name. He could identify objects by their shape and/or color. He seems to have understood object permanence. And the list goes on. He succeeded in cognitive tests no other non-human animal has ever passed. I owned an African Grey Parrot for a few years, and can attest to how smart they are. But their brains are not constructed anything like mammalian brains.

    African Greys are remarkable. N’kisi appears to have a vocabulary of around 950 words and uses them in complete sentences. The parrot speaks in idiomatic American English, inventing circumlocutions for terms not known.

    And, he appears to be telepathic.

    On the subject of African Greys, my father had one, and the damn thing would play honest to God pranks on people and then laugh.

    He enjoyed making a perfect replica sound of the phone ringing, and then yelling for a family member in a shockingly authentic voice.  When they’d come running to answer it, the bird would start laughing.

    • #71