I, Robot, Am Not Taking Over any Time Soon
In “Conservatives are Too Quick to Dismiss the Rise of the Robots,” James Pethokoukis worries that whereas in the past, technology has given rise to new jobs to replace those lost to innovation, this time it may be different.
James provides us with an excellent specimen of the kind of thinking that constantly causes macroeconomists, politicians, and other self-styled high-level thinkers to make serious errors when analyzing changes to economies and human societies. I’m not picking on James, who’s an otherwise excellent analyst, but on this error, which is so common that it really needs to be discussed.
This phrase, in particular, jumped out at me:
Just think about the progress made in autonomous vehicles and the fact that the most common job in most states is that of truck driver.
His statement displays a top-down approach to analyzing the problem. Truck drivers drive vehicles. Soon, vehicles will be autonomous. So, no more truck drivers. But this kind of analysis doesn’t reflect the true complexity of truck driving.
And this isn’t just about truck drivers. To explain the problem and why it matters so much, I need to digress. The economy is a complex system. Society itself is a complex system. Workers in a society have to be productive within this complexity. These systems are very different from, say, a complicated machine. A complicated machine can be understood in a reductionist way: Take apart a motor; understand the various parts and what they do; and you can understand what the motor does. If you have full understanding of the motor, you can treat it like a black box, with inputs and outputs, and ignore the complicated workings inside. Engineers and scientists use this type of analysis to break down complicated problems and organize them into simpler ones.
Complex systems turn this upside down. A complex system looks simple on the surface, but becomes increasingly complex as you drill down. Complex systems are more than just a collection of parts – their behavior is governed not just by what each part does, but by the interactions among the parts. For example, you can’t understand a brain just by learning what neurons do. You must also — at least — understand the web of billions of neurons and the interactions among them.
The other problem with analyzing complex systems from the top down is that these systems function by means of feedback from the bottom, which causes constant iteration. If the price of steel goes up, that information changes the behavior of steel producers and consumers. That, in turn, causes its price to change again. The new price may create more consumers or more producers, or cause manufacturers to substitute other materials, which in turn causes the prices of those materials to change, and so on, ad infinitum.
You can think of such systems as a kind of self-programming computer: They constantly take in data, process it, and in response change the output. This process of feedback and constant change makes these systems very sensitive to initial conditions; as a result, seen from on high, they are opaque and behave unpredictably. Hence the most common word in a macroeconomist’s vocabulary is apt to be “unexpectedly.”
Conservatives tend to understand this, because the thinkers we tend to read and follow understood it. Adam Smith’s phrase, “the invisible hand,” suggests how well he understood the way emergent properties drive complex systems. Hayek’s opposition to “scientism” and to the pretense of knowledge was an evocation of complex systems theory. In fact, Hayek is considered one of the early contributors to that field.
Statists believe that the economy and society can be treated the same way. If they can find the levers that control the economy, the smart people at the top can push and pull on them and drive the ship of state. Social scientists want to be mathematical and scientific, just like the engineers and physicists, so they go through contortions to create models dressed up with a few numbers and formulas into which data can be plugged. They use these to justify applying “scientific” techniques to managing people and the interactions between them. This is what Hayek called “scientism,” not science.
When macroeconomists reduce the economy to aggregate variables like GDP, employment, capital, inflation, or the consumer price index, they’re abstracting away everything that really matters in an economy in favor of a few numbers that are amenable to mathematical modeling. These numbers may indeed be useful when trying to understand the state of an economy, but the variables can’t be tweaked by central planners in sure confidence that the outcomes will be predictable. Attempts to do so lead to unintended consequences and to the destruction of the feedback forces the system needs to remain healthy.
If you’ve never read it, I highly recommend the classic essay I, Pencil by Leonard Read. It’s a perfect description of the way complex systems deceive people who look at them only from a very high level. If you ask someone how hard it is to build a pencil, they might think about it and say, “Oh, not hard. You need a wooden dowel, a hole drilled in it, and some lead or graphite to fill the hole. Glue it in, and you’re done.” But as Read’s pencil replied in the first person, “Simple? Yet, not a single person on the face of this earth knows how to make me.”
The essay drills down into the construction of the pencil. You need some wood. Fine. Where do you get it? Will any wood do? Or are there special characteristics? And how do you get this wood? Chopping down trees? How do you do that? With an axe or a saw? How do you make an axe or a saw? Oh, you need a steel axe head. How do you make steel? And so on, and so on. Spoiler alert: By the time you walk down just a couple of steps of production, you find efforts that require thousands of people, each with specialized knowledge the others do not share. It’s an incredibly complex endeavor, and the amount of economic and physical coordination required to make pencils is astounding.
The reality of complex systems is the reason conservatives oppose central planning. Hayek knew this, and it formed the core of his arguments against an overweening state, the supposed superiority of macroeconomic modeling, and decision-making by central authorities.
This failure to see hidden complexity is not limited to politicians and economists. Most engineering projects that run over budget, or that fail completely, do so because of a failure to take into account hidden complexity lurking in the details. Software engineering has moved away from top-down design and toward bottom-up, iterative development cycles precisely because it better matches the real world. The largest, most carefully thought-out architecture developed by people in the head office generally doesn’t survive contact with the real world, which is why that type of development isn’t done much any more.
Now back to the truck driver. Can a robot drive a truck? Maybe, on a well-documented road, and under unexceptional circumstances. From the high-level view, that answers the question. But if you ask a truck driver what he does, you might find that he also loads and unloads cargo. And if you dig into that activity, you might find that he needs to rely on years of experience to know how to do that safely and efficiently with the load properly balanced and secured. He may be required to act as an agent for the company, collecting payment and verifying that the shipment matches the manifest. The truck driver is also the early warning system for vehicle problems. He has the knowledge and judgment to be able to tell if something is wrong. A rattling sound on a road full of debris might not be a problem. The same rattle heard on a smooth road? Might be a problem.
The truck driver is the coordinator of on-road repairs. His presence protects the cargo from theft or tampering. He deals with many different end-customers, many of whom are still using old-fashioned paper manifests and invoices a computer can’t deal with. He may use his judgment to determine if a check can be accepted for delivery. Each customer’s loading dock may have hazards and unique maneuvering difficulties. Then there are the ancillary benefits of human truck drivers – they cement relationships with customers. They spot opportunities. They report traffic accidents or crime to the police. They notice damaged goods in a shipment. Sleeping in the truck protects it from theft.
These are the things off the top of my head, and I’m not a truck driver. I’ll bet if you asked Dave Carter what he does, he could go into much greater detail. And if you asked other people in the chain, they’d have their own set of complexities that are part of the entire work process called “truck driving.”
Robots don’t do complexity well. They are excellent at repetitive tasks, or tasks that can be extremely well defined, and which have a fixed set of parameters and boundary conditions. A robot on an assembly line knows exactly what it has to do, and the list of potential failures (parts out of alignment, defects in materials, etc.) are well known. Even a self-driving robot car needs to know what the road looks like — Google’s cars use pre-mapped road data — and it can’t deal with situations that are very far outside the norm.
We are making strides here, and Google’s robot cars have a surprising amount of autonomous decision-making capability when it comes to things like cars stopping in front of them suddenly and obstacles on the road. But that’s a far cry from the kind of generalized human judgment required in most occupations — which is why the robots won’t be taking over any time soon.
I can’t say the same for the central planners. We seem to be stuck with them.
Published in Culture, General, Science & Technology, Technology
But in the case of self-driving cars they would be planning ahead and accidents would plummet. Every car knows where it is going and the car can communicate that to other cars. So you’d never have a car change lanes unexpectedly. The cars would be working together in advance. And traffic would disappear because cars could plan out the path of least resistance. Stop lights and stop signs would become unnecessary. Cars would know to stop before another car could even be seen.
Machines can plan ahead far better than people.
Yes, there is something special about the human brain. There is 0% chance computers become self-aware because extracting the intelligible species of a form is not a matter of computational power or computer complexity. Big, fat category mistake, John.
The problem with Ray Kurzweil is that he doesn’t understand humans. Humans will never let a computer mind emerge.
Would you invest in a software-CEO who is starting up a social media company?
If there’s a soul that uses the brain then you can’t find it there.
It may be that soul which is frustrated with the imperfections of the body/brain tool is seeking to create a more perfect tool. A tool which frees us from animal distraction.
I’m coaching 6, 7, and 8 year olds for the first time this year. The most striking thing to me that I hadn’t ever considered about the game is the number of mental variables that must be processed. This is incredibly hard for kids this age.
With a runner on first and second, I technically need the third baseman to know that if the ball goes at him or to his right to touch third. But if the ball is to his left, try to tag the runner. If you can’t tag the runner then throw to second. Of course, that’s too much for kids so you just say “If you get it, touch 3rd” and hope for the best.
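Those branching rules really are a small decision procedure. A sketch, with the situation encoded in invented function and field names:

```python
# The third baseman's decision rules with runners on first and second,
# written out as explicit branches (names invented for illustration).

def third_baseman_play(ball_side, can_tag_runner):
    """ball_side: 'at_me', 'right', or 'left'."""
    if ball_side in ("at_me", "right"):
        return "touch third"            # force play at third
    # Ball hit to his left:
    if can_tag_runner:
        return "tag the runner"
    return "throw to second"            # next-best force play

assert third_baseman_play("right", False) == "touch third"
assert third_baseman_play("left", False) == "throw to second"
```

And that’s one fielder, in one base situation, before the ball is even hit.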
Baseball, in addition to its physical elements, is a game of IF-THEN statements that must be processed quickly. There is no doubt robots could do this way better.
But then it wouldn’t be a game.
At the Carnegie Science Center there is a robot exhibit with an interactive basketball game. Humans shoot foul shots against a robot and the score runs all day. Of course, the robot is nearly perfect and humans are terrible. Even the best human in the NBA (Steve Nash) missed 10% of the time.
Think of a robot basketball game. Each robot nearly flawlessly executing the basketball process. But why? How boring that would be.
Why=Human
I rented a Chevy a while ago that had an automatic braking system. I was driving down a street and someone on the right side opened his car door.
The car interpreted this as a possible collision, and slammed on the brakes. It scared the daylights out of everyone who was in the car looking out the windshield, because we’d all seen the same thing and determined it was obviously not a threat, as the door was out of the car’s path of travel.
Collision avoidance is a nice feature, potentially. When they get it to work as well as a human brain…
I presume John’s done the math on this, and it’s a reasonable estimate, based on current trends continuing.
I’m a bit more skeptical, for reasons such as this:
So we can use our most powerful supercomputers to model a brain-dead person, effectively, at 1/2400th speed.
We’ve got a long way to go…
For those convinced that the whole ‘automation will take away jobs’ argument is much ado about nothing, I highly recommend “Who Owns the Future” by Jaron Lanier. While he may be a longhair hippie-dippie freak, he is a serious technologist/scientist and one of the fathers of Virtual Reality. He is NOT some tin foil hat doomer.
Dan,
Thanks for this post. I really enjoyed it. I work for a huge company and two of the points you make resonate. The first is the point in one of your comments about how auto manufacturers use robots because of consistency. I see a strong desire for consistency in my company yet we are not in manufacturing so our leadership tries to achieve it via rigid policies and controls. That brings me to the second point that resonates which is the nature of complex systems and human behavior. No matter how many new policies our leadership comes up with, there are always unexpected results due to feedback from the bottom. Employees adjust their behavior as a result of the new rules. Leadership favors central planning (HQ staff) in an effort to achieve consistency but fails to understand that a company of our size is too complex to successfully control from the top down. I wonder at what point (if ever) they will shift to a more decentralized or bottom-up approach that allows for variation and iteration with the benefit of quicker response to feedback from the bottom?
IMHO, the question of when the robots “take over” is far less relevant than the question of what the effects of the robots “taking over” will actually be.
If the effects of a “robot takeover” are more positive than negative, then the question of when it happens becomes largely moot.
Of course, we cannot predict with 100% certainty what the effects would be. There are simply too many variables involved.
Unfortunately, it seems to be the human tendency to predict the worst effects whenever information about the future is less than perfect.
This may be because preparing for the worst gave homo sapiens sapiens an evolutionary advantage over more easygoing species, or because instilling fear about the future is a great way for tyrants to reinforce their own power, or some combination of the two.
Hmmm – I don’t know. I agree with a lot of what you say here, but… I’m running 27 300-ton autonomous haul trucks in an (admittedly well-defined and constrained) mine. And there are problems, sure – but when they hum, they break production records every day.
It only looks like “short order” in historical hindsight. At the time, it looked like an entire generation of mass social disruption. And contrary to the fable that all those dispossessed farmers took factory jobs, most of the transition was generational, i.e. their kids took factory jobs, while they wasted away in quiet despair. It’s no coincidence that Temperance movements arose during the Second Industrial Revolution. It was a response to truly unconscionable numbers of people drinking themselves to death.
I’m in wholehearted agreement that automation is a wonderful thing, and that it will continue to make life better for most people. But let’s not pretend the transition is always easy, particularly for the individuals directly affected.
You worry that preemptive concern over automation’s effect on employment will play into the hands of central planners, and it’s a legitimate concern. But there will continue to be labor force disruptions (as you rightly point out have always been with us), and pointing to a nebulous aggregate benefit will be no comfort to those individuals left behind. Regardless of the actual scope of the labor force disruption, I guarantee you the central planners will be able to find individuals whose poignant stories will play well in campaign ads. If we continue failing to demonstrate our compassion for the economically dispossessed, failing to articulate how our policies will better enable affected individuals to prosper through the creative destruction, we will continue to foster an ever-larger constituency for those who promise to “help”.
I believe that the potential exists for humans to allow automation to be a wonderful thing, but that automation itself does not guarantee that outcome.
For one thing, the savings accrued by automation must be allowed to become reflected in the actual price of goods.
If governments continue to assume that deflation is always bad then automation will be seen as a force for evil, because consumers will not see any benefit from the savings that automation provides since the savings will be eaten up by inflation.
That’s simply one way that governments can negate the benefits of automation.
This is exactly what a robot who is about to take over would say.
For all of my differences with John on issues surrounding machine intelligence, I see no good reason to believe that there are labor tasks that cannot eventually be automated.
I can imagine a good reason, though it’s more a philosophical problem of adequately defining words like “automation” and “labour” than it is a technological barrier.
One can imagine that in order to create automatons that can perform every form of human labour, those automatons would have to be so advanced that they no longer truly qualify as “automatons” in the first place.
At that point, the tasks will have been “automated” only in the sense that 19th century plantations were “automated”.
I am skeptical that this is, in fact, technologically possible.
I could wish that every media personality who opines about the coming robot revolution spend a year working in a modern factory.
That said, if you’re not learning how to program, I don’t know why not.
Here’s an interesting case study on automated transportation systems and safety:
Blood on the tracks
I have noticed that people, especially clever people, tend to have great ideas about how easy other people’s jobs are, as in, “Why can’t they just do x, y, and z?”
My response is, “They may well be able to do x, y, and z, but they can’t just do x, y, and z. If you’re using the word ‘just’ to describe someone else’s job, you don’t really know what they do.”
This is illustrated by this great xkcd comic:
This is the same example I use when explaining to my friends why I think autonomous cars are nowhere near hitting the market. For software to deal with an unexpected situation, it has to be programmed in. Which means the following chain of events had to occur:
Now think of the limitless number of possibilities when it comes to driving. Tire debris, downed tree branches, piles of leaves, open manholes, poorly painted or duplicated lines, scuffed road, mixed pavement, rain, flooding, ice, sleet, snow, pedestrians, cyclists, motorcyclists splitting lanes, construction cones, traffic cops, jaywalkers, shadows, reflections, access gates….
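To make the brittleness concrete, here’s a sketch of why hand-coded logic can’t keep up. The scenario names and responses are invented for illustration, and real systems use learned models rather than lookup tables, but the coverage problem is the same: anything nobody anticipated falls through to a generic default.

```python
# Hand-coded handlers exist only for scenarios someone anticipated;
# everything else hits the fallback (names invented for illustration).

HANDLERS = {
    "car_stopped_ahead": "brake and stop",
    "tire_debris": "change lanes if clear",
    "construction_cones": "follow detour markings",
}

def respond(scenario):
    # An unrecognized scenario gets a generic, possibly wrong, default.
    return HANDLERS.get(scenario, "slow down and request human input")

print(respond("tire_debris"))        # a case someone thought of
print(respond("traffic_cop_waving")) # a case nobody programmed in
```

The table can always be extended, but the space of real-world situations can’t be enumerated in advance; the fallback is doing an enormous amount of hidden work.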
Are those scenarios so outrageous that cars can’t learn them? Particularly if many cars are sharing their learning over time.
Autonomous cars may well reduce the accident rate of the one type of accident, namely the kind caused by driver error. But they may introduce an entirely new category of potentially deadly problems caused by common failure modes such as network outages or unexpected lighting or weather conditions which cause all cars in an area to behave in a problematic way. Or maybe the system grinds to a halt.
I should have been more specific. I’m talking about completely self-driving cars, like the kind with no steering wheel. My point was supposed to be that humans will always be required to deal with unanticipated, unmodeled, or otherwise unhandled-by-the-software situations. But I ran out of words and then never came back around to finish up.
John, without going too far off topic here, I am not surprised to read this. One of the oldest and simplest forms of intercept guidance is proportional navigation, or PN. I found a paper discussing the fielder theory you named above, and the author actually describes PN guidance, apparently without knowing it, for the horizontal component of the problem:
That last sentence describes the basic function of a large number of guided missiles.
Sorry I’ve been away from this thread for a couple of days. Very busy time here. I’ve got some respondin’ to do…
Z in MT said:
I know this was meant to be a bit of a joke, but there is a lot of truth here. As we expand more into the digital world, I think it’s entirely possible that our individual material needs will decline. In fact, I think that may be happening already. I don’t think economists have a good handle on how to calculate GDP, employment, and consumption in the digital era.
When I was a kid, if I wanted to see a movie my friends and I would get on our bikes or the bus and go to a movie theater and buy tickets. All of this required real goods and real money and was tracked in the economy.
Five years ago, if kids wanted to see a movie they might watch it on Netflix or download it. Fewer real goods involved, but they were still watching movies that cost millions of dollars to make.
Today, those same kids might be entertained by watching a videogame play-through on YouTube – content produced by their own cohort for their own consumption – no dollars changing hands, no GDP-modifying economic transactions, no theater buildings, etc. So to an economist it would look like the entertainment economy is shrinking, yet to the actual consumers there is as much or more entertainment available than there ever was.
How much ‘economic activity’ will be going on in the virtual world five years from now? What happens to the tourism industry when we can visit any spot in the world using VR glasses and feel like we’re really there? What happens to sports stadiums when a tiny HD camera on a gimbal can be placed on the 50 yard line and you can watch the game with VR goggles with head tracking and have the best seats in the house without having to leave home?
What happens to real estate when the only thing needed to be totally entertained, educated, and productive is a 10 × 10 room and a set of VR goggles? The reasons people wanted larger houses – more recreation space, more private space, more working space, hobby space – go away if most of your life is spent online.
There are radical changes happening, and many of them have the effect of taking the real economy ‘underground’ and making it opaque to economists. I don’t know how large this effect is now, but it can only get larger over time.
Casey said:
If you get all the cars connected together and automated, you will again run into complex systems issues. For example, there’s good evidence that the ‘flash crashes’ and strange fluctuations we’re seeing in the stock market today are the result of complexity and network effects from millions of connected, automated trading algorithms.
As I said before, in complex systems the overall behavior is determined by the interactions between nodes more than by the behavior of the individual nodes themselves. This behavior is emergent and unpredictable. So I’m not as sanguine as you are that we can have millions of cars running around autonomously and communicating through networks without a lot of very strange and potentially bad things happening.
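Here’s a toy illustration of that emergent-behavior point: a population of identical trend-following trading bots, each harmless on its own, that collectively amplify a tiny dip into a slide. All parameters are invented for illustration:

```python
# Toy model of interacting automated traders: each agent follows the same
# trend rule, so their combined orders amplify any move in the same
# direction (all parameters invented for illustration).

N_AGENTS = 1000

def step(price, prev_price):
    trend = price - prev_price
    # Every agent buys into an uptrend and sells into a downtrend.
    net_orders = N_AGENTS * (1 if trend > 0 else -1 if trend < 0 else 0)
    return price + 1.5e-3 * net_orders * abs(trend)

prices = [100.0, 99.9]          # a tiny initial 0.1% dip
for _ in range(10):
    prices.append(step(prices[-1], prices[-2]))

# Identical feedback rules snowball the dip: each step's move is 1.5x
# the last, so ten steps later the price has collapsed by about 17%.
print(round(prices[-1], 1))
```

No single bot is misbehaving; the crash lives entirely in the interactions, which is exactly what makes it invisible to anyone analyzing one algorithm at a time.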
Many of the benefits you outline here can be done today without self-driving. GPS navigation coupled with congestion pricing could reduce traffic jams by a huge amount today. Cars with collision warning and auto-braking can reduce accidents without taking the human out of the mix.
Also, I would think about the different psychological effects of different types of accidents. One of the reasons we can tolerate 30,000 deaths per year in auto accidents is the feeling that we are in control of our own destiny behind the wheel – that we won’t make the same kinds of mistakes that killed the other people.
But if you take that away and make us totally dependent on an automated system, and therefore accidents are completely random and out of our control, I think you’ll find that it won’t take too many such accidents before people refuse to allow an autonomous machine to drive them around. But that’s a social effect and I could be completely wrong. Those are the kinds of complex responses that can change everything but which are completely unpredictable.
Fascinating comment. Office space as well… more working from home means less office space required, less driving, even closets can be smaller since you won’t need so many “work outfits”.
John Penfold said:
This is changing. Complexity Economics is a rapidly growing field. Smarter corporations are starting to learn the lessons of complexity and push decision-making down to the lower levels in the company. They are flattening their hierarchies and creating semi-autonomous divisions.
Of course, there will be tremendous pushback from control freaks and statists who believe they should be running things, and from people who are uncomfortable with an unknowable future and demand that predictions be made regardless of whether they are going to be even remotely accurate. For example, we still listen to the predictions of macroeconomists, stock gurus and futurists, even though it’s been shown repeatedly that their predictions are no better than throwing darts at a dartboard. Check out the accuracy of the CBO’s GDP predictions sometime. If you go more than a year out, they are no better than a simple model which regresses from the current GDP to the historical average.
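For what it’s worth, the “simple model” in question — fade this year’s growth back toward the long-run average as the horizon grows — takes only a few lines to write down. The weights and numbers here are my own invented illustration, not the CBO’s or anyone else’s actual benchmark:

```python
# A naive mean-reversion benchmark: forecast future growth as a blend of
# this year's growth and the long-run average, fading toward the average
# as the horizon grows (weights and numbers invented for illustration).

def mean_reversion_forecast(current_growth, historical_avg, years_out,
                            fade=0.5):
    # Each year out, the forecast keeps `fade` of its distance from the mean.
    weight = fade ** years_out
    return weight * current_growth + (1 - weight) * historical_avg

# With 4% growth now and a 2% long-run average:
print(round(mean_reversion_forecast(4.0, 2.0, 1), 2))  # 3.0
print(round(mean_reversion_forecast(4.0, 2.0, 5), 2))  # 2.06
```

That a forecast this crude holds its own past a one-year horizon says less about the model than about how little predictive grip anyone has on a complex system.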
So this is going to take some time. But the reality of complex systems cannot be denied – only ignored until something bad happens.
Exactly right!