I, Robot, Am Not Taking Over any Time Soon

 

In “Conservatives are Too Quick to Dismiss the Rise of the Robots,” James Pethokoukis worries that whereas in the past, technology has given rise to new jobs to replace those lost to innovation, this time it may be different.

James provides us with an excellent specimen of the kind of thinking that constantly causes macroeconomists, politicians, and other self-styled high-level thinkers to make serious errors when analyzing changes to economies and human societies. I’m not picking on James, who’s an otherwise excellent analyst, but on this error, which is so common that it really needs to be discussed.

This phrase, in particular, jumped out at me:

Just think about the progress made in autonomous vehicles and the fact that the most common job in most states is that of truck driver.

His statement displays a top-down approach to analyzing the problem. Truck drivers drive vehicles. Soon, vehicles will be autonomous. So, no more truck drivers. But this kind of analysis doesn’t reflect the true complexity of truck driving.

And this isn’t just about truck drivers. To explain the problem and why it matters so much, I need to digress. The economy is a complex system. Society itself is a complex system. Workers in a society have to be productive within this complexity. These systems are very different from, say, a complicated machine. A complicated machine can be understood in a reductionist way: Take apart a motor; understand the various parts and what they do; and you can understand what the motor does. If you have full understanding of the motor, you can treat it like a black box, with inputs and outputs, and ignore the complicated workings inside. Engineers and scientists use this type of analysis to break down complicated problems and organize them into simpler ones.

Complex systems turn this upside down. A complex system looks simple on the surface, but becomes increasingly complex as you drill down. Complex systems are more than just a collection of parts – their behavior is governed not just by what each part does, but by the interactions among the parts. For example, you can’t understand a brain just by learning what neurons do. You must also — at least — understand the web of billions of neurons and the interactions among them.

The other problem with analyzing complex systems from the top down is that these systems function by means of feedback from the bottom, which causes constant iteration. If the price of steel goes up, that information changes the behavior of steel producers and consumers. That, in turn, causes its price to change again. The new price may create more consumers or more producers, or cause manufacturers to substitute other materials, which in turn causes the prices of those materials to change, and so on, ad infinitum.
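
To make that iteration concrete, here is a minimal sketch in Python. The demand and supply coefficients are invented purely for illustration; a real market contains thousands of these loops interacting at once.

```python
# Toy price-feedback loop: price moves in response to excess demand,
# and the new price changes demand and supply on the next pass.
# All coefficients here are invented purely for illustration.

def simulate_price(price, steps=20, adjustment=0.1):
    for _ in range(steps):
        demand = 100 - 2.0 * price   # buyers pull back as the price rises
        supply = 20 + 1.5 * price    # producers expand as the price rises
        price += adjustment * (demand - supply)  # feedback from the bottom
    return round(price, 2)

print(simulate_price(10.0))  # drifts toward the price where supply meets demand
print(simulate_price(40.0))  # a different start drifts toward the same neighborhood
```

Even this toy settles down only because its coefficients are frozen. In a real economy the coefficients themselves are set by people reacting to the same price signal, which is part of what makes the full system unpredictable.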

You can think of such systems as a kind of self-programming computer: They constantly take in data, process it, and change their output in response. This process of feedback and constant change makes these systems very sensitive to initial conditions; as a result, seen from on high, they are opaque and behave unpredictably. Hence the most common word in a macroeconomist’s vocabulary is apt to be “unexpectedly.”
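
A standard textbook illustration of that sensitivity (not from this essay, but the classic example) is the logistic map, in which two starting values differing by one part in a million end up nowhere near each other after a few dozen feedback cycles:

```python
# Logistic map: each output is fed back in as the next input.
# With r in the chaotic range, nearby starting points diverge completely.

def iterate(x, r=3.9, steps=60):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(iterate(0.500000))  # some value between 0 and 1
print(iterate(0.500001))  # typically lands far from the first result
```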

Conservatives tend to understand this, because the thinkers we tend to read and follow understood it. Adam Smith’s phrase, “the invisible hand,” suggests how well he understood the way emergent properties drive complex systems. Hayek’s opposition to “scientism” and the pretense of knowledge was an evocation of complex systems theory. In fact, Hayek is considered one of the early contributors to that field.

Statists believe that the economy and society can be treated like that complicated machine. If they can find the levers that control the economy, the smart people at the top can push and pull on them and steer the ship of state. Social scientists want to be mathematical and scientific, just like the engineers and physicists, so they go through contortions to create models decorated with a few formulas into which numbers can be plugged. They use these to justify applying “scientific” techniques to managing people and the interactions between them. This is what Hayek called “scientism,” not science.

When macroeconomists reduce the economy to aggregate variables like GDP, employment, capital, inflation, or the consumer price index, they’re abstracting away everything that really matters in an economy in favor of a few numbers that are amenable to mathematical modeling. These numbers may indeed be useful when trying to understand the state of an economy, but the variables can’t be tweaked by central planners in sure confidence that the outcomes will be predictable. Attempts to do so lead to unintended consequences and to the destruction of the feedback forces the system needs to remain healthy.

If you’ve never read it, I highly recommend the classic essay I, Pencil by Leonard Read. It’s a perfect description of the way complex systems deceive people who look at them only from a very high level. If you ask someone how hard it is to build a pencil, they might think about it and say, “Oh, not hard. You need a wooden dowel, a hole drilled in it, and some lead or graphite to fill the hole. Glue it in, and you’re done.” But as Read’s pencil replied in the first person, “Simple? Yet, not a single person on the face of this earth knows how to make me.”

The essay drills down into the construction of the pencil. You need some wood. Fine. Where do you get it? Will any wood do? Or are there special characteristics? And how do you get this wood? Chopping down trees? How do you do that? With an axe or a saw? How do you make an axe or a saw? Oh, you need a steel axe head. How do you make steel?  And so on, and so on. Spoiler alert: By the time you walk down just a couple of steps of production, you find efforts that require thousands of people, each with specialized knowledge the others do not share. It’s an incredibly complex endeavor, and the amount of economic and physical coordination required to make pencils is astounding.

The reality of complex systems is the reason conservatives oppose central planning. Hayek knew this, and it formed the core of his arguments against an overweening state, the supposed superiority of macroeconomic modeling, and decision-making by central authorities.

This failure to see hidden complexity is not limited to politicians and economists. Most engineering projects that run over budget, or that fail completely, do so because of a failure to take into account hidden complexity lurking in the details. Software engineering has moved away from top-down design and toward bottom-up, iterative development cycles precisely because it better matches the real world. The largest, most carefully thought-out architecture developed by people in the head office generally doesn’t survive contact with the real world, which is why that type of development isn’t done much any more.

Now back to the truck driver. Can a robot drive a truck? Maybe, on a well-documented road and under unexceptional circumstances. From the high-level view, that answers the question. But if you ask a truck driver what he does, you might find that he also loads and unloads cargo. And if you dig into that activity, you might find that he relies on years of experience to know how to do that safely and efficiently, with the load properly balanced and secured. He may be required to act as an agent for the company, collecting payment and verifying that the shipment matches the manifest. The truck driver is also the early warning system for vehicle problems. He has the knowledge and judgment to be able to tell if something is wrong. A rattling sound on a road full of debris might not be a problem. The same rattle heard on a smooth road? Might be a problem.

The truck driver is the coordinator of on-road repairs. His presence protects the cargo from theft or tampering. He deals with many different end-customers, many of whom are still using old-fashioned paper manifests and invoices a computer can’t deal with. He may use his judgment to determine if a check can be accepted for delivery. Each customer’s loading dock may have hazards and unique maneuvering difficulties. Then there are the ancillary benefits of human truck drivers – they cement relationships with customers. They spot opportunities. They report traffic accidents or crime to the police. They notice damaged goods in a shipment. Sleeping in the truck protects it from theft.

These are the things off the top of my head, and I’m not a truck driver. I’ll bet if you asked Dave Carter what he does, he could go into much greater detail.  And if you asked other people in the chain, they’d have their own set of complexities that are part of the entire work process called “truck driving.”

Robots don’t do complexity well. They are excellent at repetitive tasks, or tasks that can be extremely well defined and have a fixed set of parameters and boundary conditions. A robot on an assembly line knows exactly what it has to do, and the list of potential failures (parts out of alignment, defects in materials, etc.) is well known. Even a self-driving robot car needs to know what the road looks like — Google’s cars use pre-mapped road data — and it can’t deal with situations that are very far outside the norm.
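
If it helps to see the contrast in code, here is a hypothetical sketch of the assembly-station logic described above; every tolerance and failure mode is enumerated in advance, and anything outside that list has exactly one answer:

```python
# Hypothetical assembly-station logic: fixed parameters, fixed boundary
# conditions, and a pre-enumerated list of failure modes.

KNOWN_FAULTS = {
    "part_misaligned": "re-seat the part and retry",
    "material_defect": "reject the part",
}

def station_cycle(part_width_mm, fault=None):
    if fault is not None:
        # Only failures someone thought of in advance get an automated response.
        return KNOWN_FAULTS.get(fault, "halt the line and page a human")
    if 49.8 <= part_width_mm <= 50.2:  # boundary conditions fixed in advance
        return "weld and pass downstream"
    return "reject the part"

print(station_cycle(50.0))                                       # the happy path
print(station_cycle(50.0, fault="odd rattle on a smooth road"))  # not on the list
```

The truck driver’s day is full of inputs from outside the list; the assembly robot’s day, by design, contains almost none.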

We are making strides here, and Google’s robot cars have a surprising amount of autonomous decision-making capability when it comes to things like cars stopping suddenly in front of them and obstacles on the road. But that’s a far cry from the kind of generalized human judgment required in most occupations — which is why the robots won’t be taking over any time soon.

I can’t say the same for the central planners. We seem to be stuck with them.


There are 71 comments.

  1. Casey Inactive
    Casey
    @Casey

    But in the case of self-driving cars they would be planning ahead and accidents would plummet. Every car knows where it is going and the car can communicate that to other cars. So you’d never have a car change lanes unexpectedly. The cars would be working together in advance. And traffic would disappear because cars could plan out the path of least resistance. Stop lights and stop signs would become unnecessary. Cars would know to stop before another car could even be seen.

    Machines can plan ahead far better than people.

    • #31
  2. Robert Lux Inactive
    Robert Lux
    @RobertLux

    anonymous:

    TeamAmerica: Why do I have this feeling that anonymous or Great Ghost of Godel will soon show up to tell us how robots will be able to supersede humans in 10-20 years?

    Dunno.

    I shall again post Ray Kurzweil’s cartoon from How to Create a Mind.

    [Cartoon: “Only a Human…”]

    Unless you believe there is something inherently special about the human brain which cannot be reproduced in a different substrate (which to me seems just silly),

    Yes, there is something special about the human brain. There is 0% chance computers become self-aware because extracting the intelligible species of a form is not a matter of computational power or computer complexity. Big, fat category mistake, John. 

    • #32
  3. Z in MT Member
    Z in MT
    @ZinMT

    The problem with Ray Kurzweil is that he doesn’t understand humans. Humans will never let a computer mind emerge.

    • #33
  4. iWc Coolidge
    iWc
    @iWe

    anonymous: If people want to believe there is something immaterial about the human brain which can never be replicated on a different substrate, fine. But I wouldn’t base a long term investment strategy on that belief

    Would you invest in a software-CEO who is starting up a social media company?

    • #34
  5. Casey Inactive
    Casey
    @Casey

    anonymous: Well then, where is it?  Can the philosopher identify any structure in the human brain which is absent from the brains of other primates?

    If there’s a soul that uses the brain then you can’t find it there.

    It may be that the soul, frustrated with the imperfections of the body/brain tool, is seeking to create a more perfect tool.  A tool which frees us from animal distraction.

    • #35
  6. Casey Inactive
    Casey
    @Casey

    anonymous: For example, baseball players do not catch fly balls

    I’m coaching 6, 7, and 8 year olds for the first time this year.  The most striking thing to me that I hadn’t ever considered about the game is the number of mental variables that must be processed.  This is incredibly hard for kids this age.

    With a runner on first and second, I technically need the third baseman to know that if the ball goes at him or to his right, he should touch third; if the ball is to his left, he should try to tag the runner, and if he can’t tag the runner, throw to second.  Of course, that’s too much for kids so you just say “If you get it, touch 3rd” and hope for the best.

    Baseball, in addition to its physical elements, is a game of IF-THEN statements that must be processed quickly.  There is no doubt robots could do this way better.

    But then it wouldn’t be a game.

    At the Carnegie Science Center there is a robot exhibit with an interactive basketball game.  Humans shoot foul shots against a robot and the score runs all day.  Of course, the robot is nearly perfect and humans are terrible.  Even the best human in the NBA (Steve Nash) missed 10% of the time.

    Think of a robot basketball game.  Each robot nearly flawlessly executing the basketball process.  But why?  How boring that would be.

    Why=Human

    • #36
  7. Tuck Inactive
    Tuck
    @Tuck

    Dan Hanson: …I’ve tried the new lane-keeping features in modern cars. They have an indicator that tells you when the system has enough information to work, and when it doesn’t. And that indicator goes OFF a lot….

    I rented a Chevy a while ago that had an automatic braking system.  I was driving down a street and someone on the right side opened his car door.

    The car interpreted this as a possible collision, and slammed on the brakes.  It scared the daylights out of everyone who was in the car looking out the windshield, because we’d all seen the same thing and determined it was obviously not a threat, as the door was out of the car’s path of travel.

    Collision avoidance is a nice feature, potentially.  When they get it to work as well as a human brain…

    • #37
  8. Tuck Inactive
    Tuck
    @Tuck

    anonymous: …Present day computers are vastly less complicated than the connections of the neocortex, and hence they do not exhibit these capabilities, but based upon the current growth curve, they will by around 2030….

    I presume John’s done the math on this, and it’s a reasonable estimate, based on current trends continuing.

    I’m a bit more skeptical, for reasons such as this:

    “…Luckily, the team had access to the fourth fastest supercomputer in the world — the K computer at the Riken research institute in Kobe, Japan.

    “Using the NEST software framework, the team led by Markus Diesmann and Abigail Morrison succeeded in creating an artificial neural network of 1.73 billion nerve cells connected by 10.4 trillion synapses. While impressive, this is only a fraction of the neurons every human brain contains. Scientists believe we all carry 80-100 billion nerve cells, or about as many stars as there are in the Milky Way.

    “Knowing this, it shouldn’t come as a surprise that the researchers were not able to simulate the brain’s activity in real time. It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time….”

    So we can use our most powerful supercomputers to model a brain-dead person, effectively, at 1/2400th speed.
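
    For anyone who wants to check that arithmetic, a quick back-of-envelope using the figures quoted above (86 billion is an assumed midpoint of the 80-100 billion range):

```python
# Back-of-envelope check of the numbers quoted above.
simulated_neurons = 1.73e9   # neurons in the K computer simulation
human_neurons = 86e9         # assumed midpoint of the 80-100 billion estimate
wall_clock_s = 40 * 60       # 40 minutes of supercomputer time...
simulated_s = 1              # ...to produce 1 second of brain activity

print(wall_clock_s / simulated_s)                   # 2400 -> 1/2400th of real time
print(round(simulated_neurons / human_neurons, 3))  # ~0.02, about 2% of a brain
```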

    We’ve got a long way to go…

    • #38
  9. user_1008534 Member
    user_1008534
    @Ekosj

    For those convinced that the whole ‘automation will take away jobs’ argument is much ado about nothing, I highly recommend “Who Owns the Future” by Jaron Lanier. While he may be a longhair hippie-dippie freak, he is a serious technologist/scientist and one of the fathers of Virtual Reality. He is NOT some tin foil hat doomer.

    • #39
  10. Mario the Gator Inactive
    Mario the Gator
    @Pelayo

    Dan,

    Thanks for this post.  I really enjoyed it.  I work for a huge company and two of the points you make resonate.  The first is the point in one of your comments about how auto manufacturers use robots because of consistency.  I see a strong desire for consistency in my company yet we are not in manufacturing so our leadership tries to achieve it via rigid policies and controls.  That brings me to the second point that resonates which is the nature of complex systems and human behavior.  No matter how many new policies our leadership comes up with, there are always unexpected results due to feedback from the bottom.  Employees adjust their behavior as a result of the new rules.  Leadership favors central planning (HQ staff) in an effort to achieve consistency but fails to understand that a company of our size is too complex to successfully control from the top down.  I wonder at what point (if ever) they will shift to a more decentralized or bottom-up approach that allows for variation and iteration with the benefit of quicker response to feedback from the bottom?

    • #40
  11. Misthiocracy Member
    Misthiocracy
    @Misthiocracy

    Dan Hanson: I, Robot, Am Not Taking Over any Time Soon

    IMHO, the question of when the robots “take over” is far less relevant than the question of what the effects of the robots “taking over” will actually be.

    If the effects of a “robot takeover” are more positive than negative, then the question of when it happens becomes largely moot.

    Of course, we cannot predict with 100% certainty what the effects would be. There are simply too many variables involved.

    Unfortunately, it seems to be the human tendency to predict the worst effects whenever information about the future is less than perfect.

    This may be because preparing for the worst has provided homo sapiens sapiens with an evolutionary advantage over more easy-going species, or it may also be because instilling fear about the future is a great way for tyrants to reinforce their own power, or maybe a combination of both factors.

    • #41
  12. user_92524 Member
    user_92524
    @TonyMartyr

    Hmmm – I don’t know. I agree with a lot of what you say here, but… I’m running 27 300-ton autonomous haul trucks in an (admittedly well-defined and constrained) mine. And there are problems, sure – but when they hum, they break production records every day.

    • #42
  13. Bi-Coloured-Python-Rock-Snake Member
    Bi-Coloured-Python-Rock-Snake
    @EvanMeyer

    Dan Hanson: There will definitely continue to be job losses due to computerization and automation. But there’s no evidence to believe that the future will be fundamentally different than the past in this regard. The mechanization of agriculture destroyed tens of millions of jobs and caused society to radically restructure.

    However, you can also look at it as a force that freed massive amounts of human capital from low-value work and made it available for other things. And the presence of that huge pool of labor and the increase in wealth from mechanization solved that problem in short order.

    It only looks like “short order” in historical hindsight. At the time, it looked like an entire generation of mass social disruption. And contrary to the fable that all those dispossessed farmers took factory jobs, most of the transition was generational, i.e. their kids took factory jobs, while they wasted away in quiet despair. It’s no coincidence that Temperance movements arose during the Second Industrial Revolution. It was a response to truly unconscionable numbers of people drinking themselves to death.

    • #43
  14. Bi-Coloured-Python-Rock-Snake Member
    Bi-Coloured-Python-Rock-Snake
    @EvanMeyer

    I’m in wholehearted agreement that automation is a wonderful thing, and that it will continue to make life better for most people. But let’s not pretend the transition is always easy, particularly for the individuals directly affected.

    You worry that preemptive concern over automation’s effect on employment will play into the hands of central planners, and it’s a legitimate concern. But there will continue to be labor force disruptions (as you rightly point out have always been with us), and pointing to a nebulous aggregate benefit will be no comfort to those individuals left behind. Regardless of the actual scope of the labor force disruption, I guarantee you the central planners will be able to find individuals whose poignant stories will play well in campaign ads. If we continue failing to demonstrate our compassion for the economically dispossessed, failing to articulate how our policies will better enable affected individuals to prosper through the creative destruction, we will continue to foster an ever-larger constituency for those who promise to “help”.

    • #44
  15. Misthiocracy Member
    Misthiocracy
    @Misthiocracy

    Bi-Coloured-Python-Rock-Snake: I’m in wholehearted agreement that automation is a wonderful thing, and that it will continue to make life better for most people.

    I believe that the potential exists for humans to allow automation to be a wonderful thing, but that automation itself does not guarantee that outcome.

    For one thing, the savings accrued by automation must be allowed to become reflected in the actual price of goods.

    If governments continue to assume that deflation is always bad then automation will be seen as a force for evil, because consumers will not see any benefit from the savings that automation provides since the savings will be eaten up by inflation.

    That’s simply one way that governments can negate the benefits of automation.

    • #45
  16. Frank Soto Member
    Frank Soto
    @FrankSoto

    I, Robot, Am Not Taking Over any Time Soon

    This is exactly what a robot who is about to take over would say.

    • #46
  17. Frank Soto Member
    Frank Soto
    @FrankSoto

    For all of my differences with John on issues surrounding machine intelligence, I see no good reason to believe that there are labor tasks that cannot eventually be automated.

    • #47
  18. Misthiocracy Member
    Misthiocracy
    @Misthiocracy

    Frank Soto: For all of my differences with John on issues surrounding machine intelligence, I see no good reason to believe that there are labor tasks that cannot eventually be automated.

    I can imagine a good reason, though it’s more a philosophical problem of adequately defining words like “automation” and “labour” than it is a technological barrier.

    One can imagine that in order to create automatons that can perform every form of human labour, those automatons would have to be so advanced that they no longer truly qualify as “automatons” in the first place.

    At that point, the tasks will have been “automated” only in the sense that 19th century plantations were “automated”.

    I am skeptical that this is, in fact, technologically possible.

    • #48
  19. user_494971 Contributor
    user_494971
    @HankRhody

    I could wish that every media personality who opines about the coming robot revolution spend a year working in a modern factory.

    That said, if you’re not learning how to program, I don’t know why not.

    • #49
  20. user_105642 Member
    user_105642
    @DavidFoster

    Here’s an interesting case study on automated transportation systems and safety:

    Blood on the tracks

    • #50
  21. user_1184 Inactive
    user_1184
    @MarkWilson

    Tuck:

    Dan Hanson: The problem with this thinking is that it does not reflect the true complexity of the job.

    Such is the way of the ivory-tower intellectual. :)

    I have noticed people, especially clever people, tend to have great ideas about how easy other people’s jobs are, as in, “Why can’t they just do x, y, and z?”

    My response is, “They may well be able to do x, y, and z, but they can’t just do x, y, and z.  If you’re using the word ‘just’ to describe someone else’s job, you don’t really know what they do.”

    This is illustrated by this great xkcd comic:

    [xkcd: “Physicists”]

    • #51
  22. user_1184 Inactive
    user_1184
    @MarkWilson

    iWe: And I think that driving down a road that is under construction and has ambiguous signage, obstructions, cones, potholes, etc. is a perfect example of how decision-making cannot be made using any software model, no matter how complex. … Something that cannot be predicted is quite hard to write code to govern. I think.

    This is the same example I use when explaining to my friends why I think autonomous cars are nowhere near hitting the market.  For software to deal with an unexpected situation, it has to be programmed in.  Which means the following chain of events had to occur:

    1. Somebody thought of a scenario
    2. Somebody in charge of funding decided it was worth spending money to address
    3. Somebody figured out how to simulate the nominal scenario
    4. Somebody figured out how the car can identify the scenario
    5. Somebody figured out a strategy for the autonomous car to use to deal with the scenario
    6. Somebody figured out the range of possible variations on the scenario and created a matrix of possible combinations with other variables and simulated them
    7. Somebody figured out how to create a realistic set of live tests that cover the design space
    8. Somebody funded and executed the testing and certified the algorithm

    Now think of the limitless number of possibilities when it comes to driving.  Tire debris, downed tree branches, piles of leaves, open manholes, poorly painted or duplicated lines, scuffed road, mixed pavement, rain, flooding, ice, sleet, snow, pedestrians, cyclists, motorcyclists splitting lanes, construction cones, traffic cops, jaywalkers, shadows, reflections, access gates….
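
    As a rough illustration of why step 6 blows up, here is a toy test matrix; the variables and their values are invented, and a real program would track far more of them:

```python
# Even a handful of independent scenario variables multiplies into a large
# test matrix; real programs track far more variables than this.
from itertools import product

variables = {
    "weather":  ["clear", "rain", "snow", "fog"],
    "lighting": ["day", "dusk", "night", "low sun glare"],
    "markings": ["fresh paint", "scuffed lines", "duplicated lines", "no lines"],
    "obstacle": ["none", "tire debris", "downed branch", "open manhole", "jaywalker"],
    "traffic":  ["light", "heavy", "stopped vehicle", "cop directing traffic"],
}

test_matrix = list(product(*variables.values()))
print(len(test_matrix))  # 4 * 4 * 4 * 5 * 4 = 1280 cases from just five variables
```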

    • #52
  23. Casey Inactive
    Casey
    @Casey

    Are those scenarios so outrageous that cars can’t learn them? Particularly if many cars are sharing their learning over time.

    • #53
  24. user_1184 Inactive
    user_1184
    @MarkWilson

    anonymous: Self-driving vehicles will almost certainly always be involved in accidents, given the uncertainty of the road environment.  But at the point the accident rate is, say, ten times lower than that for human-driven vehicles, most people will, I think, concede it’s good enough to endorse (though, I hope, not mandate) the technology while, at the same time, trying to further reduce the accident rate.

    Autonomous cars may well reduce the rate of one type of accident, namely the kind caused by driver error.  But they may introduce an entirely new category of potentially deadly problems caused by common failure modes such as network outages or unexpected lighting or weather conditions which cause all cars in an area to behave in a problematic way.  Or maybe the system grinds to a halt.

    • #54
  25. user_1184 Inactive
    user_1184
    @MarkWilson

    Casey: Are those scenarios so outrageous that cars can’t learn them? Particularly if many cars are sharing their learning over time.

    I should have been more specific.  I’m talking about completely self-driving cars, like the kind with no steering wheel.  My point was supposed to be that humans will always be required to deal with unanticipated, unmodeled, or otherwise unhandled-by-the-software situations.  But I ran out of words and then never came back around to finish up.

    • #55
  26. user_1184 Inactive
    user_1184
    @MarkWilson

    anonymous: For example, baseball players do not catch fly balls by observing the trajectory of the ball, solving the differential equations for its motion, running to the place the ball will come down, and then waiting for the ball to arrive.  Instead, they apply a heuristic which has been called optical acceleration cancellation which involves running so that the ball moves at an apparent constant speed.

    John, without going too far off topic here, I am not surprised to read this.  One of the oldest and simplest forms of intercept guidance is proportional navigation, or PN.  I found a paper discussing the fielder theory you named above, and the author actually describes PN guidance, apparently without knowing it, for the horizontal component of the problem:

    If the fielder’s lateral velocity is less than the ball’s, dδ/dt becomes increasingly positive throughout the catch (as shown in Figure 2e); if the fielder’s lateral velocity is greater than the ball’s, dδ/dt becomes increasingly negative throughout the catch (as shown in Figure 2a). So a servo that took the rate of horizontal rotation of the tracking system as its input and produced lateral acceleration of the fielder as its output could aid interception by moving the fielder’s lateral velocity toward that of the ball.

    That last sentence describes the basic function of a large number of guided missiles.
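
    For the curious, a stripped-down sketch of that guidance law; the gain and the sample numbers are purely illustrative:

```python
# Proportional navigation in one line: command lateral acceleration in
# proportion to the line-of-sight rotation rate (scaled by closing speed).
def pn_lateral_accel(los_rate_rad_s, closing_speed_m_s, nav_gain=3.0):
    return nav_gain * closing_speed_m_s * los_rate_rad_s

# Drive the line-of-sight rotation toward zero and you are on an
# interception course -- whether you are a missile or an outfielder.
print(pn_lateral_accel(los_rate_rad_s=0.05, closing_speed_m_s=12.0))  # 1.8 m/s^2
```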

    • #56
  27. user_18586 Thatcher
    user_18586
    @DanHanson

    Sorry I’ve been away from this thread for a couple of days.  Very busy time here.  I’ve got some respondin’ to do…

    Z in MT said:

    It is an interesting question. What will everybody do when robots take all our current jobs?

    Cat videos and shoes.

    I know this was meant to be a bit of a joke, but there is a lot of truth here.  As we expand more into the digital world,  I think it’s entirely possible that our individual material needs will decline.  In fact,  I think that may be happening already.   I don’t think economists have a good handle on how to calculate GDP, employment, and consumption in the digital era.

    When I was a kid,  if I wanted to see a movie my friends and I would get on our bikes or the bus and go to a movie theater and buy tickets.  All of this required real goods and real money and was tracked in the economy.

    Five years ago,  if kids wanted to see a movie they might watch it on Netflix or download it.   Fewer real goods involved, but they were still watching movies that cost millions of dollars to make.

    Today,  those same kids might be entertained by watching a play-through videogame on youtube – content produced by their own cohort for their own consumption – no dollars changing hands,  no GDP modifying economic transactions,  no theater buildings,  etc.  So to an economist it would look like the entertainment economy is shrinking,  yet to the actual consumers of it there is as much or more entertainment available than there ever was.

    How much ‘economic activity’ will be going on in the virtual world five years from now?  What happens to the tourism industry when we can visit any spot in the world using VR glasses and feel like we’re really there?   What happens to sports stadiums when a tiny HD camera on a gimbal can be placed on the 50 yard line and you can watch the game with VR goggles with head tracking and have the best seats in the house without having to leave home?

    What happens to real estate when the only thing needed to be totally entertained, educated, and productive is a 10 X 10 room and a set of VR goggles?  The reasons people wanted larger houses – more recreation space, more private space, more working space, hobby space – go away if most of your life is spent online.

    There are radical changes happening,  and many of them have the effect of taking the real economy ‘underground’ and making it opaque to economists.  I don’t know how large this effect is now,  but it can only get larger over time.

    • #57
  28. user_18586 Thatcher
    user_18586
    @DanHanson

    Casey said:

    Driving really isn’t that complex. Particularly if all cars are automated. Accidents will plummet. Cars will be able to know well in advance whether a particular route may be faster. Traffic will be almost non-existent. The whole system would be way more efficient.

    If you get all the cars connected together and automated,  you will again run into complex systems issues.   For example,  there’s good evidence that the ‘flash crashes’ and strange fluctuations we’re seeing in the stock market today are the result of complexity and network effects from millions of connected, automated trading algorithms.

    As I said before,  in complex systems the overall behavior is determined by the interactions between nodes more than by the behavior of the individual nodes themselves.  This behavior is emergent and unpredictable.   So I’m not as sanguine as you are that we can have millions of cars running around autonomously and communicating through networks without a lot of very strange and potentially bad things happening.

    Many of the benefits you outline here can be done today without self-driving.  GPS navigation coupled with congestion pricing could reduce traffic jams by a huge amount today.   Cars with collision warning and auto-braking can reduce accidents without taking the human out of the mix.

    Also,  I would think about the different psychological effects of different types of accidents.  One of the reasons we can tolerate 30,000 deaths per year in auto accidents is the feeling that we are in control of our own destiny behind the wheel – that we won’t make the same kinds of mistakes that killed the other people.

    But if you take that away and make us totally dependent on an automated system,  and therefore accidents are completely random and out of our control,  I think you’ll find that it won’t take too many such accidents before people refuse to allow an autonomous machine to drive them around.  But that’s a social effect and I could be completely wrong.  Those are the kinds of complex responses that can change everything but which are completely unpredictable.

    • #58
  29. Casey Inactive
    Casey
    @Casey

    Dan Hanson: Sorry I’ve been away from this thread for a couple of days. Very busy time here. I’ve got some respondin’ to do…

    Fascinating comment.  Office space as well… more working from home means less office space required, less driving, even closets can be smaller since you won’t need so many “work outfits”.

    • #59
  30. user_18586 Thatcher
    user_18586
    @DanHanson

    John Penfold said:

    The failure to understand complexity is at the heart of our dysfunction.  Macroeconomists, the political class, many corporate managements, almost all intellectuals and, most importantly, the regulatory state in all of its glory act as if the world were mechanical and its future knowable.

    This is changing.  Complexity Economics is a rapidly growing field.  Smarter corporations are starting to learn the lessons of complexity and push decision-making down to the lower levels in the company.  They are flattening their hierarchies and creating semi-autonomous divisions.

    Of course,  there will be tremendous pushback from control freaks and statists who believe they should be running things,  and from people who are uncomfortable with an unknowable future and demand that predictions be made regardless of whether they are going to be even remotely accurate.   For example,  we still listen to the predictions of macroeconomists,  stock gurus and futurists,   even though it’s been shown repeatedly that their predictions are no better than throwing darts at a dartboard.   Check out the accuracy of the CBO’s GDP predictions sometime.   If you go more than a year out,  they are no better than a simple model which regresses from the current GDP to the historical average.
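
    (One hypothetical way to write such a baseline, just to show how low the bar is; the long-run average and the blending weight below are placeholders, not the actual comparison model:)

```python
# A naive mean-reversion baseline: blend the latest growth reading back
# toward the long-run average, more heavily the further out you forecast.
# The 3.0% long-run figure and the 0.5 pull factor are placeholders.

def naive_gdp_forecast(current_growth_pct, years_out, long_run_avg=3.0, pull=0.5):
    weight = pull ** years_out   # weight on current conditions decays each year
    return weight * current_growth_pct + (1 - weight) * long_run_avg

print(naive_gdp_forecast(1.5, years_out=1))  # 2.25
print(naive_gdp_forecast(1.5, years_out=3))  # 2.8125, nearly back at the average
```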

    So this is going to take some time.   But the reality of complex systems cannot be denied – only ignored until something bad happens.

      We will adapt to robotics if allowed.  We will be allowed if we abolish most of the regulatory state, replacing it with the rule of law.  We will fail to adapt, or our adaptation will be unnecessarily painful and long if we try to control the process under the illusion that we can.

    Exactly right!

    • #60