Rodney Brooks

Robots, AI, and other stuff

Unexpected Consequences of Self Driving Cars

rodneybrooks.com/unexpected-consequences-of-self-driving-cars/

Many new technologies have unexpected impacts on the physical or social world in which we live.

When the first IMPs[1] for the fledgling ARPANET were being built starting in 1969 at BBN[2] in Cambridge, MA, I think it safe to say that no one foresaw the devastating impact that the networking technology being developed would have on journalism thirty to fifty years later. Craigslist replaced classified ads in newspapers and took a huge amount of their revenue away, and then Google provided a new service of search for things that one might buy and at the same time delivered ads for those things, taking away much of the rest of advertising revenue from print, radio, and TV, the homes of most traditional journalism. Besides loss of advertising cutting the income stream for journalism, and thus cutting the number of employed journalists, the new avenues for versions of journalism are making it more difficult for traditional print journalists to compete, as John Markoff recently talked about in announcing his retirement from the New York Times.

A way of sharing mainframe computer power between research universities ended up completely disrupting how we get our news, and perhaps even whom we elected as President.

Where might new unexpected upendings of our lives be coming from?

Perhaps the new technology with the biggest buzz right now is self driving cars.

In this post I will explore two possible consequences of having self driving cars, two consequences that I have not seen being discussed, while various car companies, non-traditional players, and startups debate what level of autonomy we might expect in our cars and when. These potential consequences are self driving cars as social outcasts and anti-social behavior of owners.  Both may have tremendous and unexpected influence on the uptake of self driving cars.  Both are more about the social realm than the technical realm, which is perhaps why technologists have not addressed them. And then I’ll finish by dissing, with an all-out flame, a non-technical aspect of self driving cars that has been overdone by technologists and other amateur philosophers. And yes, I am at best an amateur philosopher too. That’s why it is a flame.

But first…

Levels of Autonomy

There is general agreement on defining different levels of autonomy for cars, numbered 0 through 5, although different sources have slightly different specifications for them.  Here are the levels from the autonomous car entry in Wikipedia, which attributes this particular set to the SAE (Society of Automotive Engineers):

  • Level 0: Automated system has no vehicle control, but may issue warnings.
  • Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
  • Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
  • Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks, but must still be prepared to take control when needed.
  • Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
  • Level 5: Other than setting the destination and starting the system, no human intervention is required. The automated system can drive to any location where it is legal to drive and make its own decisions.

Some versions of level 4 specify that there may be geographical restrictions, perhaps to places that have additional external infrastructure installed.
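For readers who think in code, the taxonomy above fits in a few lines.  This is only a sketch of the list as given; the constant names are mine, not the SAE’s:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE autonomy levels, paraphrased from the list above."""
    NO_AUTOMATION = 0  # warnings only
    DRIVER_ASSIST = 1  # ACC, parking assist, lane keeping; driver in control
    PARTIAL = 2        # system steers and brakes; driver must monitor
    CONDITIONAL = 3    # driver may look away, but must take over on request
    HIGH = 4           # no driver attention needed within its design domain
    FULL = 5           # set the destination and the car does the rest

def human_fallback_required(level: SAELevel) -> bool:
    """At levels 0 through 3 a person must be ready to take control."""
    return level <= SAELevel.CONDITIONAL
```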

Today almost all new cars have level 1 autonomy features, and level 2 autonomy is becoming more common in production products. Some manufacturers are releasing software for level 4, though the legality and prudence of doing so right now are open to question.

There is much debate on how to have safe versions of level 2 and level 3 autonomy, as both require a human to jump into the control loop when their attention has been wandering.  The time available for the person to reorient their concentration in order to respond to events in the world is often much shorter than what people really need; at highway speeds a car covers roughly 30 meters every second, so even a few seconds of reorientation is a long distance.  I think most people agree that there might be a natural progression from level 4 to level 5, but there are different opinions on whether going from level 2 to level 3, or, more vociferously, from level 3 to level 4, is a natural progression.  As a result there are advocates for going straight to level 4, and there are many startup companies and non-traditional players (e.g., Google) trying to go directly to level 4 or level 5 autonomy.

The rest of this post is about level 4 and level 5 autonomy.  What are the unexpected social consequences of having cars driving around without a human driver in command or at the ready to be in command?

 

1. Social Outcasts

Suppose you are walking along a twisting narrow country road at night, with no shoulder and thick vegetation right at the edge of the pavement, and with no moon out, and you hear a car approaching.  What do you do?  I know what I would do!  I’d get off the road, climbing into the bushes if necessary, until the car had passed.  Why would I do that?  Because I would have no indication of whether the driver of the car had seen me and was going to avoid hitting me.

We all realize that on dark narrow country roads anonymous cars are first class citizens and we pedestrians are second class.  We willingly give cars the right of way.

But what about in the daytime (or even night time) in an urban area where you live?  There, pedestrians and cars interact all the time.  And much of that interaction is social interaction between the person on the street and the person behind the wheel.  Sometimes it is one-way interaction, but often it is two-way interaction.  Two questions arise.  If self driving cars cannot participate in these interactions how will people feel about these new aliens sharing their space with them?  And in the interests of safety for pedestrians, how much will the performance of self driving cars need to be detuned relative to human driven cars, how will that impact the utility of those cars, and how will it degrade the driving experience of people that have traditional level 1 cars?

Within a few blocks of where I live in Cambridge, MA, are two different microcosms of how people and cars interact.  Other neighborhoods will have other ways of interacting, but the important point is how common interaction is.

The streets for a few blocks around where I live are all residential with triple decker apartment buildings or single or duplex houses on small lots.  The streets are narrow, and many of them are one-way.  There are very few marked pedestrian crossings.  People expect to be able to cross a street at any point, but they know there is give and take between drivers and pedestrians and there are many interactions between cars and people walking.  They do not think that cars are first class citizens and that they are second class.  Cars and people are viewed as equals, unlike on a narrow country road at night.

Cars and people interact in three ways that I have noticed in this area.  First, on the longer through roads the cars travel without stop signs, but there are stop signs on the entering or crossing side streets.  People expect the right of way on the longer streets too, expecting that cars that have stopped on a side street will let them walk in front if they are about to step off the curb.  But people look to the driver for acknowledgement that they have been seen before they step in front of the car.  Second, when people want to cross a street between intersections, or on one of the through streets without stop signs, they wait for a bit of a gap between cars, step out cautiously if one is coming, and confirm that the car is slowing down before committing to be in the middle of the road.  But often they will step off the curb and partially into the road not expecting the very next car to let them go, but the second car to reach them–that second car they do expect to let them cross.  And third, the sidewalks are narrow, and especially when there is snow they can be hard to walk on (residents are responsible for the sidewalk in front of their properties, and can take a while to clear them), so in winter people often walk along the roads, trying to leave room for the cars to go by, but nevertheless expecting the cars to be respectful of them and give them room to walk along the road.

A few blocks further away from where I live is a somewhat different environment, a commercial shopping, bar, and restaurant area (with the upper floors occupied by M.I.T. spin-off startups), known as Central Square[3]. There are marked pedestrian crossings there, and mostly people stick to crossing the roads at those designated places.  Things are a little less civil here, perhaps because more people driving through are not local residents from right around the neighborhood.

People step out tentatively into the marked cross walks and visually check whether oncoming drivers are slowing down, or indicate in some way that they have seen the pedestrian.  During the day it is easy to see into the cars and get an idea of what the driver is paying attention to, and the same is actually true at night as there is enough ambient light around to see into most cars.  Pedestrians and drivers mostly engage in a little social interaction, and any lack of interaction is usually an indicator to the pedestrian that the driver has not seen them.  And when such a driver barrels through the crossing the pedestrians get angry and yell at the car, or even lean their hands out in front of the car to show the driver how angry they are.

Interestingly, many pedestrians reward good behavior by drivers.  Getting on the main street or off of the main street from or onto a small side street can often be tricky for a driver.  There are often so many people on the sidewalks that there is a constant flow of foot traffic crossing the exits or entrances of the side streets.   Drivers have to be patient and ready for a long wait to find a break.  Often pedestrians who have seen how patient a driver is being will voluntarily not step into the cross walk, and either with a head or hand signal indicate to a driver that they should head through the crossing.  And if the driver doesn’t respond they make the signal again–the pedestrian has given the turn to the driver and expects them to take it.

There are big AI perception challenges, just in my neighborhood, to get driverless cars to interact with people as well as driverful cars do. What if level 4 and level 5 autonomy self driving cars are not able to make that leap of fitting in as equals, as current cars do?

Cars will clearly have to be able to perceive people walking along the street, even and especially on a snowy day, and not hit them.  That is just not debatable.  What is debatable is whether the cars will still pass them, or whether they will slowly follow, not risking the pass that a human driver would make.  That slows down the traffic for both the owner of the driverless car and for any human drivers.  The human drivers may get very annoyed with being stuck behind driverless cars.  Driverless cars would then be a nuisance.

In the little side streets, when at a stop sign, cars will have to judge when someone is about to cross in front of them.  But sometimes people are just chatting at the corner, or it is a parent and child waiting for the school bus that pulls up right there.  How long should the driverless car wait?  And might someone bully such cars by feinting that they are about to step off the curb–people don’t try that with human drivers as there will soon be repercussions, but driverless cars doing any percussioning will just not be acceptable.

Since there are no current ways for driverless cars to give social signals to people, beyond inching forward to indicate that they want to go, how will they indicate to a person that they have seen them and that it is safe to cross in front of the car at a stop sign?  Perhaps the cars will instead need to be 100% guaranteed to let people go.  Otherwise, without social interactions, it would be like the case of the dark country road.  In that case driverless cars would have a privileged position compared to cars with human drivers, and to pedestrians.  That is not going to endear them to the residents.  “These damn driverless cars act like they own the road!”  So instead, driverless cars will need to be very wimpy drivers, slowing down traffic for everybody.
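To make the dilemma concrete, here is a deliberately crude sketch of the decision a stop-sign controller faces.  Everything in it is hypothetical (no such perception API exists today); the point is that the whole social negotiation collapses into a single timeout parameter:

```python
import time

def pedestrian_near_curb() -> bool:
    """Placeholder for a hard perception problem: is someone waiting to
    cross, or just chatting on the corner?"""
    return False  # stub

def pedestrian_waved_us_through() -> bool:
    """Placeholder for a harder one: reading a head nod or hand wave."""
    return False  # stub

def wait_at_stop_sign(max_wait_s: float | None) -> None:
    """Wait for pedestrians at a stop sign, then proceed.

    max_wait_s=None is the 100% guaranteed yield: safe, but bullyable,
    and stuck forever behind the parents waiting for the school bus.
    A small max_wait_s means the car eventually asserts itself, which
    is exactly the behavior that will anger the neighborhood.
    """
    start = time.monotonic()
    while pedestrian_near_curb():
        if pedestrian_waved_us_through():
            break  # the pedestrian gave us the turn; take it promptly
        if max_wait_s is not None and time.monotonic() - start > max_wait_s:
            break  # creep forward without any social signal: risky
        time.sleep(0.1)
    # ...proceed through the intersection
```

Set max_wait_s to None and you get the wimpy driver that slows traffic for everybody; set it low and you get the car that “acts like it owns the road”.  No value of the parameter substitutes for the missing social channel.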

At a cross walk in Central Square driverless cars might be stuck for hours. Will people take pity on them as they do on human drivers? To take advantage of this the cars would need to understand the human social signals of being given a turn, but without a reciprocal signal it is going to be confusing to the generous pedestrians, and they may soon decide not to bother being nice to driverless cars at all. That will only make it more frustrating for a human driver stuck behind them, and in Central Square at least, that will quickly lead to big traffic jams. “Damn those driverless cars, they just jam the place up!”

According to this report from the UK, there are predictions that traffic on highways will slow down somewhat, because of timid autonomous systems, until some threshold of autonomous density is reached.  I think the dynamics, once we consider the role of pedestrians, are going to be very different and much more serious.

If self driving cars are not playing by the unwritten rules of how pedestrians and other drivers expect cars to interact, there will be ire directed at someone.  In the case of cars with level 2 or level 3 autonomy there will be a driver in the driver’s seat, and pedestrians will see them, see their concerns being ignored by the person, and direct their ire at that person, most likely the owner or the person temporarily using the car as a service.  If the car is under level 4 or level 5 autonomy it may be totally unoccupied, or have no seating in what would be the driver’s seat, and then the ire will be directed at that class of car.

I see a real danger of contempt arising for cars with level 4 and level 5 autonomy.   It will come from pedestrians and human drivers in urban areas.  And when there is contempt and lack of respect, people will not be shy about expressing that contempt.

At least one manufacturer is afraid that human drivers will bully self driving cars operating with level 2 autonomy, so they are taking care that in their level 3 real-world trials the cars look identical to conventional models, so that other drivers will not cut them off and take advantage of the heightened safety levels that lead autonomous vehicles to drive more cautiously.

2. Anti-social Behavior of Owners

The flip side of autonomous cars not understanding social mores well enough is owners of self driving cars using them as a shield to be anti-social themselves.

Up from Central Square towards Harvard Square is a stretch of Massachusetts Avenue that is mixed residential and commercial, with metered parking.  A few weeks ago I needed to stop at the UPS store there and ship a heavy package.  There were no free parking spots so I soon found myself cruising up and down along about a 100 meter stretch, waiting for one to open up.  The thought occurred to me that if I had had a level 4 or 5 self driving car I could have left it to do that circling, while I dropped into the store.

Such is the root of anti-social behavior. Convenience for the individual, me not having to find a parking spot, versus over-exploitation of the commons, filling the active roadway with virtually parked cars. Without autonomous vehicles, UPS locations that are in places without enough parking shed some of their business to locations that have more extensive parking. That dynamic of self-balancing may change once car owners have an extra agent at their beck and call, the self driving system of their automobiles.

We have seen many groups, including Tesla, talk about the advantage to individuals of having their cars autonomously deal with parking, so from a technical point of view I think this capability is one that is being touted as an advantage of autonomous cars. However, once it interacts with human nature, anti-social behavior can arise.

I think there will be plenty of opportunity for people to take other little short cuts with their autonomous cars. I’m sure the owners will be more creative than I can be, but here are three additional examples.

(1) People will jump out of their car at a Starbucks to run in and pick up their order, knowingly leaving it somewhere that is not a legal parking spot, perhaps blocking others, but knowing that it will take care of getting out of the way if some other car needs to move or get by. That will be fine when there is no such need, but when there is, it will slow everything down just a little. And perhaps the owner will be able to set the tolerance on how uncomfortable things have to get before the car moves (a sketch of such a setting follows these examples). Expect to see lots of annoyed people. And before long grocery store parking lots, especially in a storm, will just be a sea of cars improperly parked, waiting for their owners.

(2) This is one for the two (autonomous) car family. Suppose someone is going to an event in the evening and there is not much parking nearby. And suppose autonomous cars are now always prowling neighborhoods waiting for their owners to summon them, so it takes a while for any particular car to get through the traffic to the pick up location. Then the two car family may resort to a new trick, so that they don’t have to wait quite so long as others for a car to reach the front door at the conclusion of the big social event. They send one of their cars earlier in the day to find the closest parking spot that it can, and it settles in for a long wait. They use their second car to drop them at the event and send it home immediately. When the event is over their first autonomous car is right there waiting for them–the cost to the commons was a parking spot occupied all day by one of their cars.

(3) In various suburban schools that my kids went to when they were young there was a pick up ritual, which I see being repeated today when I drive past a school at the right time. Mothers, mostly, would turn up in their cars just before dismissal time and line up in the order that they arrived, with the line often stretching back beyond the school boundary. When school was over the teachers would come outside with all the kids, the cars would pull up to the pick up point[4], the parents and teachers would cooperate to get the kids into their car seats, and off would go the cars with the kids, one at a time. When the first few families have fully driverless cars, one can imagine them sending their cars to wait in line early, so that their kids get picked up first and brought home. Not only does that mean that other parents would have to invest more of their personal time waiting in order to get their kids earlier, while the self driving car owners do not, but it ends up putting more responsibility on the teachers. Expect to see push back on this practice from the schools. But people will still try it.
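Returning to the owner-set tolerance in example (1): if manufacturers ever expose such a knob it amounts to a one-line policy, which is what makes the anti-social temptation so cheap.  Here is a hypothetical sketch (the class and field names are mine; no manufacturer has announced such a setting):

```python
from dataclasses import dataclass

@dataclass
class DoubleParkPolicy:
    """Hypothetical owner setting from example (1): how long the car may
    inconvenience others before it gives up its illegal spot."""
    blocking_tolerance_s: float = 30.0

    def should_move(self, seconds_blocking_others: float) -> bool:
        # The commons pays for every second the owner dials this up.
        return seconds_blocking_others > self.blocking_tolerance_s

polite = DoubleParkPolicy(blocking_tolerance_s=5.0)
selfish = DoubleParkPolicy(blocking_tolerance_s=600.0)
```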

Early on in the transition to driverless cars, the 1% will have a whole new way to alienate the rest of society. If you don’t think so, take a drive south from San Francisco on 101 in the morning and see the Teslas speeding down the left-most lane.

What This Means

There are currently only fifteen fully driverless train systems in the United States, mostly in airports, and all with at most a handful of miles of track, all of it completely spatially separated from any rights of way for vehicles or pedestrians outside of the systems.  The first large scale driverless mass transit system in the US will be the one under construction in Honolulu at this time, scheduled to be in initial operation in 2020 (though in late 2014 it was scheduled to begin operation in 2017).

There have been designs for larger scale systems to be driverless for almost fifty years–for instance, the San Francisco BART (Bay Area Rapid Transit) trains, first introduced in 1972, had lots of control automation features right at the beginning.  Failures and accidents, however, meant that many manual systems were added and sometimes later removed, sometimes with serious negative impact on the overall efficiency of the system.

The aspirations for driverless train systems most closely correspond to level 4 autonomy for cars, but in very restrictive geographical environments.  Level 5 autonomy for trains would correspond to trains on tracks with level crossings, or street cars that share space with automobiles and pedestrians.  No one is advocating for, or testing, level 5 train autonomy at this moment.

Note also that train navigation is very much simpler than automobile navigation.  There are guide rails!  They physically restrict where the trains can go.  And note further that all train systems are operated by organizations full of specialists.  Individual consumers do not go out and buy trains and use them personally–but that is what we are expecting will happen with individual consumers buying and using self driving cars.

Level 4 autonomy for trains is much easier than level 4 autonomy for cars.  Likewise for level 5.  But we hardly have any level 4 autonomous trains in the US.

Gill Pratt, CEO of Toyota Research Institute[5], said just a few days ago that “none of us in the automobile or IT industries are close to achieving true Level 5 autonomy”.

The preceding two sections talked about two ways in which self driving cars are going to get a bad name for themselves: as social outcasts in situations where there are pedestrians and other drivers, and as enablers of anti-social behavior by their owners. Even ignoring the long tail of technical problems remaining to be solved for level 5 autonomy, to which Pratt refers, I think we are going to see pushback from the public against level 5 and against widespread level 4 autonomy.  This pushback is going to come during trials and early deployments.  It may well be fierce.  People are going to be agitated.

Technically we will be able to make reasonable systems with level 4 autonomy in the not too distant future, but the social issues will mean that the domains of freedom for level 4 autonomous vehicles will be rather restricted.

We’ll see autonomous trucks convoying behind a single human-occupied truck (perhaps itself a level 3 vehicle) in designated lanes on highways. But once off the highway we’ll demand a human in each truck to supervise the off-highway driving.

Just as in airports, where we have had self driving trains for quite a while, we’ll see limited geographic domains where level 4 autonomous cars operate in spaces with no pedestrians and no other human drivers.

For instance, it will not be too long before we’ll have garages where drivers drop off their cars, which then go and park themselves, with only inches on each side, in tightly packed parking areas. Your car will take up much less room than a human-parked car, so there will be an economic incentive to develop these parking garages.

Somewhat later we might see level 4 autonomy for ride hailing services in limited areas of major cities.  The ride will have to begin and terminate within a well defined geographic area where it is already the case that pedestrian and automobile traffic is well separated by fairly strong social norms about the use of walk signals at the corner of every block.  Some areas of San Francisco might work for this.

We might also see level 4 autonomy on some delivery vehicles in dense urban environments.  But they will need to be ultra-deferential to pedestrians, and to stay out of the way during peak commuting periods so as not to clog things up for other cars.  This could happen on a case by case basis in not too many years, but I think it will be a long time before it gets close to being universally deployed as a means of delivery.

We’ll see a slower deployment of level 4 autonomy than techies expect, with a variety of different special cases leading the way.  Level 5 autonomy over large geographical areas is going to be a long time coming.  Eventually it will come, but not as quickly as many expect today.

The futurist Roy Amara was well known for saying: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

That is where we are today.  People are overestimating how quickly level 5 autonomy will come, and even overestimating how widespread level 4 autonomy will be any time soon.  They are seeing the technical possibilities and not seeing the resistance that will come with autonomous agents invading human spaces, be they too rude or overly polite. But things will march on, and at some point every single car will have level 5 autonomy and we’ll no longer let people drive.  Eventually it will creep up on us and we’ll hardly notice[6] when it does happen.

Eventually manual driving will disappear in all but specialized entertainment zones.  But by then we won’t notice.  It is inevitable.  But that day will not be soon.  And the flying cars will be even later.



And now we get to a little flaming:


<flame>

There is a serious question about how safe is safe.  35,000 people in the US are killed in motor vehicle accidents per year, with about 1.25 million worldwide.  Both are horribly large numbers.  Right now all of these deaths involve human drivers.  Over the last 120 years we, the human race, have decided that such high numbers of deaths are acceptable for the usefulness that automobiles provide.

My guess is that we will never see anything close to such high numbers of deaths involving driverless cars.  We just will not find them acceptable, and instead we will delay adopting levels 4 and 5 autonomy, at the cost of more overall lives lost, rather than have autonomous driving systems cause many deaths at all.  Rather than the 35,000 annual deaths in the US that we accept today, the threshold for driverless cars will be a relatively tiny number.  Ten deaths per year may be deemed too many, even though that could be viewed as minus 34,990 deaths–a very significant improvement over the current state of affairs.

It won’t be rational. But that is how it is going to unfold.

Meanwhile, there has been a cottage industry of academics and journalists looking for click bait (remember, their whole business model got disrupted by the Internet–they are truly desperate, and have been driven a little mad), asking questions about whether we will trust our cars to make moral decisions when they are faced with horrible choices.

You can go here to a web site at M.I.T. to see the sorts of moral decisions people are saying that autonomous cars will need to make.  When the brakes suddenly fail should the car swerve to miss a bunch of babies in strollers and instead hit a gaggle of little old ladies?  Which group should the car decide to kill and which to save, and who is responsible for writing the code that makes these life and death decisions?

Here’s a question to ask yourself. How many times when you have been driving have you had to make a forced decision about which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such a decision, do you feel you made the right choice in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?

And that is my point. This is a made up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers, they won’t come up for self driving cars. It is pure mental masturbation dressed up as moral philosophy. You can set up web sites and argue about it all you want. None of that will have any practical impact, nor lead to any practical regulations about what can or can not go into automobiles. The problem is both non-existent and irrelevant.

Nevertheless there is endless hand-wringing and theorizing, in this case at Newsweek, about how this is an oh-so-important problem that must be answered before we entrust our cars to drive autonomously.

No, it is not an important question, and it is not relevant. What is important is to make self driving cars as safe as possible. And handling the large tail of perceptual cases that arise in the real world will be key to that.

Over the years many people have asked me and others whether our robots are “three laws safe”. They are referring to Asimov’s three laws from his science fiction books of the 1950s about humanoid robots.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But those who have actually read Asimov’s books know that Asimov used these laws as a source of plot, where ambiguities led to a plot twist, or where, through a clever set up, conflicts between the laws were introduced. They were a joke!  That has not stopped the press breathlessly picking up on this as an important factor for robots.  Almost as bad is how the press picks up on the Turing test (itself a rhetorical device used by Alan Turing to make a point, not an actual certification of intelligent behavior).  Not that it is all the fault of the press.  There are plenty of academics (and recently Lords, physicists, and billionaires) who have also chosen to draw attention to a supposed barrier to the use of AI–whether machines will be moral.  There is nothing sensible to say on these issues at this time.

As for Asimov’s laws, none of our robots or perception systems can figure out the state of the world well enough, today or in the foreseeable future, to determine when which law applies. And we won’t have cars that can tell nuns from robbers–how about robbers dressed as nuns, all the better when out on a bank robbing spree?

The Newsweek article, somewhat tongue in cheek, suggests:

“To handle these relative preferences, we could equip people with beacons on their cellphones to signal nearby cars that they are a certain type of person (child, elderly, pedestrian, cyclist). Then programmers could instruct their autonomous systems to make decisions based on priorities from surveys or experiments like the Moral Machine.”

Err, yeah.  This is going to work well, as no robber is ever going to choose the nun setting on their phone–I’m sure they will identify themselves as a robber, as they should!

My favorite answer to this general moral dilemma, known as the trolley problem, was given by Nicholas, the two year old son of E. J. Masicampo, who teaches a moral philosophy class. Seen here, dad sets up Nicholas’s wooden train set so that taking one fork will kill one person, and the other fork will kill five. Asked what the train should do, Nicholas moves the singleton to lie on the same track as the other five, then drives his train into all six of them, scatters them all, and declares “oh, oh”!


</flame>

Footnotes

[1] Interface Message Processors. Today they would be referred to as Internet protocol routers.

[2] Bolt, Beranek and Newman in Cambridge, MA, a company that was always known as BBN. As distinct from BBN, the Buckingham Browne and Nichols school in Cambridge, MA — no doubt many employees of BBN sent their kids to school at BBN.

[3] Like all things called “Squares” in Massachusetts, Central Square has absolutely nothing to do with squareness.  It is just a region of Massachusetts Avenue in Cambridge where there is so much commercial activity that there are zero buildings with residential occupancy at ground level.

[4] My kids all went to a private pre-school in Concord, MA, and almost all the parents owned dark blue Volvo 240DL station wagons.  Although our kids could all tell their parents’ car from the others at the grocery store, it just didn’t work at this pre-school.  The kids could never tell when it was their parent rolling up for the next pickup.  That was back when the 1% was a few more percentage points of the population, and not quite as hollowed out as now…

[5] Full disclosure.  I am on the advisory board for the Toyota Research Institute, but this blog post represents only my own thoughts on autonomous driving.

[6] Most people failed to notice that a technology, analog TV, that had been omnipresent for most of their lives was overtaken, and then one day it just disappeared, as it did in the US on June 12, 2009.  Poof!  It was gone.

60 comments on “Unexpected Consequences of Self Driving Cars”

  1. Great post! We’ve been writing about sociability between autonomous vehicles (also Drones) for a few years now and you might find our papers useful:

    1) Applin and Fischer 2015: “Resolving Multiplexed Automotive Communications: Applied Agency and the Social Car”: http://posr.org/w/images/e/e1/Applin_Fischer_Auto_2015.pdf
    Final at IEEE: http://ieeexplore.ieee.org/document/7064858/

    2) Applin, Riener, and Fischer 2015: “Extending Driver-Vehicle Interface Research Into the Mobile Device Commons: Transitioning to (nondriving) passengers and their vehicles” http://ieeexplore.ieee.org/document/7310907/

    3) an earlier version of 1), Applin and Fischer 2012: “Applied Agency: Resolving Multiplexed
    Communication in Automobiles” http://posr.org/w/images/a/a5/Applin_Fischer_AutoUI_2012_DRAFT.pdf Final at http://www.auto-ui.org/12/docs/AutomotiveUI-2012-Adjunct-Proceedings.pdf pg. 159-163/163-167

    1. Thanks, and apologies that I don’t yet know how to merge your typo fix into your original comment.

      These papers are interesting and complementary to what I was talking about. Your comments on the early days of cars invading the street spaces of humans and people’s reactions sounds analogous to what I was theorizing about here. I didn’t consider the sorts of extensions to mobile communications channels that you are working on (except in the flame section on nuns versus robbers). My work with robots over the last 25 years (from the Cog days back at MIT, through toys and vacuums at iRobot, and industrial robots at Rethink Robotics) has been about robots using the same non technological communications channels that humans use socially with each other. I think those channels are where we are most comfortable–both the Amazon Echo and Google Home have done good jobs of exploiting those channels well (visual and aural). [And it couldn’t have happened any earlier as without recent advances in deep learning the quality of speech recognition was just not up to it. That reminds me of an addendum that I might write, about a primitive Echo/Home like system we built around 1999 which really needed social interaction to compensate for much lower and slower rates of speech recognition.]

  2. Thanks. It may be that robots and other IoT type participants (I include drones, autonomous vehicles, and robots here, as well as all others) may need to capture electronically the type of communication that we are able to do through rapid processing of sensing and data but not through replicable human channels. Humans can just sort of *sense* stuff and we don’t really know how we do it. We get a “feeling” not to go somewhere, or that we absolutely need to call someone suddenly, or whatever and our intuition when meeting people is highly sensory but not easily “engineered.” This is why, with our Thing Theory, we discuss agent/agent agent/sensor types of organization and processing as a potential way for these entities to communicate with each other in a social way. See also Applin and Fischer (2013) Thing Theory: Connecting Humans to Location Aware Smart Environments: http://www.dfki.de/LAMDa/2013/accepted/13_ApplinFischer.pdf – for these models, trust of at least one node of the system in a local locale is required.

  3. Great post, Rod. And I heartily endorse your flame on absurd edge cases stirring up BS faux philosophical red herrings that moral choices might hinder autonomous vehicle adoption. I would also add to your general post on human-vehicle interactions that situations that might appear visually and functionally identical often have very different social interaction norms. For example, I faced a significant adjustment period moving from Cambridge, MA to Berkeley, CA. The west coast pedestrians have evolved a much more ascendant social position with respect to vehicles, crossing streets without even checking for cars, disallowing the early left turn and so on. When I first arrived, I was shocked there weren’t more car-pedestrian accidents, but it eventually became clear that drivers also willfully adopt a more subservient and cautious norm wrt pedestrians. So algorithms will need to be able to adapt to local norms that otherwise might appear identical across different regions.

      1. I am guessing there is some give and take between the pedestrians and the cars with drivers in them. The interesting thing will be to see whether pedestrians treat a totally empty car differently, with less respect perhaps.

  4. At Nissan Research Center Silicon Valley, a group of social scientists has been researching this topic for over three years, together with Don Norman’s group at UCSD. I have coined the term “socially acceptable driving” in order to give a name to these social issues we need to understand, before we can even find solutions. One design concept we have introduced and we are now getting ready to test “in the wild” is the “intention indicator.”

    We are organizing an open workshop on this topic, together with the UCSD group. It would be great if we can get roboticists, autonomous car developers, social scientists, philosophers, and ethicists, like yourself and the people replying to your blog post, together to debate these topics.

  5. What a great article Rodney.

    The contempt and loathing that you mention toward self-driving cars will be amplified even further by the class aspects of the conflict. Autonomous cars are likely to be more expensive, and represent larger luxury models. Consequently, passive-aggressive (or just aggressive) actions towards autonomous cars will represent sticking it to: the upper class, technologists in general, libertarian utopianists, technology companies, and the latest OS version forced down our throats.

    How irresistible!

    1. Yes, I agree with many of your points in the longer term. They will get handled. But it may take a while and I think there will be unexpected disruption and arguments over policy in the mean time. It will not be smooth sailing for a while, but eventually we will get to a new normal.

  6. Good article!
    But how about software bugs? Autonomous cars require fairly complex software, and at the same time the consequences of bugs may include loss of life and limb!

    1. I was discussing this recently with a friend who is about to go and start working on driving software for a major car manufacturer. I have no idea how it works in the States, but in the UK and Germany for example, cars over two or three years old are subject to statutory annual roadworthiness checks. We concluded that control software being patched up to date should be part of these.

  7. Another aspect of self driving cars that I have not seen explored enough: driving related offences. High speed chases would not be necessary if a police override to stop a car were possible (which of course entails various privacy concerns depending how it is implemented). DUI would no longer apply if the person in the car is no longer classed as a driver. These have an impact on driver/passenger safety but there are potentially significant benefits for public safety as well as police resourcing. I would be interested in knowing, for example, what percentage of police resources are currently dedicated to those activities that self-drive could eliminate the need for?
    Another area I’d like to see explored more which you mention is parking. Do we still need parking in homes or do we just summon cars as a service? Is parking still needed at malls or do cars take themselves off to a remote location until summoned. These both touch on another point – with self-driving cars are we still going to own cars or just use them as a service when needed? In your example where you leave your car circulating, why not just catch one car to go to pick up your parcel and then summon another when you’re ready to leave? Why do you *need* it to be the same car on the way to/from a location? If car ownership drops what would replace it exactly?
    Do we need speed limits for self-drive cars? Why shouldn’t they dynamically calculate based on current conditions what the optimal speed is for a given situation? Could this enable fast transit between cities along freeways which are effectively speed unlimited?
    A final point: some of the challenges around level 4+ autonomy are because the current infrastructure is not built for self drive cars. As more self drive cars are deployed, this could also change. A simple example might be that roadworks may require markings specifically to enable self drive cars to safely navigate them. Making these changes may be a lot easier/cheaper than improving the technology for cars to be able to handle all driving conditions that humans have to deal with.

  8. This is a good post. I’ve had a lot of similar concerns with all the hype around self-driving vehicles. I’ve never thought about the human/car interaction aspect, but more about train funding, autonomous trains (like in Singapore/London) and the entire problem with software, car modifications and safety.

    I wrote this a few weeks back on those topics.

    http://penguindreams.org/blog/self-driving-cars-will-not-solve-the-transportation-problem/

  9. Seems like a pretty pessimistic piece. I agree that the scenarios presented will require adjustments to policy, parking prices, pedestrians, and drivers. However it just doesn’t seem like a big deal.

    Some examples:
    * blocking roads as humans jump out for coffee- law requiring autonomous cars to make progress within 10 seconds of drop off
    * primo parking spots for school pickup taken by driverless cars – policy to give priority to 2 humans waiting for car (driver and kid) over autonomous (waiting for just kid). Or maybe just a separate area for autonomous pickups since presumably autonomous cars will mix better with autonomous cars than hard to predict humans.
    * unclear human interactions with autonomous cars – feedback from headlights, speakers, wheels, or a few extra LEDs.

    Basically humans are smart, communicate well with body language, and already navigate just fine with vehicles all the way from oblivious teens blasting music, to BART, trains, cows, bicycles, oblivious drivers that seem to prefer minivans, etc.

    Sure some social conventions will change. Maybe wiggling the front tires will replace the “no, you go first” handwave. Maybe a friendly noise. Or blinking one headlight. Pedestrians in my experience aren’t so hyper-aware of drivers’ mind state, except in the highest risk scenarios. Sure some will be amused to cut off autonomous cars with a confidence they wouldn’t have with human drivers. But the autonomous systems are already getting pretty damn good. They already recognize humans, which direction they are moving, and where they are likely to be in a few seconds. Sure group pedestrian behavior and cars waiting for those groups might need a bit of tuning. But I’m confident it will be worked out.

    Sure track-standing bicycles (that move slightly forward and backwards) freaked out the Google car, which was overly polite and stopped each time to let the bike go first. Some tweaks, it’s not a big deal. At least one system already can recognize hand gestures.

    I also think that society, policy makers, and insurance companies will see the current benefits (40% fewer accidents when Tesla turned on Autopilot) and will find future improved systems even more acceptable. Even if large numbers of people die with autonomous drivers, as long as it’s demonstrably fewer deaths than with human drivers.

    So generally autonomous driving is pretty awesome now (40% fewer accidents) and the current level of hardware (Nvidia PX2 + radar + more/better cameras) isn’t even fully enabled yet. I welcome safer roads, more efficient road carrying capacity (even 10% autonomous should help prevent the common traffic waves that cause stop and go traffic), and less pressure for parking. I also think it will encourage significantly more car pooling. After all, if your car is going to pick up one kid from school, why not 2? Or maybe a carful, then send them home for dinner afterwards. Car pooling to work is somewhat inconvenient, and any changes hit the driver unfairly. Not such a big deal if the driver is a computer. It could make a second trip, add a hop at the end, go pick up gas beforehand, etc.

    Seems like a city of 100k people where kids have to get to school, people have to get to work, and people have errands, doctor appointments, and unscheduled travel could have the needs met by significantly less autonomous cars than the current standard. Less cars, less space wasted on cars, less parking lots, less pollution (for production and driving), etc.

    Imagine an airport if each car arriving brought 4 people flying, picked up 4 people that landed, and never parked!

    1. I like those ideas on social signals. Wiggling the tires. Flashing a single headlight. Great!

      I too welcome safer roads, and ultimately that is what autonomous cars will bring us. I just don’t think that rationality will be what reigns for many years. It will be rockier than rational techno enthusiasts (I would guess that both you and I fall into that category) would like to believe.

  10. You may wish to consider the maritime shipping SOLAS laws. Robot shipping is also coming, and will such ships have responsibilities to respond to high seas requests for assistance? What about definitions of Lookout? Could the robot (or its maker) be found at fault if it doesn’t respond? Who is at fault in a collision between a robot and manned ship when the circumstances of the collision were in extremis?

  11. Hi!

    You mention: we did not discuss in 1969 the effects of the internet on journalism. A catchy start, but I will tell you unequivocally that I had those discussions in 1990 when I worked at the first company to put TCP/IP on the Macintosh. Most people thought it was a really good idea, to destroy the idea of the gatekeepers and 4th estate. We certainly saw the fact that newspapers subsidize useful social work ( investigative journalism ) with nuts-and-bolts ( classified ads ) as a peculiarity that didn’t make sense. A distortion. A better system would take its place. The arguments would go “Watergate wouldn’t have been funded!” then “We don’t need Woodward and Bernstein, we need Deep Throat!”. The issue of fragmentation of public discourse came up, but rarely. 1990 was right around when the NCSA Browser was published, and Gopher gave way to hypertext, although a bunch of that work also happened just 2 years earlier. ( Tim Berners-Lee did groundbreaking work, sure, but my university created a GML ( precursor to XML / HTML ) based hyperlink system in 1987. I know because I applied for the job but my classmate Jill got the job. So we were discussing all of this in college, at the same time Markoff was writing about the internet worm.) Anyway, most people involved in the internet 1990-ish thought the impact of the internet ( as we now know it ) would be beneficial to Journalism because it would remove the “filters” and let the people speak, “man”. Most of that has come to pass, as we now have Wikileaks instead of the Washington Post, and the 4th estate will vanish. We didn’t really foresee the post-truth world that we live in, and I don’t yet see the “better thing” that will spring up when journalism is fully un-funded.

    I think some of your thoughts about self driving car unintended consequences are not what I expect. For example, taking up the public commons with “your” car only happens when we continue to own cars. More people, even today, think we’ll stop having car ownership and start just calling Uber-rides ( aka Johny Cab, Total Recall ). This will lead to a devastation in the car industry, and remember 40% of Americans have driving jobs, and the car is the 2nd largest single purchase most people make in their lives. Only 1/10 of the number of cars will need to be built, which will simply zero out the American economy in ways hard to understand now, and probably lead to a revolution of some sort. You can’t simply take out half the economy that exists today, in 10 years, and expect the social contracts we have to survive. Like how in 1990 most people expected the “citizen journalist” to replace the paid journalist — which has not happened to a great enough extent — I believe the economic impact of killing the car and driving economy will not be pleasant.

    But that’s just me. Thank you for the thoughtful writing.

    1. This is a superb piece of analysis. Far too little thought has been given to the social aspects of driverless cars. I have written a similar piece analysing the mistaken assumptions behind the notion that driverless cars are going to come on stream in the near future. http://www.christianwolmar.co.uk/2017/01/the-dangerous-myth-of-the-driverless-future/
      My view is that these sorts of issues make the introduction in an urban context almost impossible. There are numerous other examples of potentially insuperable issues. For example, take the dark road that is mentioned at the beginning of this piece. What if a couple of hoodlums fancied robbing drivers and simply stood in front of the car and stopped it? Or what if a woman was in one of these cars on her own and a potential rapist stopped the car? Or her ex-husband who abused her? And so on.

  12. One more thought about self driving car ethics that I literally discussed over dinner tonight. What if you wear a tee shirt that has a road continuing straight ahead printed on it, and the self driving car thinks you are an open road and doesn’t stop, and hits you? Are you at fault for being an idiot with a confusing tee shirt, or should the car company be required to handle those cases before a Level 5 certification?

    Legally, this probably becomes “man wears deer costume in hunting season”, and I don’t think you can get convicted of manslaughter in a case like that.

    1. Yes, I showed the results of this paper, http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf, to a former DARPA program manager. It shows how to find images that fool a deep learning network trained on real images into giving labels that a human would not give, at least not seriously, even though humans can see the essence of how the network was fooled. He immediately jumped to the idea that adversaries might, during conflict, plaster our cities with posters that look completely innocuous to humans but which are read as road signs by self driving cars and get them to go careening where they shouldn’t oughta careen. All the adversaries would need would be access to the deep learning runtime labeling chips used in a car and the ability to give them arbitrary inputs and read their outputs, so as to search for and find images that completely fool them.


      1. It is interesting research, but a pretty unlikely attack. First of all, real systems will be subject to constant retraining, and these attacks are very dependent on the specific configuration of a given network.

        More to the point, I think most cars will be map driven. People imagine cars that drive without maps, because to a robotics purist, that is the ideal. But in reality, a car that can drive without a map is a car that can build a map, but maps can get QA. Mostly, a car that sees a road sign that is not in its map would be highly surprised. It would immediately trigger an alert back at the control center — a new sign is a rare event, particularly one the city didn’t bother to log before erecting (since I think it will become policy that new traffic signs and rules will have to be logged before they go into place or force.) And in this event, only very basic handling of the sign would take place, and never dangerous actions because a new road sign doesn’t have to be confusing to a network to be trying to screw with you.

        Over time, I expect the real rules of the road to be in the virtual infrastructure layer, and the physical road signs just there as a reminder to humans of what the rules are. (Though a human would be excused if they obeyed the sign and not the virtual layer rules.)

        In a sense it is that way today. I suspect the true speed limit on a road is officially in the road planning database, and the sign just lets us know what it is. Could be wrong on that, though.

      2. Very interesting! It might make sense to apply the logic of deep learning to the moral questions as well. For instance, instead of programming a rule like “if the car must swerve into pedestrians, steer towards old ladies rather than children,” we could construct a deep learning system that trains on millions of hours of driving data and insurance claims, and learns to swerve towards the obstacle whose attributes correlate to the lowest hospital bills. So the moral dilemmas won’t be rule-based, but decisions about what data is important and what measures approximate preferable outcomes.

  13. Great article, thanks.

    > Then the two car family may resort to a new trick

    I already do a version of this. I go to a gym that has severely limited parking and relatively poor public transport home late in the evening. So I park my non-autonomous car at the gym in the morning, go to work and then back to the gym with public transport which is fine during the day, then drive home from the gym later. I feel a tiny bit guilty about doing this, but only a tiny bit.

  14. Thanks for a very thoughtful piece.

    One of my concerns with autonomous vehicles is age-related hardware failures. Humans, driving conventional cars, are remarkably good at accommodating these problems. You can drive a car with worn brakes and a flaky transmission for months before you get it fixed.

    But autonomous vehicles will be critically dependent on circuitry, sensors, actuators (such as steering and braking control units) and the network that interconnects everything. As these systems age, they will become unreliable and fail. Sometimes, it will be possible for the software to anticipate a failure and require that the vehicle be repaired before it can be operated again. Other times, the failure will be sudden.

    A network failure at 75 mph is a particularly terrifying scenario. Software would detect the failure but be unable to do anything about it because, without the network, there’s no way to control safety critical devices. The driver will suddenly become responsible for the vehicle. But they might be napping, and probably haven’t really driven the vehicle in months anyway.

  15. Another interesting, but non-ethical, change will be the lack of traffic violations. That revenue stream will dry up[1], which is going to be a real problem I suspect. The loss will not be offset by a significantly lower police workload either, since traffic violation detection is becoming heavily automated.

    OG.

    [1] That doesn’t mean the driverless cars won’t commit any, speed markings in particular
    being what they are, but that once the responsibility for them shifts to the manufacturer the lobbying/legal counter-attack will kill any county trying a traffic trap.

  16. I’m in total agreement with your first issue; indeed coming from a country that has no jaywalking statutes, and where crossing the road wherever you want is routine, I find social interactions between drivers and pedestrians to be the norm. Without the ability to make eye contact – or indeed the signalling inherent in refusing to make eye contact – I suspect both autonomous vehicles and pedestrians will have to be more tentative.

    I’m less convinced, however, by your second issue. It seems to assume that individual ownership of vehicles will remain the norm after autonomous vehicles become commonplace. If that does remain the case – at least in densely populated areas – then we’re missing out on many of the benefits of autonomous vehicles. But without individual ownership, there can hardly be an issue with enabling anti-social behaviour by the owners …

  17. GREAT article. VERY good. And then you had to flame….

    I do not find the “trolley problem” ridiculous. I agree there is way too much hand-wringing over it, way too much discussion, way too much pontificating. I also think that the best response to the excessive hand-wringing is to engage with it, not make fun of it or arrogantly dismiss it as mental masturbation or idiocy. The problem DOES exist in the real world, even if only very very (repeat as you see fit) rarely. To wit:

    “Evansville school bus unloading students hit by car”

    Kenny Douglass Posted: 11/03/2016 7:42 PM

    VANDERBURGH CO., IN (WFIE) – The Vanderburgh County Sheriff’s Office is investigating a crash involving a school bus. It happened around 3 p.m. Thursday on Old State Road, south of Knollview Drive. Deputies say the driver of a red Pontiac Grand Prix exited a curve and noticed the school bus stopped in the opposing lane when he veered into the front of the bus to avoid hitting the children crossing the street. We’re told there were about 35 students on the bus at the time of the crash. Thankfully, no one was reported hurt on the scene. The crash is under investigation.”

    If you go to the article you will see the driver’s car was very badly damaged, so he did indeed assume significant risk to his own life. Just because no one was hurt does not mean that the driver did not face a “real world” trolley problem here.

    Please, can we engage rationally with this issue, point out that it is virtually meaningless, even trivial, and that we can move past it – but without attacking opponents who think otherwise with ad hominem rants about their intelligence, motivation, or even sanity. One can argue that the very professors who seize on this issue because they are desperate mirror other professors who have jumped onto the pro-AV bandwagon because THEY are desperate.

    I rest my case.



  18. An obvious point I know, but some of these dilemmas are more policy than technology. Prohibiting cars from driving without passengers eliminates the abuse of school pickup systems (which already happens at our kid’s public school in SF: some parents send nannies to get in line an hour before dismissal). A less drastic solution could be that cars can only drive empty when headed home, or to a parking lot. Likewise with allowing EVs in HOV lanes (which ends Jan 1, 2019). Right now the judgement is that encouraging a transition to EVs is more important than equality. Like any subsidy, it should sunset once it has, hopefully, done what it was designed to do.

    Thank you for that 2-year-old’s solution to the trolley problem; it made my day.

  19. Good article. Food for thought. As for “…that is how it is going to unfold”… there will be social issues, though not necessarily the ones you foresee. No one has yet successfully predicted the future, but then again, you might be on to something 😉 And the trolley problem… it is not important in real terms and it will come up only exceptionally, but it has the potential to be very controversial. The same irrational social sentiment at work in your example of “Ten deaths per year may be deemed too much…” will blow the importance of a freakishly rare death caused by a car’s “decision” out of all proportion.

  20. I am sure that when cars were first introduced, it wasn’t the lower rungs of society who had the first Bentleys, Peugeots, etc. The real issue is whether this modality of driving has legs. If so, it will drift down to the entire population.

  21. Hi!
    I did my master’s thesis on the topic of pedestrian-AV interaction.
    Here’s a short video of a simple prototype we made:
    https://www.youtube.com/watch?v=qG9fH2EDa1g
    Also the thesis:
    http://publications.lib.chalmers.se/records/fulltext/238401/238401.pdf
    And here’s a PR-spinoff of the work we did:
    https://www.youtube.com/watch?v=INqWGr4dfnU

    My belief is that it’s not about whether external AV communication is absolutely necessary; it’s more about creating a nicer, and more fluent, traffic environment for all actors (much like turning indicators, brake lights, etc.).
    What we tried to do was to allow pedestrians to see the AV’s current intentions, instead of giving suggestions on how to behave, following a “show, don’t tell” principle.
    A tricky part is to evaluate concepts like this since there are no pedestrians who have adapted their behavior and mental models to fit an AV traffic context yet.

    Regards,
    /Victor

    1. Thanks for the links. Somehow we will have to get to country-wide, and better yet worldwide, standards, so that one set of learned pedestrian reactions to vehicles will apply independent of vehicle manufacturer or geographic location. As someone who regularly drives in countries where vehicles drive on the left and in countries where they drive on the right, I know how cognitively hard it is to always remember which universe I am in, so that my split-second decisions on which way to steer in various circumstances are the right ones.

    2. This is an interesting approach. I am not sure how realistic it is to “create a nicer, and more fluent, traffic environment for all actors”, but the idea that it is possible to show peds your intentions is certainly part of the task.

      Another issue raised by Rodney deals with perception of the situation by the AV (as opposed to the problem you are addressing, which is presentation of the AV’s intent to the ped). While it is obvious that the scenario was staged, in your video you show a couple of the “I am about to yield” indications when peds are well off scene. This relates precisely to one of the points in Rodney’s piece. Specifically, there may be a variety of situations where a ped is slightly off scene, or even partially in scene, and has no intention of walking into the path of the AV. Yet, if the AV cannot perceive this subtle difference in “ped-intention”, it may yield when there is no reason to yield. Imagine a ped standing between parked cars, slightly out in the roadway (perhaps because they are about to get into their parked car), but with their back to the road as they interact with a friend on the sidewalk. Does the AV yield to them? A human driver might slow down, seeking to assess the ped’s intention, but they would be unlikely to stop, because the orientation and actions of the ped indicate that they are not seeking to cross the street. Yet the AV, unless it has some ability to perceive how the ped is standing and perhaps appreciate that he is talking with another ped, will assume that he desires to cross the road, and will stop… thereby causing more traffic, and possibly requiring a human driver to tell it that it is OK to go. More on this below.

  22. Excellent article Rodney. I just stumbled onto this doing some reading on AV policy.

    I think your points are well taken, and also very well stated. In the NHTSA AV Guidance document (https://one.nhtsa.gov/nhtsa/av/pdf/Federal_Automated_Vehicles_Policy.pdf), they define the operational design domain (ODD) as:

    “The ODD should describe the specific operating domain(s) in which the HAV system is designed to properly operate. The defined ODD should include the following information to define HAV systems’ capabilities:
    · Roadway types on which the HAV system is intended to operate safely;
    · Geographic area;
    · Speed range;
    · Environmental conditions in which the HAV will operate (weather, daytime/nighttime, etc.); and
    · Other domain constraints.”

    All of which is fine, but somewhat misses the point. The “other domain constraints” here would seem to include a veritable iceberg-sized array of ped-driver and driver-driver interactions, many of which are based on social norms and assumptions, and are thus not only difficult for an AV to perceive, but may change based on time of day and/or locale and/or the age and social “category” of the humans involved (i.e. peds and/or drivers). One only needs to drive in San Francisco (or perhaps Victor Malmsten Lundgren’s Berkeley!), where cyclists are a protected species, to discover that young adult cyclists may cross lanes without even deigning to look behind them, while slightly more worldly bike messengers will ride even more erratically, but with much greater situational awareness, and older cyclists will ride like they were taught to in 1955… Similar variations exist across locales, and with different types of human drivers. The core issue is that, unlike automated trains on protected rights of way, AVs that expect to operate in mixed traffic environments will somehow need to understand the social norms and expectations of their current locations, and within those norms, then perceive the intentions of the humans. Exactly how AVs deal with this highly variable intention and perception issue will, IMO, be a key factor in leaping the gap between levels 3 and 4.

  23. Sorry for the delay in responding to your post, Rodney, which we discussed in January. Here’s my full reply:

    https://www.forbes.com/sites/patricklin/2017/04/03/robot-cars-and-fake-ethical-dilemmas/

    tl;dr version: If you reject thought experiments for being fake, then you have to reject science experiments which are also fake. But both are useful, because they strip away real-world messiness to isolate the right variables in a controlled, artificial environment.

    1. No, I don’t reject thought experiments. But I do reject silly thought experiments involving situations that cannot possibly be perceived by robots or cars (e.g., Asimov’s third law), along with the claim that those edge cases have to be handled. In your Forbes piece you argue for the need to handle edge cases, but that is in an idealized world where those edge cases can be detected. We could make up edge cases for anti-lock brakes that can’t be detected with current technology, and argue that until we figure out those edge cases we shouldn’t have anti-lock brakes. That would be the wrong decision. My point is that these trolley-problem thought experiments involve situations that will not be perceivable by cars for many decades at least. People imagine how good vehicle perception will be and then push the consequences of that level of perception to the limit. We won’t have that level of perception. So then the answer could be that in that case we shouldn’t have self driving cars (even level 3 autonomy) any time soon. I think that would be a disservice to greater overall safety. The demand for perfect safety should not be the enemy of improved safety.

      1. I suspect you haven’t read the article then, Rodney. Realism doesn’t matter for these experiments; you’re letting that distract you from the real lesson, which is to press on the general operating principles that are also in play in everyday decisions.

        And I don’t know of any experts who argue that edge cases here need to be solved before deploying the technology. This is a straw-man argument. The point is that OEMs need to be prepared to proactively defend their design decisions; they ignore that due diligence at their own risk (and creating risk for all of us).

      2. Patrick, I did read your article. I think you are guilty of setting up unanswerable questions for people writing software and suppliers installing it in cars. They are unanswerable because they are based on false assumptions, but you think there are answers that can be defended. There aren’t and they can’t, so my point is that the answers to unanswerable questions (based on false assumptions) can not possibly matter. That is where we disagree.

        That is not to say that sets of principles should not be used in writing the software. But one needs to realize that those principles will often not be operationalizable. (My spelling checker doesn’t like that word.)

      3. Anti-lock brakes do have their issues – try seeing what happens if you use them to stop a small sports car suddenly to avoid a wayward cat, with a large truck following you at the same speed…

  24. Sorry, you’re still missing the point. Ok, forget about crazy dilemmas for a second. Let’s look at realistic scenarios. Are you saying these questions are unanswerable?

    From my article:

    “For instance, if a self-driving car were navigating through a narrow road with a group of people on the left but only one person on the right, where should it position itself in the lane? Should it treat both sides equally and drive straight down the middle? Or should it give more berth to the group of people and therefore scooch closer to the lone pedestrian? Any of these options seem reasonable, and the underlying issue is the same as in some unrealistic dilemmas: do numbers matter, and how far? Again, realism doesn’t matter.

    If you want realism anyway, there are many other everyday decisions in which technology developers might not recognize they’re doing ethics; for example, how much room should a passing car give to a big-rig truck, compared to a family van or bicyclist or pedestrian? ”

    If you’re an OEM, your decisions here can create or transfer risk to other people, and that is a decision about ethics. The crazy dilemmas are merely pressing on the same question but in a much more dramatic way.

  25. Responding to your first point to my reply, here’s how I described the trolley problem:

    “Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake. Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car. Did your robot car make the right decision?”

    Are you saying autonomous cars cannot perceive this scenario, i.e., if a crash was impending, if a cluster of people were in front of it, or if one person was to its side? There might not be a “right” decision, but that’s not the goal anyway; the goal is to defend your decision thoughtfully.

    1. I think you are overestimating the accuracy of perception systems for cars any time in the next few years. The perceptual error rates make basing algorithms on these distinctions unlikely to be satisfying. And running the experiment multiple times on what humans think are the same conditions will give different results. There is no decision to be made here in this imaginary perceptual world.

  26. Not sure why you don’t think self-driving cars can detect impending crashes, or detect people, or gauge the size of a group/object—these are things they can already do /right now/.

    But let’s look at an easier case, if you like. This is the kind of example Noah Goodall discusses in reframing ethics as risk management: If your robot car is in the center lane of a highway, and there’s a big truck to your right, and a motorcyclist on your left, where should you (as the car’s programmer) position the car in the lane?

    If you say it should give more berth to the truck and inch closer to the motorcyclist, then you seem to be transferring the risk created by the truck to the biker; and decreasing risk of harm to yourself by increasing risk to others seems to be an ethical issue, doesn’t it?

    It reveals something about how you think lives should be prioritized, just as the other weird crash dilemmas do. And these assumptions can come back to haunt you, e.g., in a lawsuit or court of public opinion. It’s one thing if a human driver reflexively does it, and it’s another thing if a car was deliberately programmed to do the same thing—there’s now forethought, intention, premeditation, etc.

    I won’t continue to blow up your blog here. I look forward to possibly having this conversation in person someday, over drinks ideally. Maybe one of our mutual colleagues can make that happen!

    Thanks for the conversation. As I said in the article, it’s a common objection (that these crash dilemmas are so fake), and it’s not an unreasonable one; but whatever allergic reaction is going on to ethics is causing folks to miss the point.

  27. @Patrick Lin

    > If your robot car is in the center lane
    > of a highway, and there’s a big truck to
    > your right, and a motorcyclist on your left,
    > where should you (as the car’s programmer)
    > position the car in the lane?

    Easy: where the probability of a crash is the lowest, considering all side conditions (e.g. possible swaying of the motorbike). There are no (and should be no) ethics involved here at all. (The only obvious alternative would be to accept a *higher* probability of a crash based on some flimsy ethical rule.)
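
    In sketch form (the risk model and all the numbers are invented for illustration, not taken from any real planner):

```python
# Minimal sketch of "position where crash probability is lowest".

def best_lateral_offset(neighbors, offsets=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """In-lane offset in meters (positive = leftward) minimizing summed risk."""
    def crash_risk(offset):
        total = 0.0
        for n in neighbors:
            # Moving left shrinks the gap to a left-side neighbor and widens
            # the gap to a right-side neighbor.
            gap = n["gap"] - offset if n["side"] == "left" else n["gap"] + offset
            # Crude model: risk grows with the neighbor's sway over the gap.
            total += n["sway"] / max(gap, 0.01)
        return total
    return min(offsets, key=crash_risk)

# Swaying motorcycle on the left, steady wide truck on the right:
neighbors = [{"side": "left",  "gap": 1.5, "sway": 0.6},
             {"side": "right", "gap": 1.2, "sway": 0.2}]
print(best_lateral_offset(neighbors))  # -0.25: edges toward the steadier truck
```

    Note that no ethical weighting appears anywhere in it, only crash probability – whether the inputs to such a risk model themselves encode ethics is exactly the disagreement above.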

    There should be a clear precept for a self-driving car, which is to avoid an accident. If this condition cannot be satisfied in a certain situation, then all bets are off anyway, for a number of reasons: lack of reasonable rules, perception, and reaction time being some of them.

    And in fact: current traffic law and jurisprudence do recognize human error, negligence, and intent when it comes to determining the root cause of an accident, but it is surely unheard of for the actions of a driver *after* the point where an accident had become apparently unavoidable to be judged by *any* moral imperative (e.g. to ‘minimize’ human damage).

    Bottom line: these types of moral questions are irrelevant for the status quo of cars with human drivers and they will remain irrelevant for a future self-driving era, imho.

    1. This sounds like a sensible approach to me, and indeed makes the trolley “problem” go away for driverless cars, just as it is not an issue for human drivers.

      Here, though, is one that I think is interesting. It is not a drive-time decision that a driverless car would need to ponder from a philosophical point of view; rather it is a policy decision for the people producing the software, but it has gnarly legal implications versus a horrible pedestrian experience (PX? in the spirit of UX?).

      I noticed the other day, on a very wet rainy day in Cambridge when there were big puddles on the sides of the road, that some nice drivers tend to cross the double yellow lines when they are about to pass a pedestrian on the sidewalk, so that their right side wheels don’t go through the puddles and spray water on the pedestrian. A driverless car could be programmed to do the same, in order not to be a horrible “citizen” towards human pedestrians. But two things: (1) it would mean that the car would be breaking the rules of the road, and (2) it would slightly increase the probability that the car will be involved in an accident with an oncoming car that it does not sense in time.
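
      As a policy it might amount to no more than a guarded rule like this (the predicate names and threshold are invented, and picking the threshold’s value is precisely the gnarly decision):

```python
# Toy version of the puddle-courtesy policy: cross the centerline only when it
# spares a pedestrian a soaking AND oncoming traffic is all but certainly
# absent. The threshold is made up -- choosing it IS the policy decision.

def should_cross_centerline(pedestrian_beside_puddle, oncoming_clear_prob,
                            threshold=0.999):
    return pedestrian_beside_puddle and oncoming_clear_prob >= threshold
```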

  28. >> I noticed the other day on a very wet rainy day in Cambridge
    >> when there are big puddles […]

    You indeed have a keen eye for the subtle net of social interactions involved in today’s driving! And I agree: these will be very hard to completely map onto the abilities of a self-driving AI, if at all.

    There is, I believe, one other level of social relevance that has not been sufficiently considered. While test drives with driverless Google cars shown on YouTube seem to confirm some fascination (albeit also alienation) with this new driving experience, it seems to me that the mass adoption of self driving cars will have to clear another social hurdle, apart from the technical and legal ones.
    Western society is built around highly self-centered individuals who deem themselves in control over every aspect of modern life and its multitude of risk profiles. Will these types of individuals voluntarily cede the control and degrees of freedom involved in giving up the conventional car? (And what happens to the collective psyche when the first deadly accident with a self-driving car happens?)
    I have my doubts. Even today there are drivers who steadfastly refuse to use an automatic gearbox – despite its having been around since the 1940s. And has anybody analyzed the actual degree of utilization of driver-assistance systems, such as speed and lane control?
    Overall I could actually picture a future where we will not adopt level 4/5 self-driving transportation (except maybe for the railway) and stay on an evolutionary course of developing conventional cars …

  29. >> Will these types of individuals voluntarily cede
    >> the control and degrees of freedom involved in
    >> giving up the conventional car?

    This just crossed my mind, and it beautifully conveys the sense of freedom attached to conventional car use in modern society, here artfully blended by author Camille Paglia with female emancipation:

    “But Melanie Daniels* is free as a bird: she rides alone, amused by her own meddlesome thoughts and plans. Tippi Hedren** is terrific as she shifts gears and rests her fawn-gloved hands on the steering wheel. What could be more representative of modern female liberation than an elegantly dressed woman gunning a roadster through the open countryside?”

    From: “The Birds”, by Camille Paglia, British Film Institute, 1998

    * movie character in Hitchcock’s “The Birds”
    ** actress ibid.

  30. Beautiful article. I think it is safe to extend the unexpected consequences of the ARPANET to the provision of truthful information and data to the general public, resulting in a complete distortion of public discourse and societal organization. We are seeing this impact elections, policy making, and even how we care for the education of children. The perplexing aspect is that we are all aware the public funded this tech through taxes – and through the flurry of innovation in universities and national labs that put a gigantic amount of IP into open source or licensed it at a discount. Yet now, do those who integrated this tech in clever ways so that it scales pay anything back to society? I don’t see that happening. Without those revenues coming back to society as taxes, we are unable to fund initiatives that socially integrate technology and ease the transition points – basically punishing the most vulnerable in many cases.
