Perhaps through this essay I will get the bee out of my bonnet that fully driverless cars are a lot further off than many techies, much of the press, and even many auto executives seem to think. They will get here and human driving will probably disappear in the lifetimes of many people reading this, but it is not going to all happen in the blink of an eye as many expect. There are lots of details to be worked out.
In my very first post on this blog I talked about the unexpected consequences of having self driving cars. In this post I want to talk about a number of edge cases which I think will make it a very long time before we have level 4 or level 5 self driving cars wandering our streets, especially without a human in them, and even then there are going to be lots of problems.
First though, we need to re-familiarize ourselves with the generally accepted levels of autonomy that everyone is excited about for our cars.
Here are the levels from the autonomous car entry in Wikipedia which attributes this particular set to the SAE (Society of Automotive Engineers):
- Level 0: Automated system has no vehicle control, but may issue warnings.
- Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
- Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
- Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks, but must still be prepared to take control when needed.
- Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
- Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive and make its own decisions.
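The taxonomy above can be summarized compactly in code. This is just an illustrative sketch of the level definitions from the list above, not any standard API; the names are my own:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative encoding of the SAE driving-automation levels."""
    NO_AUTOMATION = 0      # warnings only, human does everything
    DRIVER_ASSISTANCE = 1  # ACC, lane keeping; driver always ready
    PARTIAL = 2            # system steers and brakes, driver monitors
    CONDITIONAL = 3        # driver may look away, must retake control
    HIGH = 4               # no driver attention needed in most environments
    FULL = 5               # no human intervention beyond a destination

def driver_attention_required(level: SAELevel) -> bool:
    # At levels 0 through 3 a human must be ready to take over;
    # this essay is concerned only with levels 4 and 5, where
    # that requirement disappears.
    return level <= SAELevel.CONDITIONAL
```

The dividing line that matters for the rest of this essay is the one that `driver_attention_required` encodes: everything below it still presumes a capable, attentive human in the driver's seat.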
There are many issues with level 2 and level 3 autonomy, which might make them further off in the future than people are predicting, or perhaps even forever impractical due to limitations on how quickly humans can go from not paying attention to taking control in difficult situations. Indeed as outlined in this Wired story many companies have decided to skip level 3 and concentrate on levels 4 and 5. The iconic Waymo (formerly Google) car has no steering wheel or other conventional automobile controls–it is born to be a level 4 or level 5 car. [This image is from Wikipedia.]
So here I am going to talk only about level 4 and level 5 autonomy, and not really make a distinction between them. When I refer to an “autonomous car” I’ll be talking about ones with level 4 or level 5 autonomy.
I will make distinctions between cars with conventional controls, so that they are capable of being driven by a human in the normal way, and cars like the Waymo one pictured above with no such controls, which I will refer to as unconventional cars. I'll use those two adjectives, conventional and unconventional, for cars, and then distinguish what is necessary to make them practical in some edge case circumstances.
I will also refer to gasoline powered driverless cars versus all electric driverless cars, i.e., gasoline vs. electric.
Ride-sharing companies like Uber are putting a lot of resources into autonomous cars. This makes sense given their business model as they want to eliminate the need for drivers at all, thus saving their major remaining labor cost. They envision empty cars being summoned by a customer, driving to wherever that customer wants to be picked up, with absolutely no one in the car. Without that, having the autonomy technology doesn’t make sense to this growing segment of the transportation industry. I’ll refer to such an automobile, with no one in it, as a Carempty. In contrast, I’ll refer to an autonomous car which has a conscious person in it, whether it is an unconventional car that they can’t actually drive in the normal way, or a conventional car in which they are not at all involved in the driving, perhaps sitting in the back seat, as a Careless, as presumably that person couldn’t care less about the driving other than indicating where they want to go.
So we have both an unconventional and a conventional Carempty and Careless, and perhaps they are gasoline or electric.
Many of the edge cases I will talk about here are based on the neighborhood in which I live, Cambridgeport in Cambridge, Massachusetts. It is a neighborhood of narrow one way streets, packed with parked cars on both sides of the road, so that it is impossible to pass by if a car or truck stops in the road. A few larger streets are two way, and some of them have two lanes, one in each direction, but at least one nearby two way street only has one lane–one car needs to pull over, somehow, if two cars are traveling in opposite directions (the southern end of Hamilton Street in the block where the “The Good News Garage” of the well known NPR radio brothers “Click and Clack” is located).
HOW MUCH DRIVING CAN A NON-DRIVER DO?
In a conventional Careless a licensed human can take over the driving when necessary, unless say it is a ride sharing car, in which case humans might be locked out of using the controls directly. For an unconventional Careless, like one of the Waymo cars pictured above, the human cannot take over directly either. So a passenger in a conventional ride-sharing car and a passenger in an unconventional car are in the same boat. But how much driving can that human do?
In both cases the human passenger needs to be able to specify the destination. For a ride-sharing service that may have been done on a smart phone app when calling for the service. But once in the car the person may want to change their mind, or demand that the car take a particular route–I certainly often do that with less experienced drivers who are clearly going a horrible way, often at the suggestion of their automated route planners. Should all this interaction be via an app? I am guessing, given the rapid improvements in voice systems, such as we see in the Amazon Echo, or the Google Home, we will all expect to be able to converse by voice with any autonomous car that we find ourselves in.
We’ll ignore for the moment a whole bunch of teenagers each yelling instructions and pranking the car. Let’s just think about a lone sensible mature person in the car trying to get somewhere.
Will they be able to do no more than give the destination and some optional route advice, or will they be able to give more detailed instructions when the car is clearly screwing up, or missing some perceptual clue that the occupant can clearly recognize? The next few sections give lots of examples from my neighborhood that are going to be quite challenging for autonomous cars for many years to come, and so such advice will come in handy.
In some cases the human might be called upon to, or just wish to, give quite detailed advice to the car. What if they don’t have a driver’s license? Will they be guilty of illegally driving a car in that case? How much advice should they be allowed to give (spoiler alert, the car might need a lot in some circumstances)? And when should the car take the advice of the human? Does it need to know if the person in the car talking to it has a driver’s license?
WHAT TO DO ABOUT A BLOCKED ROAD
In my local one-way streets the only thing to do if a car or other vehicle is stopped in the travel lane is to wait for it to move on. There is no way to get past it while it stays where it is.
The question is whether to toot the horn or not at a stopped vehicle.
Why would it be stopped? It could be a Lyft or an Uber waiting for a person to come out of their house or condominium. A little soft toot will often get cooperation and they will try to find a place a bit further up the street to pull over. A loud toot, however, might cause some ire and they will just sit there. And if it is a regular taxi service then no amount of gentleness or harshness will do any good at all. “Screw you” is the default position.
Sometimes a car is stopped because the driver is busy texting, most usually when they are at an intersection, had to wait for someone to cross in front of them, their attention wandered, they started reading a text, and now they are texting and have forgotten that they are in charge of an automobile. From behind one can often tell what they are up to by noticing their head inclination, even from inside the car behind. A very gentle toot will usually get them to move; they will be slightly embarrassed at their own illegal (in Massachusetts) behavior.
And sometimes it is a car stopped outside an eldercare residence building with someone helping a very frail person into or out of the car. Any sort of toot from a stopped car behind is really quite rude in these circumstances, distressing for the elderly person being helped, and rightfully ire raising for the person taking care of that older person.
Another common case of road blockage is a garbage truck stopped to pick up garbage. There are actually two varieties, one for trash and one for recyclables. It is best to stop back a bit further from these trucks than from other things blocking the road, as people will be running around to the back of the truck and hoisting heavy bins into it. And there is no way to get these trucks to move faster than they already are. Unlike other trucks, they will continue to stop every few yards. So the best strategy is to follow, stop and go, until the first side street and take that, even if, as it is most likely a one-way street, it sends you off in a really inconvenient direction.
Yet a third case is a delivery truck. It might be a US Postal Service truck, or a UPS or Fedex truck, or sometimes even an Amazon branded truck. Again tooting these trucks makes absolutely no difference–often the driver is getting a signature at a house, or may be in the lobby of a large condominium complex. It is easy for a human driver to figure out that it is one of these sorts of trucks. And then the human knows that it is not so likely to stop again really soon, so staying behind this truck once it moves rather than taking the first side street is probably the right decision.
If on the other hand it is a truck from a plumbing service, say, it is worth blasting it with your horn. These guys can be shamed into moving on and finding some sort of legal parking space. If you just sit there however it could be many minutes before they will move.
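The last several paragraphs amount to a little policy table: classify why the vehicle ahead has stopped, then pick a response. Here is that table in code, purely as an illustration; the category names and action labels are invented, and the genuinely hard part, which the code glosses over entirely, is the perception needed to produce the classification in the first place:

```python
# Hypothetical policy distilled from the blocked-road situations above.
# Classifying the blocker correctly is the unsolved perceptual problem;
# this table only captures what a savvy local driver does afterwards.
BLOCKED_ROAD_POLICY = {
    "ride_share_waiting": "soft_toot",           # a gentle nudge often works
    "taxi_waiting": "wait",                      # no toot will help at all
    "texting_driver": "gentle_toot",             # mild embarrassment moves them
    "eldercare_pickup": "wait_quietly",          # any toot at all is rude here
    "garbage_truck": "follow_then_side_street",  # it will stop every few yards
    "delivery_truck": "wait_then_follow",        # unlikely to stop again soon
    "tradesperson_truck": "loud_toot",           # can be shamed into moving
}

def respond_to_blockage(blocker: str) -> str:
    # When the situation is unrecognized, default to the response that
    # can never cause offense: sit quietly and wait.
    return BLOCKED_ROAD_POLICY.get(blocker, "wait_quietly")
```

Even this toy version makes the point: the right action depends on a social reading of the scene, not just on detecting that the lane is blocked.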
A Careless automobile could ask its human occupant whether it should toot. But should it make a value judgement if the human is spontaneously demanding that it toot its horn loudly?
A Carempty automobile could just never toot, though the driver in a car behind it might start tooting it, loudly. Not tooting is going to slow down Carempties quite a bit, and texting drivers just might not care at all if they realize it is a Carempty that is showing even a little impatience. And should an autonomous car be listening for toots from a car behind it, and change its behavior based on what it hears? We expect humans to do so. But are the near future autonomous cars going to be so perfect already that they should take no external advice?
Now if Carempties get toot happy, at least in my neighborhood, that will annoy the residents, who will have cars tooting outside their houses at a much higher level than at the moment, and the Carempties might start to annoy the human drivers in the neighborhood too.
The point here is that there are a whole lot of perceptual situations that an autonomous vehicle will need to recognize if it is to be anything more than a clumsy moronic driver (an evaluation us locals often make of each other in my neighborhood…). As a class, autonomous vehicles will not want to get such a reputation, as the humans will soon discriminate against them in ways subtle and not so subtle.
MAPS DON’T TELL THE WHOLE STORY
Recently I pulled out of my garage and turned right onto the one way street that runs past my condominium building, and headed to the end of my single block street, expecting to turn right at a “T” junction onto another one way street. But when I got there, just to the right of the intersection the street was blocked by street construction, cordoned off, and with a small orange sign a foot or so off the ground saying “No Entry”.
The only truly legal choice for me to make was to stop. To go back from where I had come I needed to travel the wrong way on my street, facing either backwards or forwards, and either stopping at my garage, or continuing all the way to the street at the start of my street. Or I could turn left and go the wrong way on the street I had wanted to turn right onto, and after a block turn off onto a side street going in a legal direction.
A Careless might inform its human occupant of the quandary and ask for advice on what to do. That person might be able to do any of the social interactions needed should the Careless meet another car coming in the legal direction under either of these options.
But Carempty will need some extra smarts for this case. Either hordes of empty cars eventually pile up at this intersection or each one will need to decide to break the law and go the wrong way down one of the two one way streets–that is what I had to do that morning.
The maps that a Carempty has won’t help it a whole lot in this case, beyond letting it know the minimum distance it is going to have to be in a transgressive state.
Hmmm. Is it OK for a Carempty to break the law when it decides it has to? Is it OK for a Careless to break the law when its human occupant tells it to? In the situation I found myself in above, I would certainly have expected my Careless to obey me and go the wrong way down a one way street. But perhaps the Careless shouldn’t do that if it knows that it is transporting a dementia patient.
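One could imagine the car arbitrating these questions with a rule something like the following. This is a purely hypothetical sketch; every predicate in it (how the car grades a violation's severity, how it knows an occupant is licensed, how it knows a passenger is vulnerable) is exactly one of the unsolved problems this essay is pointing at:

```python
def may_break_rule(rule_severity: int,
                   car_judges_necessary: bool,
                   occupant_requests: bool,
                   occupant_licensed: bool,
                   occupant_vulnerable: bool) -> bool:
    """Hypothetical arbitration for a minor traffic-rule exception,
    e.g. going the wrong way down a blocked one-way street.
    Severity 1 means minor; anything higher is refused outright
    in this sketch."""
    if rule_severity > 1:
        return False
    if car_judges_necessary:
        # The Carempty case: the car itself concludes there is
        # no legal way out of the situation.
        return True
    # Otherwise defer to an occupant's instruction, but only if they
    # hold a license and are not, say, a dementia patient in transit.
    return occupant_requests and occupant_licensed and not occupant_vulnerable
```

Writing the rule down this way mostly shows how much legal and regulatory codification would be needed before any manufacturer could ship it: who decides the severity scale, and who is liable when the exception goes wrong?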
How are the police supposed to interact with a Carempty?
While we have both driverful and driverless cars on our roads I think the police are going to assume that as with driverful cars they can interact with them by waving them through an intersection perhaps through a red light, stopping them with a hand signal at a green light, or just to allow someone to cross the road.
But besides being able to understand what an external human hand signaling them is trying to convey, autonomous cars probably should try to certify in some sense whether the person that is giving them those signals is supposed to be doing so with authority, with politeness, or with malice. Certainly police should be obeyed, and police should expect that they will be. So the car needs to recognize when someone is a police officer, no matter what additional weather gear they might be wearing. Likewise they should recognize and obey school crossing monitors. And road construction workers. And pedestrians giving them a break and letting them pass ahead of them. But should they obey all humans at all times? And what if in a Careless situation their human occupant tells them to ignore the taunting teenager?
Sometimes a police officer might direct a car to do something otherwise considered illegal, like drive up on to a sidewalk to get around some road obstacle. In that case a Carempty probably should do it. But if it is just the delivery driver whose truck is blocking the road wanting to get the Carempty to stop tooting at them, then probably the car should not obey, as then it could be in trouble with the actual police. That is a lot of situational awareness for a car to have to have.
Things get more complicated when it is the police and the car is doing something wrong, or there is an extraordinary circumstance which the car has no way of understanding.
In the previous section we just established that autonomous cars will sometimes need to break the law. So police might need to interact with law breaking autonomous cars.
One view of the possible conundrum is this cartoon from the New Yorker. There are two instantly recognizable Waymo style self driving cars, with no steering wheels or other controls, one a police car that has just pulled over the other car. They both have people in them, and the cop is asking the guy in the car that has just been pulled over, “Does your car have any idea why my car pulled it over?”.
If an autonomous car fails to see a temporary local speed sign and gets caught in a speed trap, how is it to be pulled over? Does it need to understand flashing blue lights and a siren, and does it do the pull to the side in a way that we have all done, only to be relieved when we realize that we were not the actual target?
And getting back to when I had to decide to go the wrong way down a one way street, what if a whole bunch of Carempties have accumulated at that intersection and a police officer is dispatched to clear them out? For driverful cars a police officer might give a series of instructions and point out in just a few seconds who goes first, who goes second, third, etc. That is a subtle elongated set of gestures that I am pretty sure no deep learning network at the moment has any hope of interpreting, of fully understanding the range of possibilities that a police officer might choose to use.
Or will it be the case that the police need to learn a whole new gesture language to deal with driverless cars? And will all makes understand the same language?
Or will we first need to develop a communication system that all police officers will have access to and which all autonomous cars will understand so that police can interact with autonomous cars? Who will pay for the training? How long will that take, and what sort of legislation (in how many jurisdictions) will be required?
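If such a communication system were ever built, the core technical requirement would be that the car can verify an instruction really came from a police officer with authority in that jurisdiction. Here is a minimal sketch of that idea. A real system would presumably use public-key certificates issued per jurisdiction; an HMAC with a shared key stands in here purely to keep the example self-contained, and every name in it is invented:

```python
import hashlib
import hmac
import json

# Invented for illustration; a real deployment would distribute
# per-jurisdiction certificates, not a single shared secret.
JURISDICTION_KEY = b"cambridge-pd-demo-key"

def sign_instruction(instruction: dict, key: bytes = JURISDICTION_KEY) -> dict:
    """An officer's device signs an instruction for nearby cars."""
    payload = json.dumps(instruction, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def car_accepts(message: dict, key: bytes = JURISDICTION_KEY) -> bool:
    # The car obeys only instructions whose signature verifies,
    # sidestepping the "is this really a police officer, or a
    # taunting teenager?" perception problem for this one channel.
    expected = hmac.new(key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

The cryptography is the easy part; the essay's questions about who pays for the rollout, who trains every officer, and how many jurisdictions must pass enabling legislation are what would make it slow.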
A lot of cars get towed in Cambridge. Most streets get cleaned on a regular schedule (different sides of the same street on different days), and if your car is parked there at 7am you will get towed–see the sign in the left image. And during snow emergencies, or without the right sticker/permit you might get towed at any time. And then there are pop-up no parking signs, partially hand written, that are issued by the city on request for places for moving vans, etc. Will our autonomous cars be able to read these? Will they be fooled by fake signs that residents put up to keep pesky autonomous cars from taking up a parking spot right outside their house?
If an unconventional Carempty is parked on the street, one assumes that it might at any time start up upon being summoned by its owner, or, if it is a ride-share car, when its services are needed. So now imagine that you are the tow truck operator and you are supposed to be towing such a car. Can you be sure it won’t try driving away as you are crawling under it to connect the chains, etc., to tow it? If a human runs out to move their car at the last minute you can see when things are going to start and adjust. How will it work with fully autonomous cars?
And what about a Carempty that has a serious breakdown, perhaps in its driving system, so that it just sits there and can no longer safely move itself? That will most likely need to be towed. Will the tow truck operator have some way to guarantee that it is shut down and will not jump back to life, especially when the owner has not been contactable to put it in safe mode remotely? What will be the protocols and regulations around this?
And then if the car is towed, and I know this from experience, it is going to be in a muddy lot full of enormous potholes in some nearby town, with no marked parking areas or driving lanes. The cars will have been dumped at all angles, higgledy-piggledy. And the lot is certainly not going to have its instantaneous layout mapped by one of the mapping companies, providing the maps that autonomous cars rely on for navigation. To retrieve such a car a human is likely going to have to go do it (and pay before getting it out), but if it is an unconventional car it is certainly going to require someone in it to talk it through getting out of there without angering the lot owner (and again from experience, that is a really easy thing to do–anger the lot owner). Yes, in some distant future tow lots in Massachusetts will be clean, and flat with no potholes deeper than six inches, and with electronic payment systems, and all will be wonderful for our autonomous cars to find their way out.
Don’t hold your breath.
OTHER TRICKY SITUATIONS
What happens when a Carempty is involved in an accident? We know that many car companies are hoping that their cars will never be involved in an accident, but humans are dumb enough that as long as there are both human drivers and autonomous cars on the same streets, sometimes a human is going to drive right into an autonomous car.
Autonomous cars will need to recognize such a situation and go through some protocol. There is a ritual when a fender bender happens between two driverful cars. Both drivers stop and get out of their cars, perhaps blocking traffic (see above) and go through a process of exchanging insurance information. If one of the cars is an autonomous vehicle then the human driver can take a photo on their phone (technology to the rescue!) of the autonomous car’s license plate. But how is a Carempty supposed to find out who hit it? In the distant future when all the automobile stock on the road have transponders (like current airplanes) that will be relatively easy (though we will need to work through horrendous privacy issues to get there), but for the foreseeable future this is going to be something of a problem.
And what about refueling? If a ride-sharing car is gasoline powered and out giving rides all day, how does it get refueled? Does it need to go back to its home base to have a human from its company put in more gasoline? Or will we expect to have auto refueling stations around our cities? The same problem will be there even if we quickly pass beyond gasoline powered cars. Electric Carempties will still need to recharge–will we need to replace all the electric car recharging stations that are starting to pop up with ones that require no human intervention?
Autonomous cars are likely to require lots of infrastructure changes that we are just not quite ready for yet.
Impacts on the Future of Autonomous Cars
I have exposed a whole bunch of quandaries here for both Carempties and Carelesses. None rise to the moral level of the so called trolley problem (do I kill the one nun or seven robbers?) but unlike the trolley problem variants of these edge cases are very likely to arise, at least in my neighborhood. There will be many other edge case conundrums in the thousands, perhaps millions, of unique neighborhoods around the world.
One could try to have some general purpose principles that cars could reason from in any circumstances, perhaps like Asimov’s Three Laws, and perhaps tune the principles to the prevailing local wisdom on what is appropriate or not. In any case there will need to be a lot of codifying of what is required of autonomous cars in the form of new traffic laws and regulations. It will take a lot of trial and error and time to get these laws right.
Even with an appropriate set of guiding principles there are going to be a lot of perceptual challenges for both Carempties and Carelesses that are way beyond those that current developers have solved with deep learning networks, and perhaps a lot more automated reasoning that any AI systems have so far been expected to demonstrate.
I suspect that to get this right we will end up wanting our cars to be as intelligent as a human, in order to handle all the edge cases appropriately.
And then they might not like the wage levels that ride-sharing companies will be willing to pay them.
But maybe not. I may have one more essay on how driverless cars are going to cause major infrastructure changes in our cities, just as the original driverful cars did. These changes will be brought on by the need for geofencing–something that I think proponents are underestimating in importance.
Recall that Isaac Asimov used these laws as a plot device for his science fiction stories, by laying out situations where these seemingly simple and straightforward laws led to logical fallacies that the story proponents, be they robot or human, had to find a way through.
28 comments on “Edge Cases For Self Driving Cars”
Mr. Brooks, thank you very much, I enjoyed the read. I could not agree more with “I suspect that to get this right we will end up wanting our cars to be as intelligent as a human”.
It is time to say the emperor has no clothes. Driving is an AGI activity. And there is no indication we are moving anywhere in AGI direction.
If you are referring to 4 & 5 only then why are you referring to one way streets? Surely they will be a thing of the past when ‘ways’ on a street are blurred by the most effective routes of the collective autonomous cars utilising the road network.
In the very long term that may well be the case. This essay is about how we get there from here. If we are going to have level 4 or level 5 cars in the short term (less than 10 years) then they will be sharing roads with cars with drivers, and we’ll still have one way streets. Estimates are that once all new cars are level 4/5 it will take about 30 years for driverful cars to disappear, unless there is draconian regulation shutting them down. History says that people in the US don’t like having machines taken away from them by legislation, no matter how dangerous they are to themselves and others.
Great article – I wonder if tow truck drivers will carry ‘blackout covers’ they can put over the sensors, disabling an automated car to ensure it won’t suddenly start up. Likewise, having some special way to communicate with such cars to send them specific signals might be best. We have ‘learned’ the way to communicate with other drivers – i.e. what a wave means, what putting hands up means, etc., so we can define a special language/gestures that are clear.
Regarding crazy teenagers messing with cars – I am sure that will happen, and we will probably create laws about ‘interfering with an autonomous vehicle’ and the self-driving cars will likely all have video recording to be usable as evidence, as well as phone home whenever a suspicious situation occurs.
Great points, and thanks!
Rod, I will write a more detailed response later — I am about to go on vacation and I just left a conference in Europe on self-driving car testing.
In this article you make a very popular error. You see something on the road that seems difficult to solve after a short (or medium) reflection and wonder if it’s going to be so hard as to take years or decades. With that logic, 15 years ago we would have looked at the fact that robots couldn’t handle public roads at all and concluded there were far too many problems to get where they are now.
In fact anything you’ve seen, anything you’ve thought of with a few months of thought and observation, and anything a car will encounter in many human lifetimes of driving are already known to most of the self-driving teams out there, especially Waymo which has done 4 million km of driving. Mostly in California, but they and others will be spreading to Cambridge and many more difficult places before long. As the fleets grow, they will rack up thousands of human lifetimes of experience encountering things.
On top of that, people are building simulators. Many companies at the testing conference are, and many companies already have. And they take every strange thing they encounter and they put it in the simulator. They also take each strange thing that’s important and they make tools to run a thousand different variants of it, combining this with that and that. Every type of stopped car on every type of road. They put things you would never imagine in the simulator. And they test them all every build.
NHTSA actually requires in their current rules (which may get overturned by the republicans) that all companies make public sensor logs from all challenging and dangerous incidents. If that happens, both academics and companies will take every odd thing that happens to every car from every company and put it into simulator models. Every car will get tested against every variation of every problem that programmers can parameterize. Your list is actually not that scary at all — if that’s the best you found to be afraid of, we’re going to see cars in your area much sooner than I expected.
The simulations won’t cover absolutely everything. But they will make it very rare that you run into something you can’t handle. When you do, you call back to the control center, where a human looks out the cameras and figures out what to do. Whether to toot a horn or back up or anything else.
In addition, you just don’t drive down roads that are often blocked, unless you are going to an address on that road. Especially in an unmanned vehicle. Or any other road that’s problematic. You stick to the roads you know you can handle best.
Brad, I don’t think we have ever met, but I assure you that I am in weekly contact and discussions with key players worrying about self driving cars at the OEMs, startups, and non-traditionals. And yes, in fact I can imagine what they are putting in the simulators as I talk all the time to the teams that are building them. And they all agree that there is a long tail that is going to take a long time to fill out.
I must have not written my post very well, as you are making my points while thinking you are disagreeing with me. Yes, all these things are possible and will eventually come to pass, and I say so right in the first paragraph. My point is that it is not just going to happen in the short term, as many people are prognosticating. It is going to take a long time to get all these changes in place, and while we have human driven cars and pedestrians sharing roads with self driving cars it is going to be quite difficult. There will be lots of hiccups along the way, and there will be lots of regulatory challenges.
Also, your last paragraph about just staying away from difficult places–my point is that in my neighborhood that is impossible, right here in the USA. [There are many countries where things are much tougher, even on major thoroughfares in major cities.] By your argument there will not be self driving cars in my neighborhood. And that was what I was saying, though in the longer term I am confident we will get there. But that may be thirty years away.
(Guess I didn’t make much impression in the times we have met, oh well…)
The point I have been making is not that your problems aren’t real, but that they are not dealbreakers, because almost everything that any of us can think of, whatever we will observe on the city streets, is being and will be enumerated and put into sims, and all cars that use those sims will be tested in all the parameterized variants of the scenarios that the coders can program. Over time, the number of situations that cause a problem diminishes. It does not go to zero, and perhaps it reaches a plateau as new things arise, but the only question is when it gets small enough that you can deploy. Humans fail at strange road situations all the time, and we deploy them.
You combine this with a few other key resources. First, most of the time, you have a human in the car, and almost all the time, that human is capable of assisting. Second, when that’s not true, you have a data network where a vehicle can call upon remote help. That network is already quite large, and it is sure to grow to close to 100% penetration. It does not have to get to 100% penetration for the reason I cite — you just don’t send unmanned vehicles to places they can’t handle that have no data. If you really need to go there, you put data there. That’s not free but it’s not that expensive.
Your neighbourhood, though I have not driven it, does not seem impossible from your description. But if it truly is, we then move to my least favoured solution, the political one. People are going to love these cars and the rules will bend, with time, to enable them. If Manhattan can get rid of honking and gridlock, things can change anywhere. So that means that yes, if you block a street for five minutes, you had better worry that a self-driving car will come up behind you and send a video of your car blocking the street (presuming this is against the law) to enforcement. Enforcement’s AIs won’t do anything to you the first time, but by the 20th time you get to the top of their lists, and pay for it. And you’ll stop blocking the street.
If that’s what we need. We don’t need it in most places; in most places it does work. Maybe India needs it. Maybe Boston does. If so, and the public has to choose between having these wonderfully handy vehicles for people and cargo, and what they have now, I think most will change.
I’m having trouble figuring out what we are disagreeing on… Yes, we will solve all these problems with time.
In the first paragraph I say: “They will get here and human driving will probably disappear in the lifetimes of many people reading this, but it is not going to all happen in the blink of an eye as many expect.”, followed by: “There are lots of details to be worked out.”.
I am arguing that it is not all going to happen in the blink of an eye as many have predicted. All neighborhoods will have their own unique set of problems, as well as variations of the ones I describe for mine. And yes, laws to accommodate self driving cars will end up changing our cities, but over time. Not by 2020 as many have predicted. It will be a long slow process. But yes, it will all get solved.
What I am trying to do with these posts is bring to the tech enthusiast world that there is more work to do, and more problems to be solved.
Erm. Wouldn’t Uber just have the option of 5G remote control built into all of their vehicles? Anytime something unexpected happens, or a rider requests, then the car is temporarily taken over by one of the bank of full-time human drivers employed in Uber’s remote control centre. Meanwhile the AI learns from how the human driver handles the problem.
Yeah, I am guessing this will be part of the solution. It is a tried and true mechanism. Many years ago InTouch Health in Santa Barbara, with hundreds of deployed remote presence robots for doctors in distant US hospitals, had an operations center in Argentina. Operators there would take over the robots at night to make sure they were plugged in to the rechargers, do preventive maintenance, etc. Aethon in Pittsburgh, with tug robots deployed in hospitals around the US autonomously taking dirty bedding to the laundry, and used meal trays and dishes back to the kitchen, had a central operations center in Pittsburgh. I visited about 12 years ago. Whenever a robot got into trouble it would call the center and an operator would take control, looking through the cameras on board the tug to fix the problem. Both these companies benefitted from WiFi being pervasive in hospitals (for remote access to medical records from hand-helds) already–if they had had to get hospitals to install a network just for them I don’t think either could have overcome that hurdle. So using 5G for Ubers etc., makes sense. But see below.
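The escalation pattern common to both of those deployments (run autonomously while confident, hand off to a remote human operator when not) can be sketched in a few lines. This is only an illustration; the mode names, the confidence threshold, and the safe-stop fallback are my own assumptions, not any vendor’s actual logic:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    REMOTE = "remote"        # human operator takes over via the data link
    SAFE_STOP = "safe_stop"  # no help available: pull over and wait

def choose_mode(plan_confidence: float, link_up: bool,
                confidence_floor: float = 0.8) -> Mode:
    """Escalation policy: drive autonomously while the planner is
    confident, hand off to a remote operator when it is not, and
    stop safely if the data link is down as well."""
    if plan_confidence >= confidence_floor:
        return Mode.AUTONOMOUS
    if link_up:
        return Mode.REMOTE
    return Mode.SAFE_STOP
```

The interesting engineering is of course in producing `plan_confidence` and in keeping the link reliable, which is exactly why network coverage matters so much.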
Another case that I have seen, also on the order of 12 years or more ago, was in the port of Singapore, the world’s highest volume container port, stretching six miles along the coast of Singapore, which is remarkable for a country the size of Martha’s Vineyard. Many of the containers are getting switched between ships–it is a central switching node from many different Asian ports, for containers heading to North America and Europe, and so it is the hub of a hub and spoke mechanism shuffling containers to the right destinations. Most containers are only on the ground for 24 hours or so, stacked up quite a few high. At the time an AI planner (written in Prolog!!–it is the ultimate blocks world after all) would say where each container had to go, and cranes on aerial rails would get them to and from ships and to and from the right ground stack. But the last few seconds of pickup and put down were done by a human who would be switched into the crane’s cameras and accurately drive the crane’s position during the terminal few meters of the grasp for pickup, or the put down.
So yes, this may well be the sort of solution that a ride share company uses for difficult situations, and might be provided as a service for private owners of self driving cars. 5G is probably the right network. Tests start in 11 US cities this year, and coverage should reach about 100 million people in the US by 2022. It will slowly, but eventually, fill out the tail over a few more years.
BUT, this will not be available in more than a few places by 2020, when many have predicted driverless cars will be well established and deployed. And besides the network there will be lots of other infrastructure and regulations to build out.
I am not saying that solutions will not be found and implemented eventually. I am saying that there are so many challenges (this is just one of many, many) that it is going to take a decade at least until we have even partial penetration, and many decades until it is the default.
Nice essay and a good way to get into the nitty gritty of car driving. One question: do you believe a control centre with operators could help the poor carelesses when they get stuck? At least they could give authorized instructions in weird and unexpected situations. And it would show that even with level 4 or 5, humans are not quite out of the equation yet…
Yep, I agree. See my comments above in reply to Laurence.
Great read, thanks for sharing.
What about a Mechanical Turk-style solution? One could imagine Carempties in a bind streaming their sensor feeds out to an off-site gig driver, with a response time not far off that of a distracted, texting driver.
Yep, again, see my reply to Laurence above.
One of your problems has been solved by various industrial machines and most race cars–an externally mounted master shutoff switch that disconnects the batteries. So the tow truck driver can “safe” the vehicle for towing. Of course, that brings with it its own problems, like malicious folk slapping that button when you drive past them.
And it’s interesting to hear your commentary from the standpoint of someone who lives in a driving environment that is very, very complex and difficult. My own environment is radically different–rural, roomy, occasionally muddy–but with no fewer problems to solve. The difference is that they will put a priority on solving your problems because yours affect a large percentage of the population, but my environment’s problems will be very low on the priority list. I’m not optimistic that enough of the problems will be solved to make this practical technology.
I suspect that manufacturers and owners will not want the battery disconnected–self driving cars are going to be complex devices and will probably want to keep situational awareness at all times as part of their self-certification of non-interference and chain of custody, at the very least, but also as part of asset tracking–ride-share companies will not want their autos to disappear from their dynamic geographic database. BUT, even if a battery disconnect is the answer there are two other problems besides the malicious outsider. We will need a standard, and legislated, way for it to be implemented, which will take some years to come about. And secondly we will need some sort of authority to be granted to someone in order to implement the safe-ing. Will any tow truck operator be able to legally safe any self-driving car? If so, what “key” will they have, and how will that be protected? All these issues are solvable technically. My point is that getting agreement and widespread adoption is going to take time.
And yes, as always, the future will most likely not be delivered uniformly or equitably.
First off, great article. As a Mid-Cambridge resident, your edge cases are all too familiar to me.
One solution that I see for a lot of the scenarios involving hand signals is granting authority to police and others through some sort of proximity device. These would be government-issued and signal to autonomous cars that the person is to be obeyed.
For street cleaning, snow emergencies, moving vans, etc. perhaps a map where city officials can geofence particular sections of streets would work.
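As a sketch of how a car might consume such a feed, suppose the city publishes restricted zones as bounding boxes with time windows, and the planner asks which restrictions cover a given point right now. The record format and function names here are invented for illustration, not any real municipal API:

```python
from datetime import datetime

def active_restrictions(zones, lat, lon, now):
    """Return the reasons for every restriction covering (lat, lon) at `now`.

    Each hypothetical zone record is:
        (sw_corner, ne_corner, start, end, reason)
    where the corners are (lat, lon) pairs published by the city."""
    hits = []
    for sw, ne, start, end, reason in zones:
        inside = sw[0] <= lat <= ne[0] and sw[1] <= lon <= ne[1]
        if inside and start <= now <= end:
            hits.append(reason)
    return hits

# Example: one street segment closed for snow removal during the day.
zones = [((42.3736, -71.1106), (42.3742, -71.1090),
          datetime(2018, 2, 1, 6, 0), datetime(2018, 2, 1, 18, 0),
          "snow emergency")]
```

A real system would use street-segment geometry rather than boxes, but the point is the same: the authority lives in a signed data feed instead of a cop’s hand signals.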
Many thanks for your detailed treatment of the challenges facing AV development. I am working with members of your team at Rethink to assess the viability of Sawyer for an application that will ultimately require a degree of machine situational awareness and so am interested in your thoughts on AVs, which face similar, though far more extreme (particularly in your fair city) challenges.
I wonder how you view the apparently remarkable improvement in disengagement rates reported by Waymo, Ford and others between 2015 and 2016. My impression has been that this indicated the passage of a tipping point in AV system learning that put development on an exponential trajectory. Do you disagree with this assessment or do you simply see the challenges to full autonomy as so great as to be many years off even with exponential growth?
Thanks in advance for your thoughts.
Thoughts on level 6 autonomous cars?
From last week’s Economist:
In a world run by blockchains, decentralisation could be pushed even further, to include objects. Once they have their own identity and can be controlled via a blockchain, it is possible to imagine them becoming, in a way, self-determining. A few years back, Mike Hearn, a former bitcoin developer who now works for R3, a blockchain consortium, suggested the idea of self-driving cars which are also financially autonomous. Guided by smart contracts, they would stash away some of the digital money they make by ferrying people around, so as to pay for repairs or to replace themselves when repairs are no longer worthwhile. They would put themselves in long-term parking if not enough rides are to be had—or emigrate to another city. They could issue tokens to raise funds and to allow owners to get part of their profits.
“A level six autonomous car is legally a person”? Do they get to vote?
Is this Pavlo, as in the Pavlo I know?
They should get to vote, probably do a better job of it than the lot of us.
Yes, Rod, the one and only. Surprised you only know one Pavlo. Looking forward to discussing over a beer someday.
This is why I often say that we’ve done the first 99% of autonomous driving but the next 99% is going to be much harder.
One issue that people seem to forget is that driving involves conversations with other drivers and other people.
The focus on carempty is strange — it’s as if the goal was one-to-one replacement for a driver like the blow-up doll auto-pilot in the movie Airplane!. If the goal is to transport people and/or goods then we should be thinking systems rather than just replacement.
I focussed on carempty because Uber, and now Lyft, are looking at getting the driver out of their cars, for I think clear reasons–not having to pay them. But that means that an Uber or Lyft will be completely empty when it is on its way to pick up passengers. Not too long ago, Uber was saying they would be rolling this out in 2020. I was trying to make the point that there are so many edge cases that it is not likely to happen soon.
Agree. Too bad the press tends to be so noncritical. I also wish we’d talk about machine training rather than machine learning. This is one reason I wrote http://rmf.vc/IEEEAlienDrove.
(BTW your site doesn’t do notification of responses …)
“There’s word in business circles that the computer industry likes to measure itself against the Big Three auto-makers. The comparison goes this way: If automotive technology had kept pace with Silicon Valley, motorists could buy a V-32 engine that goes 10,000 m.p.h. or a 30-pound car that gets 1,000 miles to the gallon — either one at a sticker price of less than $50. Detroit’s response: ‘OK. But who would want a car that crashes twice a day?’”
There needs to be a level – level 1.5 perhaps – where the autonomous system takes over control of the car from the human driver for a few brief seconds to avoid a collision. That seems to me a very worthwhile goal to pursue and far more realizable than Level 4 and 5.
I wonder if autonomous vehicles are going to have redundant systems in case something critical fails. I’ve had the blind-spot monitor and auto braking system in my car just stop working for no reason, then start working again in a day or two for no reason. It was not a problem since I was warned that it wasn’t working and could deal with it as driver. But if something critical fails in a car with no way for a driver to be the “redundant system” it seems that a built-in redundancy would be a must.
There is going to have to be a lot of redundancy and continuous self test of sensor systems to make sure they are working properly.
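One standard form that redundancy and self-test take (in aviation, among other fields) is triple-redundant sensors with majority voting: if no two of the three channels agree, the system knows the measurement is unhealthy and can degrade gracefully. A toy sketch, with a made-up tolerance parameter:

```python
def vote(readings, tolerance):
    """2-of-3 majority vote over redundant sensor readings.

    Returns (value, healthy). If any two of the three readings agree
    within `tolerance`, return their average and True. If no pair
    agrees, return (None, False) -- the cue for the vehicle to fall
    back to a degraded mode (slow down, pull over, call for help)."""
    a, b, c = readings
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2.0, True
    return None, False
```

The same vote also identifies which sensor drifted, which is what makes continuous self-test possible rather than discovering the failure at the worst moment.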