A story on how far away self-driving cars are just came out in The Verge. It is more pessimistic than most on when we will see truly self-driving cars on our existing roads. For those of you who have read my blog posts on the unexpected consequences and the edge cases for self-driving cars or my technology adoption predictions, you will know that I too am pessimistic about when they will actually arrive. So I tend to agree with this particular story, and with its account of the outstanding problems for AI pointed out by the various people interviewed for it.
BUT, there is one section that stands out for me.
Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around. As an example of an unpredictable case, I asked him whether he thought modern systems could handle a pedestrian on a pogo stick, even if they had never seen one before. “I think many AV teams could handle a pogo stick user in pedestrian crosswalk,” Ng told me. “Having said that, bouncing on a pogo stick in the middle of a highway would be really dangerous.”
“Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate,” he said. “Safety isn’t just about the quality of the AI technology.”
Now I really hope that Andrew didn’t say all this stuff. Really, I hope that. So let’s assume someone else actually said this. Let’s call him Professor Confused, whoever he was, just so we can reference him.
The quoted section above comes right after two paragraphs about recent fatal accidents involving self-driving cars (though in each case the person in the driver’s seat probably should not have left the vehicle unattended). Of the three accidents, only one involved an external person: the woman pushing a bicycle across the road in Phoenix this last March, killed by an experimental Uber vehicle driving itself.
In the first sentence Professor Confused seems to be saying that he is giving up on the promise of self-driving cars seamlessly slotting into the existing infrastructure. Now he is saying that every person, every “bystander”, is going to be responsible for changing their behavior to accommodate imperfect self-driving systems. And they are all going to have to be trained! I guess that means all of us.
The great promise of self-driving cars has been that they will eliminate traffic deaths. Now Professor Confused is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?
If changing everyone’s behavior is on the table then let’s change everyone’s behavior today, right now, and eliminate the annual 35,000 fatalities on US roads, and the 1 million annual fatalities world-wide. Let’s do it today, and save all those lives.
Professor Confused suggests having the government ask people to be lawful. Excellent idea! The government should make it illegal for people to drive drunk, and then ask everyone to obey that law. That will eliminate half the deaths in the US immediately. Let’s just do that today!
I don’t know who the real Professor Confused is that the reporter spoke to. But whoever it is just completely upended the whole rationale for self-driving cars. Now the goal, according to Professor Confused, as reported here, is self-driving cars, right or wrong, über alles (so to speak). And you people who think you know how to currently get around safely on the street better beware, or those self-driving cars are licensed to kill you and it will be your own damn fault.
PS This is why the world’s relative handful of self-driving train systems have elaborate safeguards to make sure that people can never get onto the tracks. Take a look next time you are at an airport and you will see the glass wall and doors that keep you separated from the track at all times when you are on the platform. And the track does not intersect with any pedestrian or other transport route. The track is routed above and under them all. We are more likely to geofence self-driving cars than accept poor safety from them in our human spaces.
PPS Dear Professor Confused, first rule of product management: if you need the government to coerce a change in behavior of all your potential customers in order for them to become your actual customers, then you don’t got no customers for what you are trying to sell. Hmmm. In this case I guess they are not your customers. They are just the potential literal roadkill in the self-satisfaction your actual customers will experience knowing that they have gotten just the latest gee-whiz technology all for themselves.
50 comments on “Bothersome Bystanders and Self Driving Cars”
Oh, the state of AI is in worse shape than this. From ‘Aristotle created the Computer’:
These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.
From ‘Dark history of intelligence as domination’:
The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.
In 1703, Leibniz completely misinterpreted the hexagrams of the I Ching as confirmation of the universality of his binary system.
The end-result in 2018:
So we may need to scrap a lot of the frameworks and start over, unless other Professors Confused are suggesting that we lobotomise people and replace our quantum oxygen-carbon-nitrogen-hydrogen atoms with on/off electricity-conducting silicon chips, thereby reducing the wonderful complexity and dynamism of human biology, culture and behavior to fit the machine’s binary logic.
I’m working hard on two long essays on topics related to this. Hopefully I’ll have them in good shape very soon.
That’d be great! I can’t wait to read them…
I have written a short book called Driverless Cars: On a Road to Nowhere, which raises precisely this point about how AV developers expect the world around them to be adapted to their needs.
The idea of self-driving cars has never appealed to me, so I thank you for this perceptive albeit snarky exposition.
I too was shocked when I read what Andrew Ng had presumably said in that article… It’s hard to imagine he was the one who said those dumb things about training bystanders, or maybe it was some meaningless text generated by a GAN he designed and trained? Or maybe Drive.ai has hit an insurmountable roadblock and he got desperate…
And someone said I was snarky. But the GAN idea is funny.
I would hope that he was being ironic, and that the parts which made that clear were excised by the reporter … if not, we need a clear-out of the proselytes and prophets of this brave new world. I’m not advising violence, but perhaps a trained ML system which will label them as holding “extreme views”?
I don’t know whether that is the case or not. This seemed like a big turnaround from “self driving cars are here”, which was a headline Andrew Ng used (he had it in his text too): https://medium.com/@andrewng/self-driving-cars-are-here-aea1752b1ad0
My point is that if he redefines self-driving cars to mean every bystander has to be trained to be near them, then they are hardly the self driving cars we were promised. (And of whose timelines I have always been skeptical.)
Your train comment isn’t entirely correct. The Docklands Light Railway in east London operates self-driving trains with open platforms.
BUT, they are not unmanned. There is a PSA (Passenger Service Agent), a human, on each train who controls opening and closing the doors. So I don’t think this contradicts my point at all. It is the passenger access that is the trickiest part on trains. These particular trains allow wheelchair access, and the PSA is probably helpful with that. They do not allow bikes which cannot be folded; since this service has been running since the 1980s, I assume that it is the PSA who enforces that, not a Deep Learning network!
One shouldn’t confuse the unfortunate things Professor Ng said, or the unfortunate state of AV AI, with the optimistic prospects of AI, even for future AVs.
The fundamental problems mentioned (and others) will be solved.
Criticism is more constructive on a positive note than as a destructive, pessimistic one.
I look forward to the problems being solved and AVs being as common and mundane as any car usage.
People tend to overestimate the short term and underestimate the long term; from the perspective of 20 years ago, AV is doing great.
Maybe you haven’t been reading my posts. That is exactly my position. The proponents have been saying full autonomy everywhere in the next five years, or less. Professor Confused is taking a very different position now, and that is my point.
Up in the sky, look! It’s a bird, it’s a plane… actually I don’t know what it is, and that’s where we are with computer recognition.
Any system which can’t tell the difference between a Polar bear and a Meerkat is of no use to mankind.
I agree with the post by Twain that we may have to scrap everything and start again.
Not all gloom though as progress is being made in brain simulations.
However we have a long way to go – champagne in the 22nd century anyone?
I think 22nd century is a better expectation for many of these things.
On the other hand, the sentence from Prof. Confused highlights the most difficult obstacle for self-driving cars, which is not “generalization” as pointed out in The Verge article; it is not about how the car can drive itself but about how the car can understand (anticipate) what others will be doing. By “others” I mean car drivers, cyclists, pedestrians, animals, with their different behavioral styles (driving in Naples requires different socio-driving skills than driving in Berlin or Delhi, even if the cars are the same).
The solution proposed to adapt our behavior to that of the autonomous car sounds like a joke.
But it was written as though he wasn’t joking. Which is why I refuse to believe that he actually said these things. They are just too wacky. And yes, I completely agree about the different driving styles. Cambridgeport in Massachusetts is completely different from San Francisco, which itself is completely different from Mountain View (the latter two places are swarming with Cruise and Waymo cars respectively — I have never seen a single one in Cambridgeport).
A random question. Will the tech industry, before selling us self-driving cars, deploy a fix that would probably save 5,000 lives annually, today? I am talking about Apple, LG, etc. downloading code to their phones that would shut off all connectivity as soon as their accelerometers detect motion over, say, 10 mph. Cheap, fast, and it would dramatically reduce distracted driving and crashes. Now, whenever I bring this up, people object on behalf of passengers: “Why should the passengers be stopped from watching Netflix?” or whatever. My snarky reply is “Passenger convenience is worth 5,000 lives annually?” My more serious reply is: these firms have technological resources of almost infinite depth, and have we already decided they cannot find a way around the passenger problem? We just throw up our hands? End of rant.
This is a good idea. Details need to be figured out, of course. It might make sense to have insurance companies involved. Many cars already connect to a cell phone. So if you want a discount on insurance (and insurance companies would be willing to charge less for drivers who are provably not texting, etc.) you register your phone, the car company puts in some extra tech (fingerprint recognition in the steering wheel?), and then as long as you are driving without texting your insurance goes down (this requires the car making a report to the insurance company; the telematics are already in many cars), and there is a severe penalty if you use the phone for forbidden purposes while driving. This makes it a carrot approach. Of course, this is just a speculative flow of thoughts here. Many details to think through.
Mnyeah, it sounds promising but then if it ever became reality, someone would immediately publish an app that disabled the phone’s accelerometer (or tricked it to think the phone is stationary etc).
For instance, here’s a WikiHow page listing 4 ways to disable a car’s seat belt alarm, starting with a “seat belt alarm stopper”: a device that looks exactly like a seat belt clip, the part that goes into the belt slot, except without the belt.
Apple already has this. I believe it is opt in, it detects when you are driving and disables texts and such. It is easy to get around, but it does help. I don’t know if android has a similar feature. Part of the problem is knowing who is a driver and who is a passenger.
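The lockout idea discussed in this thread boils down to a small piece of decision logic. Here is a minimal sketch in Python, under stated assumptions: the speed samples, the 10 mph threshold from the comment above, the three-sample debounce, and the `registered_passenger` exemption are all illustrative stand-ins; a real phone would fuse GPS and accelerometer data through platform APIs and would need the driver-vs-passenger problem solved, which is exactly the hard part noted above.

```python
# Sketch of a speed-triggered connectivity lockout, as discussed in the
# comments. All names and thresholds here are hypothetical illustrations,
# not any real phone API.

SPEED_THRESHOLD_MPH = 10.0  # threshold suggested in the comment above
DEBOUNCE_SAMPLES = 3        # consecutive samples required before locking

def should_lock_connectivity(speed_samples_mph, registered_passenger=False):
    """Return True if texting/apps should be disabled.

    Locks only after several consecutive samples above the threshold,
    to avoid locking out someone on a brief downhill bicycle run, and
    never locks a user registered as a passenger (the unsolved part of
    the problem, per the thread).
    """
    if registered_passenger:
        return False
    consecutive = 0
    for speed in speed_samples_mph:
        consecutive = consecutive + 1 if speed > SPEED_THRESHOLD_MPH else 0
        if consecutive >= DEBOUNCE_SAMPLES:
            return True
    return False
```

The debounce is the interesting design choice: a single noisy sensor reading should not cut off someone’s phone, but sustained vehicle-speed motion should, which is roughly how opt-in features like Apple’s driving mode behave.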
And this Dr. Ng pushed MOOCs as a universal solution to education?
The funny thing is that there probably is more value in disrupting roads altogether. Meaning going 3d. Could be underground for vertical cities and air for horizontal ones. Or any combination. Lots of tech challenges + social challenges with air, but at least it’s a blank slate.
Minor nitpick: most of Vancouver’s rail mass transit is self-driving and a lot of the stations don’t have any staff presence at all on the platforms (no staff on the trains in general).
Yep, it is competing with Singapore for the longest driverless track in the world. Vancouver has special glass walls separating passengers from the tracks, as in airports. Without that, even driverless systems usually have a person on board each train to deal with closing the doors and passenger safety. The Shimbashi station of the self-driving Yurikamome line in Tokyo has no glass walls, but it has a human monitor on the platform. The other stations all have glass walls.
Vancouver actually has open platforms downtown. It’s pretty wild.
Do those downtown stations have a person controlling the doors? On the Yurikamome line in Tokyo the “downtown” terminal station at Shimbashi has open platforms and a person to control the doors, whereas the quieter stations have glass walls and the doors are automated. Vancouver has both a special human police force for its automated train system (SkyTrain) and, according to Wikipedia, “SkyTrain Attendants provide customer service and first aid, troubleshoot train and station operations, and perform fare checks alongside the transit police force”. There are also an average of 23 closed-circuit TVs in each station, monitored by humans. So there is a lot of real-time human oversight of the system. I think when people hear “self-driving train” they tend to think that the system runs without people. The Vancouver system is impressive but clearly relies on a lot of people to run.
“If you need the government to coerce a change in behavior of all your potential customers in order for them to become your actual customers, then you don’t got no customers for what you are trying to sell.”
Except you can very well imagine self-driving car companies helicopter-dropping cash on cities if they become test tracks, with a long list of stipulations on restricting access and policing the general populace in how to adapt to the new order. Especially smaller cities, and possibly smaller cities with municipal-liberal leanings. City governments can’t move out. Corporate cash can move in. The Amazon HQ2 bidding process shows what municipalities are prepared to do. It never ends well.
Rodney, sure Andrew’s remarks sound a bit ridiculous, but consider: I live in Las Vegas, and as a driver of 35-plus years, I felt I needed to “retrain my neural networks” (i.e. brain) when driving after nearly hitting a pedestrian. So I’m keenly aware when a pedestrian is around the roadside, and my brain jumps into high action when I detect they are there. I felt I was a much better and safer driver. But this neural network training of mine had little use when I visited the busy streets of San Francisco! … boy, what a conflict! People are everywhere, cars are racing up and down narrow paths, and pedestrians seem to jump out when they think they can get by some cars! So there might be some merit to Andrew’s thoughts after all. Just some food for thought! Eric.
[[I have recently been in San Francisco fairly regularly, and compared to Cambridgeport, Massachusetts, where I have lived for the last decade (and worked next to for 35 years), SF seems to me to be calm, with wide open streets, orderly drivers, and unbelievably well behaved pedestrians… come visit Cambridgeport!!! That variation is just in the US. Other countries have even more variations.]]
People who are not familiar with the history of self-driving cars, nor with my writings and my somewhat snarky writing style, may not get the points I am trying to make. The hype about self-driving cars has been that they are imminent, that they will be safe, and that they will slot right into our existing infrastructure very soon. Early predictions were for 2017, and I have a whole list of quotes from senior executives predicting various dates centered around 2020, with 2018, 2019, etc. sprinkled in there. The new reveal here is that one of the biggest advocates of “they are already here” (see https://medium.com/@andrewng/self-driving-cars-are-here-aea1752b1ad0) is now saying, well, we’ll need the government to be involved, and we’ll need to get people to adapt, and… My early blog posts (referenced in the first couple of sentences) were about how it was not going to be just plopping self-driving cars into existing infrastructure and social structures. The cities are going to have to be changed, and people are going to have to adapt. That will not happen overnight. It will need to be incremental, and it will take decades. So the techno dream of self-driving cars replacing drivers willy-nilly in the next handful of years is wildly wrong. This latest Verge story shows that the advocates are starting to see that too.
It’s odd how regularly the hype about GOFAI transforming our daily life outruns the technology. The situation may even be worse than the article states: I’ve read that none of the self-driving systems can cope with a roundabout.
Although things are slowly changing much of current AI still uses the symbol based computational system which assumes a ‘knowable objective truth’. Surely an unreasonable assumption in a world which contains pogo sticks.
Whereas, Dr. Brooks, your early automata showed promise in coping with the unexpected. Yet correlating our unpredictable world with an AI’s behaviour has been awfully slow, taking more than 30 years before a robot was able to learn how to manipulate objects through observation.
I guess, on the bright side, we are now generating the theories needed to build control systems able to cope with a world of surprises. The early Subsumption Architecture, of modules and layers, is being leveraged by innovative thought in such areas as Systems Theory, Nonlinear Dynamics and Perceptual Control Theory. And last year Rupert Young’s paper, A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life, had a great reception.
But I’m saddened by the long wait we’ve had. If thirty years ago just a small part of the massive amount of time and money sunk into autonomous driving had been spent on developing the Subsumption Architecture, then by now we’d have more than a vacuum cleaner to show for it. But that never happened, so instead of safe, fully autonomous, self-driving cars we only have Roomba. That sucks. And it’s why I have no hope of seeing AGI in my lifetime. I guess I’ll have to settle for AV in 5 years.
Self-driving cars were never meant to reduce road accidents. Anyone who thinks so is naive. That’s just marketing; nobody actually cares. They are all about convenience and reducing labor cost by getting rid of drivers in trucks, trains, taxis and so on.
Thus, it’s acceptable if self-driving cars are just about as good as human drivers (good enough to be legal). I think Andrew’s (badly articulated) point is that we cannot expect the car AI to be perfect. It should be able to detect obstructions in the road regardless of what they are, of course (probably, self-driving cars will even be better at this due to radar and other sensors that can see better than a human could), but it can’t magically teleport away if some dumb idiot jumps into the road with a pogo stick from behind a parked bus. We should expect some degree of self-preservation from other road users.
Maybe there is some legitimacy to calling out SV-tech fanatics on their elitism and “technological advancement at all cost”-ideology but imo this article serves mostly to create outrage about self-driving cars without much to back it up.
I personally know many people who work on self driving technology who are absolutely inspired by the safety angle. They see it as a way to reduce the number of road deaths. And I applaud them for that.
But why do we need self-driving cars, to improve road safety? We’d get the same result with more and better public transport, train lines for long distance road journeys and the adoption of cycling for short-distance journeys. These also happen to be transport solutions much less harmful to the environment than, well, building even more cars (self-driving or not).
Why is it that, to improve safety, we need to wait several decades for self-driving tech to mature, when we can use existing technologies right now and achieve the same result?
Stassa is right on point. When did driving my car become such a problem? If people want to do other things while travelling, take public transport or carpool. Right now I can’t even get Alexa to play the song I want on Spotify. And I am going to let this technology drive me around town?
The article is excellent, but the focus on pedestrians excludes other major issues of adoption.
1) Non-self-driving cars on the road. The average vehicle in the US is 11.7 years old, and this average age has been rising since 2008. This means non-self-driving cars will be on the road for at least several more decades, barring a major legislative push like Cash For (non-self-driving) Clunkers. Those cars are themselves a major issue for self-driving cars: they all play by different rules according to their specific drivers.
2) The real intent for self driving cars is replacing chauffeurs. That is what Uber does – chauffeurs for the masses via app, replacing chauffeurs for the 1% working in limo companies, replacing full time chauffeurs for the really, really idle rich. The logical next step is replacing the human entirely. This makes much more sense given that the self-driving cars will be tremendously more expensive than an equivalent non-self driving car: the sensors, computers, software and liability insurance are going to cost in the high 5 digit to low 6 digit range.
3) The maintenance is also going to be interesting – who is responsible for looking after said vehicles when they inevitably break down? One of the reasons we don’t have flying cars is that flying vehicles are immensely more challenging to keep in repair than a ground vehicle. Ailerons, fuel mixes, weather conditioning, control systems, instrumentation – the list goes on and on for what a flying vehicle must have working in order to not crash.
A self driving car is not going to be as bad, but it is going to be much more complex to maintain than a non-self driving car. The sensors are going to be vital – and there will certainly be at least 2 sets. The compute system is also vital, as is the communications. Bent antennas (cellular equivalent) will be more than just an annoyance. Then there’s the extra gear connecting the computing/comms/sensors with the actual car. Between dust, vibration, temperature extremes and the odd ding, I guarantee that the self-driving car is going to require a dramatically different maintenance regime – ironically just as the non-self driving cars are getting so much better (see average life of car on the road above).
See my two links in the first paragraph to a couple of my blog entries on self-driving cars. They cover most of the territory (plus more) that you mention here.
I can see cities changing to adapt to self driving cars. Maybe selected streets would be fenced off to keep pedestrians out. We already keep pedestrians off freeways, some lanes are bus only, etc. Caltrain has fenced off most of their tracks to restrict pedestrian access.
Agree, and it is not going to be as seamless as the enthusiasts have predicted. Changes will happen to our cities for self driving cars, but they are going to be slow, and geographically incremental. So there won’t be as much benefit as people have predicted for a long time, which may slow down adoption even more. Level two driver assist is going to be where a lot of the action will be for the next few years.
If the only way to make self-driving cars work is to hand over more rights to public space to them than cars already take, that is a huge strike against them, one that I would hope most cities would push back on very strongly.
Allowing autonomous vehicles on our streets and highways is placing a whole ton of trust in those writing code for all the various systems involved in safely driving a car.
It should be up to “we the people”, not companies hungry for profit from new inventions, or the courts.
I’ll bet if it were put up for a vote, self-driving would lose big time!
Thanks, interesting post. If I read you right (more from the responses to other comments than in the post itself), you and Prof Confused agree – self-driven cars will require adaptation by all road users and the development of new norms about road use – just as the transition from carts to cars did. And that will take a long time, and in fact may never ‘end’, just as norms about ordinary driving continue to evolve. So is your concern that the advocates of the new haven’t pointed that out before? Is that surprising? Why would they? That’s our job, the media’s and our politicians. I suspect they’re the ones who are asleep at the wheel, if you’ll pardon the pun, not the AV companies.
No, my concern is deeper.
I think they really did think that self driving cars were just around the corner and were going to be on our roads soon, and they would be safe. I’ve been saying for a while (read the references in my first paragraph) that this is not true for many reasons. But the imminence has become expected fact. This has some disastrous consequences:
1. Some cities, in an attempt to be innovative, have been issuing licenses for aggressive out in public experimentation with self driving cars. They have not been tested to anything like the standards we require for braking systems, say, and so the general public is being exposed to thousands of pounds of metal driving at 30mph without any way of knowing whether today’s software load is full of bugs or corner cutting. People who think this is OK have mistakenly believed the hype and claims of the AV enthusiasts.
2. Professor Confused is now pushing the responsibility for safety onto the general public and governments, while his own company is deploying self-driving vehicles on public roads this year (in Frisco, TX).
3. People with level 2 assist features have bought into the hype and are treating their cars as level 4 (lots and lots of stories about Tesla drivers not paying attention, and even third-party systems you can buy which silence the warnings about having your hands off the wheel). This has killed a few Tesla owners. Soon it will start killing random bystanders. Those bystanders did not sign up for the risk of walking around assuming that humans are driving the cars near them, only to discover that the cars are being driven by software systems which are perhaps a new download today. Rapid updates of software are one thing when it is your FB feed; when it is thousands of pounds of speeding metal within a few feet of bystanders, it is another.
I think the pundits have displayed gross irresponsibility. And a few bad accidents will set adoption back by years, where more careful evangelism and measured adoption would indeed save lives over time.
>> The great promise of self-driving cars has been that they will eliminate traffic deaths. Now Professor Confused is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?
Well, to be honest it does sound like “the old bait-and-switch”. If I understand correctly, that’s a marketing tactic where a retailer promises one product or service at a reduced price, then, when a customer declares interest, offers a different product or service of a lower quality than the promised one or at a higher price etc.
I guess though what’s really going on here is that, after the few high-profile fatal accidents, the hype about the inherent safety of self-driving cars is starting to die down, and the people who promoted them as inherently safe in the first place are now seeking to adjust the public’s expectations. Perhaps Professor Confused is not so much confused as concerned: that there might be a bit of a backlash if the safer-than-human self-driving cars we were promised go the way of the flying car, and what we get instead are machines that fail in ways humans never do and still can’t drive in all the contexts humans can.
I think this is what Professor Confused might have meant: a complete hostile takeover of a legislature, as a continuation of a process that has taken city streets from the people and given 80% of them to cars (and their makers): https://www.vox.com/2015/1/15/7551873/jaywalking-history
That history of how jaywalking became a crime is very interesting!
Dutch guy here. Our government is banning cars from city centres and giving roads back to bicycles.
I’m so glad that now even the optimists are realising that full autonomy is not around the corner.
But I don’t find that quote particularly bad, especially if the context is already “co-habitation of people and self-driving cars”.
We need to solve the problem of car/pedestrian communication. One possible solution is to tell people “crossing in front of a self-driving car is just too dangerous, don’t do it”.
I suppose his suggestion could be worded better, e.g. “governments could make special roads marked for self-driving” or “governments can set a standard way to mark which cars are autonomous…”
I agree that the phrasing is really bad but the basic “people need to adapt” concept is correct. Finally the optimists are realising it.
Wonderful article that clearly separates scaling of a technology from an advanced demonstration of it. To the myriad issues around self-driving car deployment, add this one: inequality. If people need to “adapt”, is the onus going to be on those who live in the least wealthy neighborhoods or who cannot afford a self-driving car ride in the future? Early adopters tend to be in wealthier districts, and they benefit both from the tech and from the profits earned from deploying it. Alternate modes of using the road are far more common in less wealthy neighborhoods, where we have failed to maintain the infrastructure, including sidewalks. I think we are overlooking the amount of investment required to fairly impose stricter codes of conduct in the streets. Now let us think beyond Palo Alto and even the satellite cities in the Bay Area. How about India or Brazil? Whole populations will be left out of this change simply because it is too expensive to update the infrastructure to have proper sidewalks, lanes, etc. What are we going to do? This requires changes in what we think about in terms of taxation, etc. There are far too many societal variables to get this done in a couple of years. Enforcing much stricter monitoring of jaywalking might also end up infringing on our right to privacy and civil liberties, so it will not be straightforward to impose.
I think we are starting to see the limitations of pure “CS” thinking when we want to apply it outside of digital-only domains. Medicine, transportation, energy, etc. have a series of established operating safety regulations, critical service requirements and infrastructure rules that combine engineering thinking with policy, economics, social norms and more. Reducing all that to “humans need to be compliant with my tech” is beyond naive; it is perhaps even cruel, and at a minimum very misleading. I am concerned about how much governments will spend on this boondoggle, and about the alternative uses for that spending. For example, wouldn’t it be better to train a million people to be proficient in operating AI-enabled manufacturing equipment, or in programming AI, than to prepare a couple of cities to suddenly enable AVs by restricting citizens or enforcing fines on them?
As a society, I would like journalists to stop catering to tech companies and their evangelism alone. These companies operate in a society that has supported their existence. We owe it to citizens to justify the economic and even moral choices.
By the way, Prof Brooks, it would be great if you could offer a similar perspective on automated manufacturing. Do you think we are seeing a renaissance in the area? Are the changes there easier to attain? What do they represent for those who work on the factory line?