A very recent article follows in the footsteps of many others talking about how the promise of autonomous cars on roads is a little further off than many pundits have been predicting for the last few years. Readers of this blog will know that I have been saying this for over two years now. Such skepticism is now becoming the common wisdom.
In this new article at The Ringer, from May 16th, author Victor Luckerson reports:
For Elon Musk, the driverless car is always right around the corner. At an investor day event last month focused on Tesla’s autonomous driving technology, the CEO predicted that his company would have a million cars on the road next year with self-driving hardware “at a reliability level that we would consider that no one needs to pay attention.” That means Level 5 autonomy, per the Society of Automotive Engineers, or a vehicle that can travel on any road at any time without human intervention. It’s a level of technological advancement I once compared to the Batmobile.
Musk has made these kinds of claims before. In 2015 he predicted that Teslas would have “complete autonomy” by 2017 and a regulatory green light a year later. In 2016 he said that a Tesla would be able to drive itself from Los Angeles to New York by 2017, a feat that still hasn’t happened. In 2017 he said people would be able to safely sleep in their fully autonomous Teslas in about two years. The future is now, but napping in the driver’s seat of a moving vehicle remains extremely dangerous.
When I saw someone tweeting that Musk’s comments meant that a million autonomous taxis would be on the road by 2020, I tweeted out the following:
Let’s count how many truly autonomous (no human safety driver) Tesla taxis (public chooses destination & pays) on regular streets (unrestricted human driven cars on the same streets) on December 31, 2020. It will not be a million. My prediction: zero. Count & retweet this then.
I think these three criteria need to be met before someone can say that we have autonomous taxis on the road.
The first challenge, no human safety driver, has not been met by a single experimental deployment of autonomous vehicles on public roads anywhere in the world. They all have safety humans in the vehicle. A few weeks ago I saw an autonomous shuttle trial along the paved beachside public walkways of the beach on which I grew up, in Glenelg, South Australia, where there were “two onboard stewards to ensure everything runs smoothly” along with eight passengers. Today’s demonstrations are just not autonomous. In fact, in the article above Luckerson points out that Uber’s target is to have their safety drivers intervene only once every 13 miles, but they are way off that capability at this time. Again, hardly autonomous, even if they were to meet that goal. Imagine the car that you are driving breaking down once every 13 miles–we expect better.
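To put that once-every-13-miles target in perspective, here is a small back-of-the-envelope sketch. It assumes, purely for illustration, that interventions arrive independently at a constant per-mile rate (a Poisson model), which is certainly a simplification:

```python
import math

def p_intervention_free(trip_miles, miles_per_intervention=13.0):
    """Probability of completing a trip with no safety-driver
    intervention, under an illustrative Poisson model with the
    stated mean spacing between interventions."""
    return math.exp(-trip_miles / miles_per_intervention)

# Expected interventions on a 100-mile trip at one per 13 miles:
expected = 100 / 13.0          # about 7.7 takeovers
# Chance of an intervention-free 100-mile trip under this model:
p = p_intervention_free(100)   # about 0.0005, i.e. 0.05%
print(expected, p)
```

Even meeting that stated target, a driver could expect to take over several times on a modest highway trip, which is the sense in which such a system is hardly autonomous.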
And if normal human beings can’t simply use these services (in Waymo’s Phoenix trial only 400 pre-approved people are allowed to try them out) and go anywhere that they can go in a current day taxi, then really the things deployed will not be autonomous taxis. They will be something else. Calling them taxis would be redefining what a taxi is. And if you can just redefine words on a whim there is really not much value to your words.
I am clearly skeptical about seeing autonomous cars on our roads in the next few years. In the long term I am enthusiastic. But I think it is going to take longer than most people think.
In response to my tweet above, Kai-Fu Lee, a very strong enthusiast about the potential for AI, and a large investor in Chinese AI companies, replied with:
If there are a million Tesla robo-taxis functioning on the road in 2020, I will eat them. Perhaps @rodneyabrooks will eat half with me?
I readily replied that I would be happy to share the feast!
Luckerson talks about how executives, in general, are backing off from their previous predictions about how close we might be to having truly autonomous vehicles on our roads. Most interestingly he quotes Chris Urmson:
Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”
Now let’s take note of this. Chris Urmson was the leader of Google’s self-driving car project, which became Waymo around the time he left, and is the CEO of a very well funded self-driving start up. He says “30 to 50 years”. Chris Urmson has been a leader in the autonomous car world since before it entered mainstream consciousness. He has lived and breathed autonomous vehicles for over ten years. No grumpy old professor is he. He is a doer and a striver. If he says it is hard then we know that it is hard.
I happen to agree, but I want to use this reality check for another thread.
If we were to have AGI, Artificial General Intelligence, with human level capabilities, then certainly it ought to be able to drive a car, just like a person, if not better. Now a self driving car does not need to have general human level intelligence, but a self driving car is certainly a lower bound on human level intelligence. Urmson, a strong proponent of self driving cars says 30 to 50 years.
So what does that say about predictions that AGI is just around the corner? And what does it say about it being an existential threat to humanity any time soon? We have plenty of existential threats to humanity lining up to bash us in the short term, including climate change, plastics in the oceans, and a demographic inversion. If AGI is a long way off then we can not say anything sensible today about what promises or threats it might bring, as we will need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we can not yet predict.
Do people really say that AGI is just around the corner? Yes, they do…
Here is a press report on a conference on “Human Level AI” that was held in 2018. It reports that a substantial fraction of respondents to a survey at that conference said they expected human level AI to arrive in 5 to 10 years. Now, I must say that looking through the conference site I see more large hats than cattle, but these are mostly people with paying corporate or academic jobs, and a good number of them think this.
Ray Kurzweil still maintains, in Martin Ford’s recent book, that we will see a human level intelligence by 2029–in the past he has claimed that we will have a singularity by then as the intelligent machines will be so superior to human level intelligence that they will exponentially improve themselves (see my comments on belief in magic as one of the seven deadly sins in predicting the future of AI). Mercifully the average prediction of the 18 respondents for this particular survey was that AGI would show up around 2099. I may have skewed that average a little as I was an outlier amongst the 18 people at the year 2200. In retrospect I wish I had said 2300 and that is the year I have been using in my recent talks.
And a survey taken by the Future of Life Institute (warning: that institute has a very dour view of the future of human life, worse than my concerns of a few paragraphs ago) says we are going to get AGI around 2050.
But that is the low end of when Urmson thinks we will have autonomous cars deployed. Suppose he is right about his range. And suppose I am right that autonomous driving is a lower bound on AGI, and I believe it is a very low bound. With these very defensible assumptions then the seemingly sober experts in Martin Ford’s new book are on average wildly optimistic about when AGI is going to show up.
AGI has been delayed.
59 comments on “AGI Has Been Delayed”
Using the word “delayed” implies that an action has caused the timeline to be pushed back. I don’t think that is the case here. Rather, the growing consensus is now more realistic — in other words, it’s closer to what you have always maintained!
AGI has not been delayed; the public’s perception of AGI’s arrival is closer to reality.
That title was a self-serving attempt at irony… only the perception of when it might arrive has been delayed.
I think full autonomous cars will require an almost complete AGI, not a lower bound one, unless roads are modified to accommodate these cars.
Two points here. I think we will get autonomous cars by modifying our infrastructure and that in itself will take a long time. So I actually think Urmson is being optimistic about true level 5. We will get autonomy in his time frame but only in modified geographies. As to whether true level five requires full AGI, I am not so sure, but I may be too optimistic on that one.
“SDC” proponents are quick to write off human drivers as being inefficient, distracted, etc., etc. On the contrary, it’s the SDCs that are blind, deaf, unthinking, unfeeling, unaware, non-existent, pretending to be otherwise, pretending to be human. I’ve maintained from Day One [since the DARPA Stanford challenge days] that an SDC [fully autonomous, no steering wheel for human takeover, etc.] is sci-fi and fantasy. In the current systems (Waymo, Tesla, BMW, whatever), there is no ‘there’ there.
I largely agree. See today’s story on Tesla’s lane changing. https://www.washingtonpost.com/business/2019/05/22/teslas-automatic-lane-changing-feature-is-far-less-competent-than-human-driver-consumer-reports-says/
Note that Tesla’s system (and probably all systems) does not notice brake lights or turn signals, but instead tries to drive only based on what it perceives about where and what trajectory other cars are instantaneously doing.
In another comment I recommended to a Tesla fan that they keep their hands on the wheel in order to stay alive. More worryingly, they need to keep their hands on the wheel so that other, innocent drivers, who did not consent to driving amongst robots at 60mph, can also stay alive.
In the fullness of time we will have capable self driving cars. But not next year, and likely not without all sorts of changes to infrastructure and communications between vehicles (whether peer to peer or through a centralized broker). It is going to take a long time.
Listening to the media one could be forgiven for believing that we were on the verge of a vehicle with level 5 autonomy.
As you know, the leader of the pack is Tesla, with 500K+ vehicles on the road, each one collecting precious data to feed into its self-improving deep learning algorithms.
I watched the official Tesla Self Driving video:
And thought wow!!! Where do I sign up?
But then I watched an independent Tesla lover testing out his model 3 in the UK:
And I was more than a little disappointed. The car wasn’t anywhere near as clever as I assumed it would be. There’s no way you could drive for more than about 1/2 mile on a UK A-road without having to intervene to save your own life.
There’s a way to go before the dream is realised.
Have you considered the effect raised pavement markers (https://en.wikipedia.org/wiki/Raised_pavement_marker) could have on this project? A coordinated effort by the private and public sector could be very worthwhile.
Yes, lots of modifications to our road ways will speed the arrival of autonomous vehicles. But that issue is orthogonal to the argument I am making here.
70% AI accuracy means 30% inaccuracy. That’s not hard to achieve, but not good. 95% AI accuracy means 5% inaccuracy. That’s where we’re at today. Also not good. It’s diminishing that 5% inaccuracy to 0.0000000000005% that’s gonna be the hard part! You are right Rodney, maybe another 20 years for autonomous cars? However, why are we working so hard for full autonomy… when we can just start placing beacons on the road itself and reduce the problem to partial autonomy? But no matter what the solution, the human element is still a big issue.
Again, I agree that we will likely modify our roadways to speed the arrival of autonomous vehicles. But that is orthogonal to the argument I am making here. I am saying that if well placed people are saying that true level 5 (no modifications to roadways) is many decades away then surely AGI must be even longer away. I have talked about how I think adoption will play out in other posts. This post is not about that at all.
I agree with the observation that a lack of AGI is preventing self-driving cars.
A car based solely upon neural networks and statistical learning cannot figure out driving conditions and cannot deduce what to do.
My own work in symbolic logic and knowledge based AI techniques has been kept on track by what Rodney Brooks has written – with the exception that I believe in hierarchical contextual world models. I am following Alan Turing’s guidance to construct the mind of a child and to proceed to educate it via dialog.
I must not have been very clear in my writing. I did not say at all that a lack of AGI is preventing self-driving cars. Whether that is true or not is not part of what I was trying to argue in this post.
I agree with all of that. A speculation though – that there exists some method of cracking Moravec’s Paradox from the bottom, aka replicating evolution/environment interactions in A-Life. That sounds as if I have a pet plan to declare – I don’t. The concept is monstrous.
Do you have a speculation or a different take?
I spent a large part of my academic career (as distinct from my many companies) working on just such bottom up approaches. You can explore my publications at http://people.csail.mit.edu/brooks.
> The Artificial Life Route to Artificial Intelligence: Building Embodied Situated Agents
> Cambrian Intelligence: The Early History of the New AI
Fantastic, thank you.
It’s a relief to talk to somebody who knows what they’re talking about. I know we all have our pet theories and that’s fine, informed speculation is fine – but some days I think there are about ten to a hundred people working on or thinking about A.I., and the rest are contributors to fan-fiction.net who studied the fundamentals at tvtropes.org.
I think Elon Musk is himself aware that his predictions are too optimistic, but it is a war against the market, against the short-termism of traders. It’s an economic game.
On the other hand, Ray Kurzweil still has more credibility given the number of predictions made: 87% success rate
Singularity is still science fiction for the current state of technology, but I have a question to ask:
Considering the progress in the field of NLP (Google Duplex, Xiaoice, Cainiao of Alibaba, GPT-2, etc.), do you think it is possible that by 2029 a machine will pass the Turing test?
If your answer is not far from that date then I think we should start taking Ray seriously.
See my 25,000, or so, word response to this in my four essay series starting at https://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/. It is a complex and ill-formed question, and so there is much to say to explain why the zeitgeist has it wrong. That’s why it took so many words… [And no, I did not write this to respond to you personally, but I get this question a lot…]
What if all autonomous cars on the roads were controlled by one regulatory system?
So all the cars’ directions and locations are calculated and shared with the system for control…?
It is just a vague thought, by the way.
Thank you for all the great articles, as always.
Scroll up to see my responses to earlier posters about changing the world. Same answer: yes, we will deploy autonomous cars by changing how we think about them operating, not the current thought that they will be independent one for one replacements of human driven cars on today’s road infrastructure. But that is irrelevant to the argument I was making in this blog post.
It’s funny–one of the last places I worked constantly told us to temper our expectations of the future. The result almost always meant being overly conservative. Even my original enthusiasm would usually be off by a year or three. You are right that Musk’s prediction is too soon. Yet, I prefer it to the next-to-never conservative positions of most! Musk’s predictions inspire others to accept the inevitability of a thing in nearer terms.
When one has authority in the world one needs to be careful about what one says, so that people do not take actions based on wrong information. That includes having people take their hands off the steering wheel and dying.
So I won’t get to see AGI or GOFAI (good old-fashioned artificial intelligence)
in my already superannuated lifetime? (I saw Dwight David Eisenhower once, when he was running for the presidency and my Army father took us kids to catch sight of Ike.) And here I have been rooting for AI all along.
I went to a Marvin Minsky lecture back around 1983, where so many people came in that Professor Earl B. Hunt of AI fame had to sit on the floor. To my shame, I fell asleep in front of http://en.wikipedia.org/wiki/Carver_Mead as he lectured about his artificial retina. I met the snarky Rodney Brooks at Seattle’s University Book Store when he was signing copies of “Flesh and Machines: How Robots Will Change Us” on February 28, 2002 — seventeen years ago!
Snarky? Snarky? Oh dear…
Just one question sir, have you sat in a Tesla in Autopilot mode on the Mass Pike from Natick to Boston, set to 75 mph?
Happy to give you a ride so you can get an appreciation for where we have come and where we are going.
Yes, I am very well aware of how far we have come. I was at the very first announcement, by Ernst Dickmanns at the 1987 ISRR in Santa Cruz, of autonomous freeway driving at 90Km/h on the autobahn outside Munich. And I have followed every development since. The initial demonstration was monumental, and the progress has been breathtaking.
However, average case performance does not include edge case performance, and that is what is delaying deployment. So your comment is completely irrelevant to my argument.
And I recommend that you keep your hands on the wheel. I would prefer that you live.
My question is: have you ever had this much real-world driving data at hand? If not, how do you predict what the limitations are, based on that amount of real-world data?
That is not the point of this post. You are upset about something I didn’t say.
I can’t help but latch onto this:
“Calling them taxis would be redefining what a taxi is.”
This point, Rodney, was proven in 1997 when, I believe, the term AGI was coined. It used to be clear what “intelligence” means: it’s what humans have, to varying degrees. Over time, but especially since the 1990s, the term “artificial intelligence”, which originally was the name of a domain of science and research, morphed into a product label. When prefixed by “artificial”, the term “intelligence” has been castrated to mean whatever pattern matching software is able to do. I hope you don’t mind my continuing with your quote “…if you can just redefine words on a whim there is really not much value to your words.” Today’s “ai” is nothing more than enhanced electronic data processing, eEDP, if you will.
From eEDP to Intelligence, it is a long way indeed.
If you read my future of AI and robotics series on this blog (book length in total) you will see that I address the bastardization of the term AI through AGI and ASI. I like to continue to work with the original definitions from 1956 (actually written in 1955, and not really definitions at all…).
Chris Urmson thinks limited deployments of self-driving cars will happen within 5 years:
“I think within the next five years you’ll start to see kind of the early small scale deployments of this technology, [and] once we get to that it will start to scale relatively quickly. But this is a change that’s going to scale…over decades, not over…weeks.” (https://www.pcmag.com/article/367343/aurora-is-not-building-autonomous-cars-its-building-safe-d)
Another interview where Urmson says essentially the same thing:
“Yes, it can happen. I think you’re going to see small-scale deployments in the next five years, and then it’s going to phase in over the next 30 to 50 years.” (https://www.theverge.com/2019/4/23/18512618/how-long-will-it-take-to-phase-in-driverless-cars)
Another recent quote from Urmson:
“I think over the next five years we’re going to see tens of thousands of these vehicles on the road. Then the story we’ve been telling about the increased opportunity being demonstrated, we’ll see the capital investment to scale the production of these vehicles.” (https://saemobilus.sae.org/automated-connected/feature/2019/03/view-from-a-visionary-chris-urmson)
“Our partners would like to see a 2020 or 2021 kind of time frame. So, we’re moving as quickly as we can to support that. At that time frame, we’re talking tens of thousands of vehicles, which is huge compared to the thousandish-maybe vehicles that are around today. But that will just be the beginning of the deployment, when we think about impact in the world. That was part of the thinking with Aurora was it’s gonna take so many years to get the technology to work, and it takes a similar kind of number of years to build the cars that the technology is going to come into.” (https://www.theatlantic.com/technology/archive/2018/03/the-man-with-the-most-valuable-work-experience-in-the-world/556772/)
So, Urmson’s 2020/2021 timeline for when the technology is ready to deploy is not far off from Elon Musk’s 2020 timeline.
You are mining words to find a story that is not in them. Urmson is clear that he thinks level 5 is 30 to 50 years away. That does not mean that there won’t be incremental deployments between now and then; indeed there must be. The early deployments in the next five years will not be level 5. The current plans for deployments are in limited geographies, and usually in closed worlds where the problem is reduced from dynamic to static so that the vehicles can simply stop in their tracks at any time when they are confused. This is not one for one replacement of driverful cars. But it is a necessary set of steps as part of a decades long process. You can wish all you want that level five autonomy is coming any time soon, but it ain’t so. And precisely zero of your quotes suggest otherwise. [Oh, and we are going to go from precisely zero (worldwide) vehicles without a safety driver to tens of thousands without safety drivers in the 2020 to 2021 time frame? No. Not going to happen.]
Urmson’s quotes suggest he is forecasting tens of thousands of Level 4 vehicles without safety drivers within the next 5 years. To me, deployments of Level 4 taxis in select, geofenced areas are a huge milestone because that’s when we start to see the economic and safety impact of driverless vehicles. For residents of a city where Level 4 taxis are widely available, there will be a meaningful impact on their lives.
It’s possible Urmson will clarify his remarks in the future and it will turn out his goals/predictions are different from my interpretation. So, I’m not certain this is what he means.
We overestimate in the short term and underestimate in the long term. In 2014 AI experts said that a Go-beating AI was at least 10 years away. So I am much more optimistic.
The other aspect to consider is swarm intelligence – vehicles talking to vehicles all of them equipped with some form of narrow driving AI.
As I have said in many other of my replies, your points are irrelevant to my argument–they are orthogonal.
And let me expand on what I mean by that.
My argument is of the form that A is strictly a superset of B, and we see here good reason to think that B is going to take 30 to 50 years, if not longer. Therefore A will take at least that long, and perhaps much longer.
Your argument is of the form that we don’t need to do B, but instead humanity could do C. Whether or not that is true is irrelevant to my argument, as it does not rely on comparing A to C.
What are the specific technical challenges for self-driving cars that present-day machine learning can’t solve? Computer vision? Behaviour prediction? Path planning/driving policy? I am curious to know where specifically folks think the difficulty lies, and why they think that fundamental advances in AI are needed, rather than more training data, larger neural networks, and incrementally better neural network architectures.
With regard to computer vision, how seriously do we take the fact that neural networks can outperform humans on certain 2D image classification tasks? In particular, the ImageNet challenge?
With regard to the path planning/driving policy part of the problem, I’ve personally been impressed by DeepMind’s AlphaStar and OpenAI Five as proofs of concept of imitation learning and reinforcement learning for taking complex, tactical actions in an uncertain, partially observable, largely unpredictable 3D environment populated by multiple agents. In what ways is driving a more difficult problem for imitation learning and reinforcement learning than StarCraft or Dota?
Tesla is in a unique position to do large-scale imitation learning and real world reinforcement learning. Tesla has around 500,000 cars with its latest sensor suite, which are driving at an annualized rate of about 6 billion miles per year, or about 20,000 years of continuous driving per year. This is approximately 500x more than Waymo, which I believe is the next closest company in terms of real world miles. It seems possible that some problems that can’t be solved with 1/500th as much data might be solvable with 500x more data. I have a hunch that Tesla’s large-scale fleet learning approach may be successful where small-scale approaches struggle.
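Those fleet numbers are easy to sanity-check; the sketch below uses the comment’s own figures plus an assumed 35 mph average speed (my assumption for illustration, not a reported number):

```python
cars = 500_000
fleet_miles_per_year = 6e9   # annualized fleet miles, figure from the comment
avg_speed_mph = 35.0         # assumed average speed, illustration only

hours_driven = fleet_miles_per_year / avg_speed_mph
years_continuous = hours_driven / (24 * 365)   # roughly 20,000 years of driving
miles_per_car = fleet_miles_per_year / cars    # 12,000 miles per car per year

print(round(years_continuous), miles_per_car)
```

The per-car figure of about 12,000 miles per year is close to typical annual mileage for a passenger car, which is a useful plausibility check on the comment’s aggregate numbers.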
What’s true for imitation learning and reinforcement learning is also true for behaviour prediction, since Tesla can exploit automatic labelling. (See here: https://youtu.be/A44hbogdKwI)
With regard to computer vision, is there skepticism from experts that the computer vision problems for self-driving cars can be solved by existing methods in the near term? It’s true that a neural network can’t be expected to recognize any object that could ever exist on Earth, and most objects that exist on Earth could end up in the middle of a road somewhere. But there are two strong counterpoints here:
1) As I understand it, there are ways for a self-driving car to detect that an unknown object is in the road. For instance, a neural network recognizes road, and it recognizes when there is a missing patch in the middle of the road (because an object is blocking the cameras from seeing the road).
2) Certain exceedingly rare events may confound self-driving cars forever, but as long as they are exceedingly rare this may be acceptable. Most car crashes happen because humans make a relatively small number of common mistakes. If self-driving cars are far better than humans at avoiding the common mistakes, even while unable to handle exceedingly rare events, the net result may be that self-driving cars crash at a much lower rate overall. (The same argument applies to path planning/driving policy.)
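The net-rate argument in (2) can be made concrete with deliberately made-up numbers; none of these figures are real crash statistics, they only show the shape of the trade-off:

```python
# Hypothetical human crash rates per million miles, split by cause:
human_common = 4.0   # crashes from common mistakes (made-up figure)
human_rare = 0.2     # crashes from exceedingly rare events (made-up figure)

# A hypothetical self-driving system: 10x better on common mistakes,
# but 5x worse on the rare events it cannot handle.
sdc_common = human_common / 10   # 0.4
sdc_rare = human_rare * 5        # 1.0

human_total = human_common + human_rare   # 4.2
sdc_total = sdc_common + sdc_rare         # 1.4

# Even while losing badly on the rare events, the overall rate drops:
print(sdc_total < human_total)   # True
```

Whether the rare failure modes really do stay rare enough for this arithmetic to hold is precisely where the experts quoted here disagree.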
At the moment, I’m not sure what to think. Some experts like François Chollet are pessimistic: https://twitter.com/fchollet/status/1128064656541605888?s=21
Others like Wojciech Zaremba are optimistic: https://twitter.com/woj_zaremba/status/1127986411917856768?s=21
I would love to hear more in-depth debates among machine learning experts about the specific challenges self-driving cars face.
I personally find it helpful to break the problem down into three parts: computer vision, behaviour prediction, and path planning/driving policy. Each of these parts can then be further broken down into sub-parts, such as object detection and semantic segmentation of driveable roadway under the computer vision category. What parts or sub-parts do we agree are solvable in the near term with present-day machine learning, and what parts or sub-parts do some experts think require fundamental advances in AI? Why? And why do other experts disagree?
Recently, Alex Kendall — a Cambridge computer vision and robotics researcher and the CTO of Wayve, a self-driving car startup — wrote a blog post about applying reinforcement learning and imitation learning to robots: https://alexgkendall.com/reinforcement_learning/now_is_the_time_for_reinforcement_learning_on_real_robots/
His closing remarks have stuck in my head: “There is a huge opportunity to work on A.I. for robotics today. Hardware is cheaper, more accessible and reliable than ever before. I think mobile robotics is about to go through the revolution that computer vision, NLP and other data science fields have seen over the last five years.
Autonomous driving is the ideal application to work on. Here’s why; the action space is relatively simple. Unlike difficult strategy games like DOTA, driving does not require long term memory or strategy. At a basic level, the decision is either left, right, straight or stop. The counter point to this is that the input state space is very hard, but computer vision is making remarkable progress here.”
Jeff Schneider — a robotics professor at Carnegie Mellon and a former engineering lead at Uber ATG — said something similar in a recent talk on self-driving cars. After he talked about the difficulty of finding enough training examples for reinforcement learning, he said:
“But not everything is bleak. The problem is not quite as hard as some of the reinforcement learning problems that we’re used to. So, with self-driving cars you have dense rewards, you have a constant reward signal, you have very modest time horizons — you really only have to look ahead a few seconds to drive well — and it’s the rare events that you’re trying to go after. It’s not doing a maze. It’s not playing Montezuma’s Revenge and trying to find the magic incantation and sequence of hallways to go through that hits the sparse reward at the end. It’s not that at all. It’s very dense rewards, short time horizons. We just have to do it efficiently because it’s hard to get data for it.”
(Source is 57:53 in this video: https://youtu.be/jTio_MPQRYc?t=57m53s)
You are getting into the world of trying to predict the results of scientific research. It took 30 years of effort to get from back propagation to deep learning. Speculating about whether specific current research projects will lead to real results is a fool’s errand. There are always thousands of ongoing ideas and thoughts in research. The vast majority fail. Trying to predict which will bring real results is a losing strategy. Instead we must fund thousands of them and eventually see who the two or three winners are. Wanting, as you do, to see a speculative path forward is bound to be wrong. Wisdom comes from decades of hard work and tens of thousands of failures.
“What are the specific technical challenges for self-driving cars that present-day machine learning can’t solve? ” Most of them.
“You are getting into the world of trying to predict the results of scientific research.”
On the contrary, I am wondering about the debate between A) machine learning experts who think fully autonomous driving can be solved with no further scientific breakthroughs and B) machine learning experts who think fully autonomous driving requires new scientific breakthroughs.
(A) think all the scientific research problems have been solved, and the remaining work is “just” engineering using the same foundational ideas that have proven so successful with the various ImageNet challenge winners, Google Translate, OpenAI GPT-2, DeepMind’s AlphaStar, OpenAI Five, and so on.
(B) think existing foundational ideas are insufficient, and that new foundational ideas will need to be invented/discovered before fully autonomous driving will be a feasible engineering project.
The example I gave of a person who seems to be among (A) is Wojciech Zaremba. François Chollet was the example of (B). Folks like Andrej Karpathy at Tesla and Alex Kendall at Wayve also fall into (A), but of course they do because if they didn’t they probably wouldn’t be leading autonomous vehicle engineering projects.
Beyond just compiling a list of who falls into which camp, I am curious about why folks in the two camps disagree. It would be wonderful to see a debate in a written, long-form format (not on Twitter!) where they can cite results from papers and so on. If experts disagree, the truth can’t be so obvious, and I’m personally intensely curious to see them drill down into specifics and offer empirical evidence or theoretical reasons why they think we need or don’t need new foundational ideas in AI to solve computer vision, behaviour prediction, and path planning/driving policy for fully autonomous driving.
For instance, Zaremba and Chollet seem to make exactly opposite assertions.
Zaremba says: “I am a believer in Tesla’s approach to self-driving cars. They use data from their 0.5M cars. Such data should be enough to cover every possible case. This approach has to work.”
Chollet says: “…when it comes to difficult real-world problems like self-driving, “lots of data” is very different from “infinite data”. The long tail is fat… There is no realistic amount of data that covers ‘everything you can encounter’ in the real world.”
I am curious to know why they (apparently) have such opposite intuitions, and how to surface relevant evidence or at least think more systematically about these competing intuitions.
(Strictly speaking, Chollet doesn’t say that you need to collect data on “everything you can encounter” to solve self-driving cars, and when he says a “realistic amount of data”, I don’t know for sure that he means the amount of data you can collect with a fleet of hundreds of thousands or millions of cars like Tesla, as opposed to a fleet of just a few hundred like Waymo, Cruise, and everyone else.)
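One way to make the “fat tail” intuition concrete is a toy simulation: if driving-scenario types follow a heavy-tailed (Zipf-like) frequency distribution, then even a large training sample keeps meeting scenario types it has never seen before. The distribution and all the numbers below are illustrative assumptions, not real driving data:

```python
import random

def novel_rate(n_types=100_000, n_train=50_000, n_test=5_000, s=1.1, seed=0):
    """Fraction of test scenarios whose type never appeared in training,
    assuming scenario types follow a Zipf-like (heavy-tailed) distribution."""
    rng = random.Random(seed)
    weights = [1.0 / (k ** s) for k in range(1, n_types + 1)]
    train = rng.choices(range(n_types), weights=weights, k=n_train)
    test = rng.choices(range(n_types), weights=weights, k=n_test)
    seen = set(train)
    return sum(t not in seen for t in test) / n_test

# Tenfold more "training miles" shrinks, but does not close, the novelty gap.
for n_train in (5_000, 50_000):
    print(n_train, round(novel_rate(n_train=n_train), 3))
```

Under these assumed parameters the unseen fraction falls only slowly as data grows, which is the shape of Chollet’s argument; whether real driving has a tail this fat is exactly the empirical question the two camps disagree about.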
I would also be very interested in this debate you proposed, Trent. On the question of whether a huge volume of driving data would be sufficient, I think there are many extremely difficult problems to solve. For example, what about all the very dangerous maneuvers drivers made that luckily did not result in crashes because other drivers avoided them? I see no easy way to identify those actions.
I have been intrigued by this company that has divided the entire planet into 3-meter-square blocks and assigned a three-word combination to each block. It is fascinating to me, and I wonder if a solution might lie not in roads and addresses but in navigating a sequence of these blocks.
Yes, it’s a neat algorithm, but the only relation to automated driving is that they both involve locations. The complexity of automated driving is being able to steer correctly from second to second, and knowing when to brake. Taking the right route has already been solved by every major mapping app and car navigation software.
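For readers curious about the addressing scheme itself: the real commercial algorithm is proprietary, but the general idea can be sketched with a plain equirectangular grid and a hypothetical word list. Note that a real list needs roughly 45,000 words, since 45,000³ ≈ 9.1×10¹³ just exceeds the ~9×10¹³ three-metre cells:

```python
CELL_M = 3.0                        # target cell size in metres
EARTH_CIRCUM_M = 40_075_017.0       # equatorial circumference
ROWS = int(EARTH_CIRCUM_M / 2 / CELL_M)   # latitude bands, pole to pole
COLS = int(EARTH_CIRCUM_M / CELL_M)       # longitude cells (at the equator)

# Tiny demo list: with only 8 words, addresses are not unique; a real
# deployment needs ~45,000 words so that len(WORDS)**3 covers every cell.
WORDS = ["apple", "river", "stone", "cloud", "amber", "pixel", "tiger", "lemon"]

def cell_index(lat, lon):
    """Map a (lat, lon) in degrees to a single integer cell index."""
    row = min(int((lat + 90.0) / 180.0 * ROWS), ROWS - 1)
    col = min(int((lon + 180.0) / 360.0 * COLS), COLS - 1)
    return row * COLS + col

def three_words(lat, lon, words=WORDS):
    """Name a cell with three words: the base-len(words) digits of its index."""
    n = len(words)
    i = cell_index(lat, lon)
    a, rem = divmod(i, n * n)
    b, c = divmod(rem, n)
    return (words[a % n], words[b], words[c])
```

This naive grid makes cells narrower toward the poles; a production scheme would vary the column count per latitude band to keep cells near 3 m × 3 m.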
Are you sure you’re not thinking linearly, Rodney?
Ray Kurzweil’s dates seem very close, but I got the impression he put a lot of thought and effort into working them out. How did you work out 2300?
Thanks for sharing your thoughts.
My impression is that Ray started with the year 2029 and worked backwards to get a justification for the year. He thinks he can survive on his medical approach until 2029, but then desperately wants to upload his consciousness to a super intelligent machine so that he will have eternal life. As I have said before, this is techno-religion: eternal life without the inconvenience of having to believe in God. Every time I see him I tell him that he, along with every human in history, is going to die. He hates that idea.
I chose 2300 to illustrate the absolute inanity of predicting dates for when we will understand intelligence.
Someone earlier accused me of being snarky. I do not understand why that would have occurred to them.
The question I have is: what are the plans to achieve AGI? There are clearly people who think it is possible to get to AGI from where we are now, even soon, in the course of a human lifetime- but how do they expect this to happen?
A look at the HLAI 2018 conference programme gives me a few hints: HLAI is actually three conferences, the Artificial General Intelligence Conference (AGI), the International Conference on Biologically Inspired Cognitive Architectures (BICA), and the Neural-Symbolic Learning and Reasoning Conference (NeSy).
So it seems that at least two of the plans involve some kind of architecture inspired by biological brains, probably the human brain, and the integration of neural networks with symbolic AI. The rest (i.e. the content of the AGI sub-conference) seems to be speculative discussions about how soon AGI can arrive and how it will affect society.
Are the subjects of BICA and NeSy really new ideas? “Biologically inspired cognitive architectures” really sounds like good, old-fashioned connectionism. Neuro-symbolic integration has always seemed to me like a good idea, but I’m pretty sure it’s not anything brand new either: I can find results for “neuro symbolic” on DBLP that date back to 1992. I’m aware of some newer work and renewed interest in symbolic AI by statistical machine learning researchers, but I’m not convinced many of those folks have the necessary background to figure out how to bring the two ends together.
In any case, I don’t see any new, fresh ideas here, and I don’t see a clear road map. I see just a jumbled mess of old ideas sometimes combined with each other and with generous servings of hope that if enough secret ingredients are mixed up together, something entirely new will magickally arise.
But I might be wrong, in my inexperienced arrogance, so I would like to ask: is there a plan? Does anyone have any idea how to get to AGI from where we are at the moment? Is anyone actually, actively trying to achieve this goal?
That is my frustration with the AGI crowd. They may be sincere, as another reader has commented here, but I cannot see any plan from any of them. Rather, most of their papers (as I have documented in an earlier post where I did an exhaustive analysis of many years of proceedings of one of the AGI conference series) are about how long it will take. A few other papers are small technical results, and I don’t say that dismissively: all really good progress comes from tens of thousands of small technical results piled on top of each other over decades.
But I see no plans on how to achieve AGI, and no new ideas from “AGI enthusiasts” on how to get there. No coherent intellectual ideas about how to proceed. Hinton always had a coherent intellectual approach and goal. And he succeeded ultimately after decades of perseverance by him and his followers within his intellectual framework.
It is extraordinarily hard to make progress without a coherent intellectual framework, and so far AGI has not come up with one.
“Someone earlier accused me of being snarky. I do not understand why that would have occurred to them.”
That was me a few days ago in the comments up above. I first noticed your “snarkiness” about a year ago when you made snarky remarks about the quite sincere AGI people in Prague. In your various blog posts, you are snarky about proposed AGI solutions with remarks like, “you and a thousand others”. Meanwhile, since I am older than you I have been at this AI thing longer than you. I keep up on things, and when you fumbled on February 28, 2002, searching out loud for the name of the neuroscientist in Iowa, you thanked me for calling out “Damasio!” from the audience at your book-signing in Seattle. You snarkily refuse to countenance proposed AI solutions, but I will go right past your snarkiness, because, after coding AI Minds that think in English, German and Russian (all ignored by the AI community), now I am releasing what you and your legion of readers may please Google using quotes for “artificial intelligence in Latin language”. My brand-new “Mens Latina” will get the academic world of Classical studies to turn their massive and mighty attention towards the previously unrelated field of artificial intelligence. To quote Vergil: “Imperium sine fine dedi.”
The quote from me above was an intentionally snarky comment to make fun of myself for being snarky. It was meant as irony. And as self-deprecating and as an admission of snarkiness. As was my earlier reply to you “Snarky?”.
And yes, I do not give credence to people suggesting that a certain route is going to lead to AGI, or whatever. We know that thousands of routes must be tried for any scientific goal, and the vast majority will fail, often after decades of work by hundreds of people. Often the suggestions are posed, implicitly, as “prove to me that this idea of mine is wrong”. I say: fine, go work on it. But it is not up to me to prove to anyone that their approach will not work. Someone may well be right eventually, but it is impossible to know now which of the current vague ideas will be the seed of something right after hundreds or thousands of person-years of effort are poured into it. That is why (paraphrasing your words about me above) I say “you and a thousand others”. The thousands of attempts are critical to final success. But I am snarky about armchair proponents who suggest something and have no intention of doing the hard work to prove that they are correct, and indeed there are thousands of such armchair opinions.
Hinton turned out to be right, but it took 30 years of hard work to get there. And he doesn’t say “you should have listened to me 30 years ago”. Instead he is proud of all the work that fleshed out the ideas to the point of success that they are at now. Thousands of others who had their own ideas and worked hard on them for 30 years did not have such success, and they are not bitter. Scientists know that uncovering what works and what doesn’t is a long-term, hard-work enterprise, not an armchair social-media sport. [Yes, that last phrase was snarky.]
Fully autonomous cars won’t happen until AI can predict human behavior, and that is not likely. Most autonomous-car accidents have occurred when a human driver made an improper decision. Autonomous cars will only work when humans are removed from the equation.
Excellent article. Here is the hypothetical situation I use to illustrate how far away we are from truly autonomous control, which also points to the need for (near?) AGI to effect such control:
Consider a 16-year-old driving down a suburban neighborhood street during his first solo drive. Ahead, he notices some kids playing soccer on a front lawn. As he approaches that house, a soccer ball rolls into the street 10 yards in front of his car. He hits the brakes, instinctively inferring that a kid may chase the ball.
Now consider the same situation, but it is winter and he sees the kids making snowmen on the lawn by stacking soccer-ball-sized snowballs on top of one another. As he approaches the house, he sees one of these snowballs roll into the street 10 yards ahead. Instinctively, he knows the kids are not going to chase it into the street, so he continues driving, smashing the snowball with his tire, and smiles at the laughing kids.
Wake me up when an autonomous car can differentiate between these two scenarios…
The AGI delay is a watershed moment for a few additional parties. For the innovators and VCs who started the hype, it is the realization that innovation that meets the real public-infrastructure sphere is not as regulation-free as tech-sphere innovation, or even new business models, which are elective. The public will examine tech innovation more closely before adopting the hype. For regulators, especially in the US, it means waiting for an innovation to become a product or service, or to consolidate into an industry, because regulating fragmented *entities* is not viable. Lastly, in the EU, where autonomous vehicles are regarded from a societal perspective designed to realize the requisite social well-being of citizens, I believe tech innovations will be closely examined in the future.
I’m pleased Rod provides a more sober view about AGI’s arrival. I’ll be more extreme and suggest that AGI will come to be seen as a narrow goal, while the attraction of human amplification, augmentation, enhancement, and empowerment will grow. Lewis Mumford (Technics & Civilization, 1934) made clear that progress (overcoming “the obstacle of animism”) comes from “dissociation”, that is, displacing the idea of human imitation by the more powerful idea of technology to “serve human needs”. AI researchers have made the first step by moving from human mimicry to statistical methods. Another step will be downplaying the idea of humanoid robotics (Roomba was a good contribution for this idea). A third step will be shifting from excessive machine autonomy to ensuring human control while increasing the level of automation. AI methods will grow in importance as they become successful components of tools for human use, like search, navigation, photography, etc.
I’m a safety driver for an SDC company. I can tell you categorically that SDCs are decades away. The piecemeal improvements to the Robot Operating System, ROS [the fundamental codebase from the DARPA Stanford challenge], are glacially slow, often written by programmers who have never driven a car! Effectively, it’s an extremely complicated robotic car cruising city streets on a wing and a prayer, hoping it doesn’t just stop in the middle of an intersection with a jerking halt, or worse. The technology is both impressive and terrifying. It operates in a human ecosystem diametrically opposed to its existence: it follows all traffic laws and doesn’t turn right on red, infuriating the hapless driver behind. Unless every other car on the road rigorously follows the same driving regulations, there will be conflict and resentment. SDCs are in no way ready for prime time. Accumulating millions of miles in Autonomous Mode doesn’t translate to reaching Level 5 in an error-prone human world, sharing the road with a full panoply of drivers, pedestrians, cyclists, dogs, jaywalkers, cell-phone zombies, scooters, skateboards, one-wheelers, pogo-stick jumpers, etc.
Are we sure an AGI would be able to drive a car?
It seems to me that driving is a low-level ‘reflexive’ skill, not really related to reasoning, as such.
Of course in humans, intelligence emerges from this dynamic system, but will AGI’s have the same architecture?
If a so-called artificial general intelligence can’t do things that humans can be taught to do (e.g., drive a car) then it is not as general as a human, so it is not AGI.
I agree that most people and even likely most software engineers greatly underestimate what an extraordinary achievement AGI would be and how challenging it is to progress the state of the art in machine learning. I prefer not to use “AI” at all to refer to current systems as they are so astronomically far from intelligence.
But I think it’s reasonable to at least consider the threat of people using existing ML systems for malicious purposes or for political gain. For example, I know of a company that has developed a ML system that identifies people’s political positions based on their social media. It’s easy for me to imagine using such a system combined with aggressive targeted advertising to gain an unfair campaign advantage.
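As an illustration of how little machinery such a system needs, here is a toy naive Bayes text classifier on entirely made-up data. This is a sketch of the general technique only, not the (unnamed) company’s actual system:

```python
import math
from collections import Counter

# Made-up training posts; a real system would use far more data and features.
TRAIN = [
    ("cut taxes and shrink government spending", "right"),
    ("deregulate business and secure the border", "right"),
    ("expand public healthcare and raise the minimum wage", "left"),
    ("fund climate action and public transit", "left"),
]

def train(examples):
    counts = {}          # label -> Counter of word occurrences
    priors = Counter()   # label -> number of documents
    for text, label in examples:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.split())
    return counts, priors

def classify(text, counts, priors):
    vocab = {w for c in counts.values() for w in c}
    total_docs = sum(priors.values())
    best_label, best_lp = None, -math.inf
    for label, c in counts.items():
        lp = math.log(priors[label] / total_docs)
        denom = sum(c.values()) + len(vocab)      # Laplace smoothing
        for w in text.split():
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

counts, priors = train(TRAIN)
print(classify("raise the minimum wage", counts, priors))   # -> left
```

Combine a classifier like this with ad-targeting infrastructure and you have the scenario described above, with no new science required.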
Overall though I really don’t think there’s much to be concerned about with ML. Certainly not an existential threat by any stretch :-P.
Greetings, Rod… Bob Blum here. (I was on the 4th floor of Margaret Jacks Hall, Computer Science Dept. at Stanford from 1976 to 1986 when you were working with Tom Binford.)
Excellent post and many great comments. My reply ballooned into an entire article here:
First, the points of agreement…
Level 5 self-driving car (SDC) arrival as a lower bound for the arrival of AGI: yes! AGI is many, many decades in the future. I always like your arguments and those of Doug Hofstadter and Gary Marcus. I worry about nuclear annihilation and global warming much more than I do about rampaging robots.
Excellent comment by Trent Eady (quoting Alex Kendall): Autonomous driving… the ideal application for AI…
That gets at the essence of my article: basically, tech and money “push” and market demand “pull” (to quote Ed Feigenbaum). I’ve never seen money being thrown at AI like it is now: billions of dollars per year. I estimate 100,000 engineers will soon be working on the SDC/AV problem worldwide. Detailed evidence is in my article.
Level 4 will roll out by gradual expansion of services offered by Waymo, GM, Lyft and others… to more challenging climates and busier roads.
Gradually, more and more of the edge cases from the long tail will be solved (some by theory breakthroughs and others by handcrafted kludges).
I think Level 5 will arrive by about 2040. (My job is to stay alive until then to monitor progress.) It’ll be helped before that by remote control operation.
Stassa: good comment about the AI conferences. I enjoyed the recent debate between Gary Marcus and Yoshua Bengio at MILA on YouTube. … Gary’s in fine form. He’s a neuro-symbolic hybrid enthusiast. I am too.
Don’s comment: In my article I alluded to your example with the soccer ball versus the snowball. You nailed it. The SDC needs to capture intuitive physics and intuitive psychology. I love Josh Tenenbaum’s work on this (one of my fave YouTubes: