Blog

Predictions Scorecard, 2022 January 01

rodneybrooks.com/predictions-scorecard-2022-january-01/

On January 1st, 2018, I made predictions about self driving cars, Artificial Intelligence, machine learning, and robotics, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

I was accused of being a pessimist, but I viewed what I was saying as being a realist. Today, I am starting to think that I, too, reacted to all the hype and was overly optimistic in some of my predictions. My current belief is that things will go, overall, even slower than I thought four years ago today. That is not to say that there has not been great progress in all three fields, but it has not been as overwhelmingly inevitable as the tech zeitgeist thought on January 1st, 2018.

As part of self-certifying the seriousness of my predictions I promised to review them, as made on January 1st, 2018, every following January 1st for 32 years, the span of the predictions, to see how accurate they were. This is my fourth annual review and self-appraisal, following those of 2019, 2020, and 2021. I am an eighth of the way there! Sometimes I throw in a new side prediction in these review notes.

The biggest news across these three fields this year is what appears to be a breakthrough into space tourism. For a few minutes at one point this year there were 19 people weightless in space at the same time, and 8 of them were not professional astronauts. That happened on December 11th. There were six people aboard Blue Origin’s New Shepard on a sub-orbital flight, three Chinese astronauts on the Chinese Tiangong space station, seven regular crew members on the International Space Station (three from the US, two from Russia, and one each from Japan and the European Space Agency), and three visitors to that same station, one a professional Russian astronaut, accompanying a Japanese billionaire and his publicity person. As we will see below there were two other orbital flights and three other sub-orbital flights with tourists on board in 2021.

UPDATE of 2019’s Explanation of Annotations

As I said in 2018, I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below from 2021’s update post, and have simply added a total of twelve comments to the fourth columns of the three tables. As with last year I have highlighted dates in column two where the time they refer to has arrived.

I tag each comment in the fourth column with a Cyan (#00ffff) colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019. In 2022 I have started highlighting the new text put in for the current year in LemonChiffon (#fffacd) so that it is easy to pick out this year’s updates.

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML, meaning “Not In My Lifetime”, i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements when it can be determined already whether I was correct or not.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green, the same ones as last year, with no new ones this January.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. Last year I colored one date Tomato. If something happens that I said NIML, for instance, then it would go Tomato, or if in 2020 something already had happened that I said NET 2021, then that too would have gone Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa). I did that below for one prediction in self driving cars this year.

In summary then: Green splashes mean I got things exactly right. Red means provably wrong and that I was too pessimistic. And blueness will mean that I was overly optimistic.

So now, here are the updated tables.

Self Driving Cars

Spoiler alert: very little movement in deployment of actual, for real, self driving cars.

Way back four years ago when I made my predictions about “self driving cars” that term meant that the cars drove themselves, and that there was no one in the loop at a company office, or following in a chase car, or sitting in the driver or passenger seat ready to take over or punch a big red button. As I documented in last year’s update the AV companies conveniently neglect to mention these uncomfortable truths when they give press releases about their great progress. Even tech savvy people do not realize how far all the breathless press stories about deployed self driving cars are from being accurate about what is really going on. Last year I gave nine questions that every reporter should ask at any announcement by an AV car company. I have started seeing some similar questions being asked by more savvy reporters this last year. I have no idea whether any of them read my blog.

As an example of how companies gloss over things, if you carefully watch Waymo’s breathless “ooh” and “ah” filled video about their first deployment in San Francisco you will occasionally see the hands and knees of the safety driver sitting in the driver’s seat, conveniently unmentioned in the video that is trying to give the impression that their service is deployed. If you carefully read about the Chandler, Arizona, deployment and watch videos from there you will see that there is constant contact with people watching from home base, there are rescue vehicles that come and deposit a human driver to take over in some cases, and most importantly that the scale of the number of rides they give per week is tiny, way too few to be anything but a giant cash drain. That is two years after it was announced that the service was real and deployed. Nope. Still very early days.

And (h/t Mohamed Amer) here is Cruise (GM)’s very first driverless pickup, less than two months ago, in San Francisco. Note that the passenger has two Cruise employees talking to him through the car. The passenger is also a Cruise employee, and notes that he was not allowed by Cruise to bring his three year old along for the ride. And there is a camera person at both the pick up and drop off locations. No indication of how far the vehicle drove driverless before or after this ride, nor whether there is a chase vehicle (with a safety person on board ready to stop the driverless vehicle) or not, nor whether it could operate at any other pick up or drop off location (Steve Jobs’ demos of new Apple technology were famously scripted down to every key press as otherwise he would hit some bug or other). This is an important step towards real deployment, but it is a long, long way from real deployment.

The Hubris Of Self Driving Cars

In fact driverless cars, with safer roads, have been predicted again and again for over 80 years, and they have always been just around the corner. I recommend the book Autonorama by Peter Norton for some balance to the recent hype that really, really, autonomous cars are just around the corner once again. It will make many techno people uncomfortable and even angry. But it is good to have your internal beliefs challenged once in a while.

Norton points out that many companies have talked up the imminence of autonomous cars for a long time. Here is the list he documents for one such company, GM (which bought Cruise in 2016):

  • 1939 World’s Fair at their Futurama exhibit, GM promised autonomous vehicles by 1960.
  • 1964 World’s Fair in Futurama II, GM promised it again.
  • 2010 GM and SAIC (Shanghai Automotive Industry Corporation) promised that it would arrive by 2030 as Xing! (autonomous Shanghai)
  • 2017 GM’s Sustainability Report promised: Zero Crashes. Zero Emissions. Zero Congestion.

All worthy goals, but worthy goals don’t mean we yet know how to do it. We certainly didn’t in 1939 and 1964. And we certainly didn’t know how to do it when Waymo started (inside Google) back in January 2009 (thirteen years ago and counting…).

We have seen the long tail of difficult situations delay deployments for years. We know that humans can drive cars with “only” 35,000 annual fatalities just in the United States. We don’t know yet whether any of the sensor suites from any of the companies (all are different from the capabilities of human eyes) are sufficient to reach that goal, or whether the presence of significant numbers of AVs on roads shared with humans will so impact the dynamics of driving that things get much worse. We can have highfalutin goals of zero this and zero that, but we actually have no idea whether the real world will allow that to happen without significant changes in approach. WE JUST DON’T KNOW YET.

What we do know is that human drivers have many different capabilities, perceptual, reasoning, and an apparent ability to model what is happening and make predictions. Pure learning approaches are unlikely to capture that capability, so systems based largely on learning are making a big unproven bet. That hasn’t stopped massive funding going into two new startups in the last year, one for cars (Wayve) and one for trucks (Waabi) that try to solve the whole problem from pretty much a pure learning perspective. Didn’t we see this movie before? (Again, thanks to Mohamed Amer for pointing me at these two.)

Industry Misses

I believe that the hubris surrounding self driving cars has led to delays in actual safety progress that we could have already deployed if we had not been so distracted. I talk about this in the context of my own hubris after the table of the state of my predictions just below.

But first I will once again present a summary of predictions made by industry leaders that I scraped from a driverless future web site back in 2017. The site still exists but the last updates to it were made in late 2018. If you click on the predictions page you will see longer explanations of all these predictions that I gathered from it.

Each line is an individual prediction with the year it was made in parentheses. The orange arrows indicate that the companies later announced updated dates or cancelled their predictions completely, and that I personally noticed those updates. There may well be more that I have missed.

The years in blue indicate when the industry leaders thought these predictions would come to pass. I have highlighted all the dates up through 2021, now numbering 17 of the 23 predictions. Not one of them has happened or is even close to happening.

Hubris on a mass delusion scale. Audi fully autonomous by 2017? That is Teslaesque in its delusion level.

Predictions table: Self Driving Cars. Each entry gives the prediction, the date I attached to it on January 1st, 2018, my 2018 comments, and dated updates.

Prediction: A flying car can be purchased by any US resident if they have enough money.
Date: NET 2036
2018 comments: There is a real possibility that this will not happen at all by 2050.

Prediction: Flying cars reach 0.01% of US total cars.
Date: NET 2042
2018 comments: That would be about 26,000 flying cars given today's total.

Prediction: Flying cars reach 0.1% of US total cars.
Date: NIML

Prediction: First dedicated lane where only cars in truly driverless mode are allowed on a public freeway.
Date: NET 2021
2018 comments: This is a bit like current day HOV lanes. My bet is the left most lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane.
20210101: It didn't happen any earlier than 2021, so I was technically correct. But I really thought this was the path to getting autonomous cars on our freeways safely. No one seems to be working on this...
20220101: Perhaps I was projecting my solution for how to get self driving cars to happen sooner than the one for one replacement approach that the Autonomous Vehicle companies have been taking. The left lanes of 101 are being rebuilt at this moment, but only as a toll lane--no special assistance for AVs. I've turned the color on this one to "too optimistic" on my part.

Prediction: Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive.
Date: NET 2024

Prediction: First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day.
Date: NET 2021
2018 comments: The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only.
20190101: Although a few such services have been announced, every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service--they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely.
20200101: During 2019 Waymo started operating a "taxi service" in Chandler, Arizona, with no human driver in the vehicles. While this is a big step forward, see comments below for why this is not yet a driverless taxi service.
20210101: It wasn't true last year, despite the headlines, and it is still not true. No, not, no.
20220101: It still didn't happen in any meaningful way, even in Chandler. So I can call this prediction correct, though I now think it will turn out to have been wildly optimistic on my part.

Prediction: Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities.
Date: NET 2025
2018 comments: A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense.

Prediction: Such "taxi" service as above in 50 of the 100 biggest US cities.
Date: NET 2028
2018 comments: It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out.

Prediction: Dedicated driverless package delivery vehicles in very restricted geographies of a major US city.
Date: NET 2023
2018 comments: The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles.
20220101: There are no vehicles delivering packages anywhere. There are some food robots on campuses, but nothing close to delivering packages on city streets. I'm not seeing any signs that this will happen in 2022.

Prediction: A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment.
Date: NET 2023
2018 comments: The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure.
20220101: There has not been any visible progress towards this that I can see, so I think my prediction is pretty safe. Again I was perhaps projecting my own thoughts on how to get to anything profitable in the AV space in a reasonable amount of time.

Prediction: A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area.
Date: NET 2032
2018 comments: This is what Uber, Lyft, and conventional taxi services can do today.

Prediction: Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY.
Date: NET 2035
2018 comments: Unless parking and human drivers are banned from those areas before then.

Prediction: A major city bans parking and cars with drivers from a non-trivial portion of the city so that driverless cars have free rein in that area.
Date: NET 2027; BY 2031
2018 comments: This will be the starting point for a turning of the tide towards driverless cars.

Prediction: The majority of US cities have the majority of their downtown under such rules.
Date: NET 2045

Prediction: Electric cars hit 30% of US car sales.
Date: NET 2027

Prediction: Electric car sales in the US make up essentially 100% of the sales.
Date: NET 2038

Prediction: Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph.
Date: NIML
2018 comments: There might be some small demonstration projects, but they will be just that, not real, viable mass market services.

Prediction: First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked.
Date: NIML
2018 comments: Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die.

Hubris

From my comments above the previous table it is clear that I think there has been too much hubris around driverless cars. But as I went through updating which predictions had stood the test of time I realized that I too had been guilty of hubris in my predictions.

There are two predictions about the earliest date that something might happen that I am now sure will not happen.  And it is not because of technical difficulty but because no one is working on them.

The first is about having dedicated lanes on freeways with special infrastructure installed that cars can interact with to make autonomous driving easier. Back in 1997 there was a project called the National Automated Highway System Consortium to do just that.  But when the hubris of fully self-driving cars made the need for external help for AVs seem redundant, such projects disappeared.

The second is the idea of leaving your car at the entrance to a garage and having it drive itself to park in a much tighter space than you could do.

Both these approaches to getting some sort of autonomy deployed in just a few years seem entirely doable to me, and I still believe that.  But the lure of full self driving (as in full self driving, not merely as a marketing ploy) drove everyone away from these approaches.

My hubris was to think that others would see things the same way and would work on them.  Given that no one was working on them do I really deserve credit for predicting they wouldn’t happen for a while?

Personal Gloating, An Irresistible Sin!!

Here I go. For the last two years I have had a little rant that in April of 2019 the CEO of Tesla had said there would be one million autonomous Tesla taxis on the road by the end of 2020. Kai-Fu Lee and I had agreed at the time to eat all such taxis on December 31st, 2020. I.e., actually ingest them. All of them. Of course there were zero such autonomous Tesla taxis on the road at the end of 2020. And a year later the number is still zero.

Note that in the last two weeks Tesla has agreed to eliminate the ability of drivers of their vehicles to play video games on the car’s display screen while the car is in motion. This was under regulatory pressure, indicating that “Full Self Driving” software is still not fully self driving. The latest spin from Tesla is that of course the words in the name of the software, “Full Self Driving”, could not possibly be interpreted to mean full self driving.

Defending Against Slings And Arrows, Another Irresistible Sin!!

Earlier this year someone used my prediction that electric vehicles would not account for 30% of automobile sales in the US until 2027 at the earliest (see table above) as proof that I am a pessimist. Therefore, they said, my predictions about autonomous vehicles could not be trusted.

EV sales in the US were 1.7% of the total market in 2020 (up from 1.4% in 2019). We’ll need four doublings of that in seven years to get to 30%. It may happen. It may happen sooner than 2027.  But not by much. It would be a tremendous sustained growth rate that we have not yet seen.
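The arithmetic behind that claim is easy to check. Here is a back-of-envelope sketch (the 1.7% and 30% figures are the ones quoted above):

```python
import math

# US EV share was 1.7% of sales in 2020; the prediction needs 30% by 2027.
share_2020, target, years = 0.017, 0.30, 7

doublings = math.log2(target / share_2020)             # ~4.1 doublings needed
annual_growth = (target / share_2020) ** (1 / years)   # ~1.51x per year

print(f"{doublings:.1f} doublings, {100 * (annual_growth - 1):.0f}% growth every year")
```

That works out to roughly 50% growth in market share, sustained every single year for seven years.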

Robotics, AI, and Machine Learning

With respect to my predictions for AI and ML there are only three that come close to being in play this year, either in terms of work that was done that impacts my predictions, or the date is close to when I said that something would or would not happen. I have annotated the table below in those three places; the next big thing, change in perspective of how to measure AI success, and robots that can really get around in our houses in a general purpose way.

The Next Big Thing

Back in 2018 I predicted that “the next big thing”, to replace Deep Learning, as the go to hot topic in AI would arrive somewhere between 2023 and 2027.  I was convinced of this as there has always been a next big thing in AI.  Neural networks have been the next big thing three times already. But others have had their shot at that title too, including (in no particular order) Bayesian inference, reinforcement learning, the primal sketch, shape from shading, frames, constraint programming, heuristic search, etc.

We are starting to get close to my window for the next big thing.  Are there any candidates?  I must admit that so far they all seem to be derivatives of deep learning in one way or another.  If that is all we get I will be terribly disappointed, and probably have to give myself a bad grade on this prediction.

So far the things that I see bubbling around and getting people excited are transformers, foundation models, and unsupervised learning.

Transformers provide a front end to deep learning that lets it handle sequential data rather than taking all input at once. This lets deep learning be applied to natural language and to image sequences. We have seen tremendous hype about large language models using this mechanism. There is something going on there, but nothing as astounding or important as people imagine.
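For readers who want to see the mechanism rather than the hype, the core of a transformer is scaled dot-product attention, which mixes each position in a sequence with every other position. Here is a minimal numpy sketch (illustrative only; a real transformer adds learned projection matrices, multiple heads, and many stacked layers):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape (tokens, dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # how much each token attends to each other token
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ x                              # each output is a weighted mix of all tokens

seq = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
print(self_attention(seq).shape)                    # (4, 8)
```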

These language models are over-interpreted by people as understanding what they are spitting out, especially when the press writes stories where they have cherry-picked responses. But they come with incredible problems, including copyright violations, intellectual theft of code, and even outright life threatening danger when they find their way into consumer products. Tech companies have a real problem in rushing some of these systems to market.

Overall the will to believe in the innate superiority of a computer model is astounding to me, just as it was to others back in 1966 when Joseph Weizenbaum showed off his Eliza program, which occupied just a few kilobytes of computer memory. Joe, too, was astounded at the reaction and spent the rest of his career sounding the alarm bells. We need more push back this time around.

Foundation models are large trained models that start out as a basis for tuning particular applications. There have been some self-important announcements with a sort of me too feel (“Hey, I produced a foundation model too!!”), which don’t amount to much of an intellectual contribution. If this turns out to be the next big thing I am going to have to rip off my mask of equanimity and revert to my natural state of being a grumpy old man.

Unsupervised learning is an idea that has been around for a long time.  Not a big intellectual jump to want to get it into deep learning–may be a hard technical problem, but not an intellectual breakthrough this time around.  (RIP Teuvo Kohonen who passed away last month.)

So, am I wrong, or was I just too optimistic about how soon the next big thing will show up?  I’ve given myself another 28 years of reviewing my predictions so perhaps there is time for something real and big to show up, and then I’ll be able to claim that I was simply too optimistic…

Beyond The Turing Test And Asimov’s Laws 

I am starting to see the serious technical press really trying to come to terms with judging whether systems are intelligent or not, and the number of articles questioning the Turing Test seems to be on the rise. Perhaps this is driven by the perceptions of what transformer based natural language systems can do. They are not intelligent, but they can fool an awful lot of the people an awful lot of the time.

As is often the case, Melanie Mitchell has a really good analysis, this time on the Turing Test and GPT-3. She is a renowned academic but she now also writes for Quanta Magazine.

Getting Around In A House

The first mass market robots that could get around reliably in ordinary people’s houses were introduced by my company iRobot at a two hundred dollar price point twenty years ago this year, in September of 2002. iRobot has now sold around 35 million Roombas and I am guessing that the follow on brands have together sold about the same number.

But we all know that Roombas could never be Rosie the Robot from the cartoon series The Jetsons. They are limited to a single flat floor and can’t get up high enough to really see objects that matter to humans in the houses that they bumble around in. One of my predictions is that “A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc.” won’t even be a lab demo until 2026, and not be deployed for real until 2030, and at low cost in 2035.

Amazon just made a good and necessary step towards this capability with the release of the Astro home robot. I got to spend time with some Astros before they were announced and I was impressed with the technical capabilities and at their price for all that technology. Whether they will be a successful product or not I cannot hazard a guess.

The impressive step that Amazon has made is in getting a really reliable SLAM (Simultaneous Localization And Mapping) system that can quickly build a very good map of a house without any help from humans. It starts up in an unknown place in an unknown environment, and within minutes has a solid useable map. It is still somewhat low to the ground but its camera on a mast lets it get views up to table top level. No way to handle steps yet, and it mostly avoids clutter, but this is definitely progress.
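To give a flavor of what the mapping half of SLAM does (Amazon's actual system is proprietary, and a real SLAM system must simultaneously estimate the robot's pose, which is the hard part), here is a toy occupancy-grid update that integrates one range scan taken from a known pose:

```python
import numpy as np

def integrate_scan(grid, pose, angles, ranges, cell=0.05):
    """Mark cells along each range ray as free, and each ray's endpoint as occupied."""
    x0, y0, heading = pose
    for a, r in zip(angles, ranges):
        # endpoint of this ray (the obstacle the sensor hit) in world coordinates
        x1 = x0 + r * np.cos(heading + a)
        y1 = y0 + r * np.sin(heading + a)
        # walk along the ray: everything short of the hit is free space
        for t in np.linspace(0.0, 1.0, max(2, int(r / cell))):
            i = int((x0 + t * (x1 - x0)) / cell)
            j = int((y0 + t * (y1 - y0)) / cell)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = 1.0 if t == 1.0 else 0.0  # occupied at the hit, else free

grid = np.full((100, 100), 0.5)  # 0.5 = unknown, 0 = free, 1 = occupied
integrate_scan(grid, pose=(2.0, 2.0, 0.0),
               angles=np.linspace(-0.5, 0.5, 11), ranges=np.full(11, 1.5))
print((grid == 1.0).sum(), "occupied cells,", (grid == 0.0).sum(), "free cells")
```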

Predictions table: AI and ML. Each entry gives the prediction, the date I attached to it on January 1st, 2018, my 2018 comments, and dated updates.

Prediction: Academic rumblings about the limits of Deep Learning.
Date: BY 2017
2018 comments: Oh, this is already happening... the pace will pick up.
20190101: There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table.
20200101: Go back to last year's update to see them.

Prediction: The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play.
Date: BY 2018
20190101: Likewise some technical press stories are linked below.
20200101: Go back to last year's update to see them.

Prediction: The popular press starts having stories that the era of Deep Learning is over.
Date: BY 2020
20200101: We are seeing more and more opinion pieces by non-reporters saying this, but still not quite at the tipping point where reporters come out and say it. Axios and WIRED are getting close.
20210101: While hype remains the major topic of AI stories in the popular press, some outlets, such as The Economist (see after the table), have come to terms with DL having been oversold. So we are there.

Prediction: VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning".
Date: NET 2021
2018 comments: I am being a little cynical here, and of course there will be no way to know when things change exactly.
20210101: This is the first place where I am admitting that I was too pessimistic. I wrote this prediction when I was frustrated with VCs and let that frustration get the better of me. That was stupid of me. Many VCs figured out the hype and are focusing on fundamentals. That is good for the field, and the world!

Prediction: Emergence of the generally agreed upon "next big thing" in AI beyond deep learning.
Date: NET 2023; BY 2027
2018 comments: Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out.
20210101: So far I don't see any real candidates for this, but that is OK. It may take a while. What we are seeing is new understanding of capabilities missing from the current most popular parts of AI. They include "common sense" and "attention". Progress on these will probably come from new techniques, and perhaps one of those techniques will turn out to be the new "big thing" in AI.
20220101: There are two or three candidates bubbling up, but all coming out of the now tradition of deep learning. Still no completely new "next big thing".

Prediction: The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML.
Date: NET 2022
2018 comments: I wish, I really wish.
20220101: I think we are right on the cusp of this happening. The serious tech press has run stories in 2021 about the need to update, but both the Turing Test and Asimov's Laws still show up in the popular press. 2022 will be the switchover year. [Am I guilty of confirmation bias in my analysis of whether it is just about to happen?]

Prediction: Dexterous robot hands generally available.
Date: NET 2030; BY 2040 (I hope!)
2018 comments: Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years.

Prediction: A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc.
Date: Lab demo: NET 2026; Expensive product: NET 2030; Affordable product: NET 2035
2018 comments: What is easy for humans is still very, very hard for robots.
20220101: There was some impressive progress in this direction this year with Amazon's release of Astro. A necessary step towards these much harder goals. See the main text.

Prediction: A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution.
Date: NET 2028
2018 comments: There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots.

Prediction: A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door.
Date: Lab demo: NET 2025; Deployed systems: NET 2028

Prediction: A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns.
Date: Lab demo: NET 2023; Deployed systems: 2025
2018 comments: Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to wide spread deployment.

Prediction: An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse.
Date: NET 2030
2018 comments: I will need a whole new blog post to explain this...

Prediction: A robot that seems as intelligent, as attentive, and as faithful, as a dog.
Date: NET 2048
2018 comments: This is so much harder than most people imagine it to be--many think we are already there; I say we are not at all there.

Prediction: A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans.
Date: NIML

The Problem As I See It

I have often stated that I think the field of AI, despite the great practical successes recently of Deep Learning, is probably a few hundred years away from where most people think it is. We’re still back in phlogiston land, not having yet figured out the elements, including oxygen.

AI, Robotics, and Machine Learning are areas that I have a real personal investment in. I wrote a terrible Master’s thesis on ML back in 1977. I joined the Stanford AI Lab later that year, then the MIT AI Lab four years later, and became director of that lab in 1997, merging it with LCS (the Lab for Computer Science) to form MIT CSAIL in 2003, still today the largest lab at MIT. I have founded six AI and robotics companies. After 45 years in the academic and industry trenches can I be unbiased? Probably not.

I know that many who disagree with me will dismiss me for all that experience that I have. Perhaps those who agree with me should also dismiss me for the same reason!!

My current belief is that it all gets back to the symbol grounding problem, and even more deeply to adopting a computational approach to AI, Robotics, and ML (and I expect almost no one will agree with that latter claim).

Traditional AI put off the very hard problem of perception, and assumed that later it would be solved and a perception system would deliver symbols describing the world. What are symbols? Well, a convenient abstraction was Lisp symbols, and Lisp symbols really come down to whether two things have the same address in computer memory (with some technical fixes for copying garbage collectors moving stuff around everywhere…), with a little bit of dictionary like descriptions of each symbol in terms of other symbols. (Real dictionaries used by real people always bottom out in the physical experience of those people; not so a Lisp program.) Perhaps that certainty of symbols and separation from perception, and even the metaphor of computation, has led people astray in the way they try to solve the problems of common sense, qualitative reasoning, inference, etc. Many dear friends may take this as an attack on their work, but that is not how I mean it. I mean it as more general questioning.
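The point about Lisp symbols reducing to address identity can be made concrete in a couple of lines. In Python terms (interned strings playing the role of Lisp symbols here):

```python
import sys

# Two "symbols" with the same name are literally the same object in memory;
# that pointer equality is all the machine-level meaning a symbol has.
a = sys.intern("pump-housing")
b = sys.intern("pump-housing")
print(a is b)  # True: one address, one symbol, and no grounding in the world
```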

Along came Deep Learning and many thought that the perception problem had been solved, and that DL gave us those symbols, recognizing objects in the world. But I don’t think that is right either. I think what DL has done is solve the labeling problem, not the object recognition problem.  It gives good label, and it gives poor object. (And yes I meant label and object rather than labels and objects.)

If you want to expand your mind, try reading The Promise Of Artificial Intelligence by Brian Cantwell Smith. This is by far his shortest and most readable book from his forty years of producing them. It is exceptionally erudite. But it is still a hard slog, and even native English speakers will be consulting a dictionary quite often.

In this book Smith introduces the idea of registration, as a maintained relationship between an object outside of us and what goes on inside our head (and he would have it also in a classical computer) despite changes in perception and even context.  It feels a bit like autopoiesis for thinking rather than Maturana and Varela’s conception of it for being. It is a hard concept to grasp and hold onto.  It will be even harder to make it real.  But perhaps that is what AI will need.

And yes this little section is obscure, but the best I can do for a rather short blog post.

 

Space

There are three big stories in the space industry from 2021, and they all impact one or more predictions that I made. The first is that space tourism notched up significantly from where it had been, both for sub-orbital and orbital trips. The second is that Boeing had another serious setback on launching people into space; I had said by 2022, and now I think that is less likely to happen. And third, there was a lot of visibility into progress on the second stage of SpaceX’s Starship (what was once called the BF Rocket) in the first half of the year, and much less than people had expected for the first stage, with no launch yet.

Space Tourism Sub-Orbital

There were two players in each of sub-orbital and orbital space tourism: Virgin Galactic and Blue Origin for sub-orbital trips, and Roscosmos and SpaceX for orbital trips.

Virgin Galactic had two manned launches of their sub-orbital Unity, each of which got to a bit less than 90km in altitude (note that way back in 2004 Virgin Galactic had three manned flights that got over 100km). The first of this year’s flights was with two professional pilots only, but then on July 11th two pilots were joined by four civilians, including billionaire founder Richard Branson. This flight was only announced after Jeff Bezos had stated his intention to get into space as the first sub-orbital tourist, and Branson rushed in and beat him by nine days. All six people on board were part of the Virgin Galactic organization, so none paid for their flights explicitly.

Blue Origin was a little like an actual tourism company in that all three of its manned sub-orbital space flights in 2021 had between one and four paying customers on board, though each launch also had non-paying celebratory guests. Jeff Bezos was on board the first of these flights which launched on July 20th, the 52nd anniversary of Armstrong and Aldrin landing on the Moon.  A total of 14 people flew on these three flights, none of whom were professional space travelers. As best as I can tell six of those seats were paid for. Each of the Blue Origin flights topped 100km in altitude.

This is the first year that there have been paying customers on sub-orbital flights and it looks like six of them had paid seats, so not yet the few handfuls of paying customers that I predicted would not happen until at least 2022. So my prediction of no earlier than 2022 was correct. Blue Origin looks like it has a chance to get a few handfuls of paying customers in 2022, and we’ll see whether Virgin Galactic launches its first paying customer this coming year.

There is a question in my mind whether there will be enough appetite for these very short experiences of only about five minutes of weightlessness.  The price per minute is much more than the rate for an orbital flight. Perhaps that will push the market more towards orbital tourism. In any case both the companies involved have much bigger space aspirations than just sub-orbital flights and may lose interest if either the demand or profit margins are not high enough. Sub-orbital tourism may never reach a breakneck pace as airplane pleasure flights did long ago–the market will decide.

Space Tourism Orbital

In 2021 we saw eight orbital space tourists, two of whom paid for their own rides; however, all eight rides were paid for, unlike in the sub-orbital cases. The only previous space tourism amounted to eight orbital seats, spread over seven individuals, between 2001 and 2009. So in this one year, 2021, we had a doubling of the all time number of orbital space tourists.

The first orbital tourist flight of 2021 was carried out in September by SpaceX, when four people launched on the third flight of a Falcon 9 booster, aboard the second flight of a Dragon capsule. They were aloft just under 72 hours, or three days. One of the people onboard paid for the flight, and the other three were his guests. This is the only fully commercial orbital flight ever to date.

All previous orbital tourist flights had been on Soyuz vehicles going to the International Space Station (ISS), operated by Roscosmos, the same organization that sends all other Russian manned flights aloft. The other four of 2021’s tourists were on those same vehicles to the same destination, though three of them were working for their flights. In October a Russian actor journeyed there, along with a movie director/camera operator to film her, to shoot scenes for a movie. In December a Japanese billionaire, Yusaku Maezawa, along with his publicist, also went to the ISS. Both visits lasted 12 days. Maezawa is also signed up to loop around the Moon with SpaceX in 2023, delayed from the originally announced goal of the fourth quarter of 2018.

Clearly 2021 was the best year ever for space tourism.

Predictions table: Space. Each entry gives the prediction, the date I attached to it on January 1st, 2018, my 2018 comments, and dated updates.

Prediction: Next launch of people (test pilots/engineers) on a sub-orbital flight by a private company.
Date: BY 2018
20190101: Virgin Galactic did this on December 13, 2018.
20200101: On February 22, 2019, Virgin Galactic had the second flight to space of their current vehicle, this time with three humans on board. As far as I can tell that is the only sub-orbital flight of humans in 2019. Blue Origin's New Shepard flew three times in 2019, but with no people aboard, as in all its flights so far.
20210101: There were no manned suborbital flights in 2020.

Prediction: A few handfuls of customers, paying for those flights.
Date: NET 2020
20210101: Things will have to speed up if this is going to happen even in 2021. I may have been too optimistic.
20220101: It looks like six people paid in 2021, so still not a few handfuls. Plausible that it happens in 2022.

Prediction: A regular sub weekly cadence of such flights.
Date: NET 2022; BY 2026
20220101: Given that 2021 only saw four such flights, it is unlikely that this will be achieved in 2022.

Prediction: Regular paying customer orbital flights.
Date: NET 2027
2018 comments: Russia offered paid flights to the ISS, but there were only 8 such flights (7 different tourists). They are now suspended indefinitely.
20220101: We went from zero paid orbital flights since 2009 to three in the last four months of 2021, so definitely an uptick in activity.

Prediction: Next launch of people into orbit on a US booster.
Date: NET 2019; BY 2021; BY 2022 (2 different companies)
2018 comments: Current schedule says 2018.
20190101: It didn't happen in 2018. Now both SpaceX and Boeing say they will do it in 2019.
20200101: Both Boeing and SpaceX had major failures with their systems during 2019, though no humans were aboard in either case. So this goal was not achieved in 2019. Both companies are optimistic of getting it done in 2020, as they were for 2019. I'm sure it will happen eventually for both companies.
20200530: SpaceX did it in 2020, so the first company got there within my window, but two years later than they predicted. There is a real risk that Boeing will not make it in 2021, but I think there is still a strong chance that they will by 2022.
20220101: Boeing had another big failure in 2021 and now 2022 is looking unlikely.

Prediction: Two paying customers go on a loop around the Moon, launched on a Falcon Heavy.
Date: NET 2020
2018 comments: The most recent prediction has been 4th quarter 2018. That is not going to happen.
20190101: I'm calling this one now, as SpaceX has revised their plans from a Falcon Heavy to their still developing BFR (or whatever it gets called), and they predict 2023. I.e., it has slipped 5 years in the last year.
20220101: With Starship not yet having launched a first stage, 2023 is starting to look unlikely, as one would expect the paying customer (Yusaku Maezawa, who just went to the ISS on a Soyuz last month) would want to see a successful re-entry from a Moon return before going himself. That is a lot of test program to get to there from here in under two years.

Prediction: Land cargo on Mars for humans to use at a later date.
Date: NET 2026
2018 comments: SpaceX has said by 2022. I think 2026 is optimistic but it might be pushed to happen as a statement that it can be done, rather than for a pressing practical reason.

Prediction: Humans on Mars make use of cargo previously landed there.
Date: NET 2032
2018 comments: Sorry, it is just going to take longer than everyone expects.

Prediction: First "permanent" human colony on Mars.
Date: NET 2036
2018 comments: It will be magical for the human race if this happens by then. It will truly inspire us all.

Prediction: Point to point transport on Earth in an hour or so (using a BF rocket).
Date: NIML
2018 comments: This will not happen without some major new breakthrough of which we currently have no inkling.

Prediction: Regular service of Hyperloop between two cities.
Date: NIML
2018 comments: I can't help but be reminded of when Chuck Yeager described the Mercury program as "Spam in a can".

Boeing’s Woes

NASA funded SpaceX and Boeing to develop two reusable capsules for launching NASA astronauts. Originally they were neck and neck on schedule. Each had some setbacks and things have taken longer than expected.

The SpaceX vehicle, Crew Dragon, first took people to space in 2020, and has now done so five times, including hosting the first non-Russian orbital tourist flight.

Boeing has not been so lucky. In their first unmanned orbital test flight in December 2019 there were serious software problems and the vehicle did not reach the ISS as planned.  During an August 2021 launch window for a re-fly of the unmanned test, problems with valves deep in the craft were discovered and it was removed from the launch pad. Current plans call for a launch in May 2022, but that is by no means certain.  Meanwhile NASA has reassigned crew members from each of the first two manned launches as those astronauts were getting stale waiting for Boeing to fix the problems. This is not a good sign.

Starship

SpaceX’s Starship (as distinct from Boeing’s Starliner) was on a very rapid launch, blow-up, try-again pace as 2020 turned into 2021. Starship will be the largest rocket ever flown, and both its first and second stages are intended to be fully re-usable.

After a series of fiery flights which resulted in loss of vehicle, usually in spectacular explosions, SpaceX launched the second stage of Starship in early May 2021 and had it land softly back at the launch pad, remain standing, and survive without blowing up.

Attention shifted to the first stage and people were expecting it to launch soon, but it was not to be in 2021.  Three times, a second stage has been mated to a first stage, and sometimes a first stage has fired some engines in a static test. So far, however, there has been no launch attempt.  It thus remains to be seen whether the lessons from the second stage can lead to a faster test campaign for the first stage.

As usual with the CEO of SpaceX it is sometimes hard to discern fact from fantasy/trolling.  On November 30th (in the middle of the Thanksgiving break) he warned of potential bankruptcy for SpaceX unless all hands were on deck to fix problems with producing enough Raptor engines for Starship’s first stage. In the linked story “according to Musk’s email, SpaceX needs to launch Starship at least once every two weeks next year to keep the company afloat”. Next year in that context is 2022.

As a comparison, SpaceX launched its first Falcon 9 in June 2010.  They did not reach an annual rate of one launch every two weeks until 2020 with 26 launches. That was a ten year ramp up. So to go from zero launches to a run rate of one every two weeks by the end of the year seems rather ambitious even by SpaceX record breaking standards. [[The Falcon 9 launch rate in 2021 improved to once every 12 days.]]

An Astounding Historical Rocket Ramp Up

Incidentally, though by modern standards SpaceX develops and deploys at incredible speed, there is one historical instance of even faster rocket development, production, and deployment.

Wernher von Braun designed the V-2 rocket (originally designated A-4), and carried out the first test flight on October 3rd, 1942. That sub-orbital flight reached an altitude of 192km, roughly twice what Virgin Galactic and Blue Origin’s suborbital rockets have achieved.

The V-2 is the progenitor of all human spaceflight rockets, and the team that designed and produced them ended up as principals in both the Soviet and US space programs. Wernher von Braun was the architect of the Apollo lunar landing program. However, back in 1942 he promoted the V-2 as a weapon, and by December of 1942 Hitler had ordered them into mass production.

Over the next two years there were more than 300 further test flights, and then in the last few months of the war, starting on September 8th 1944 in an attack on newly liberated Paris, Germany launched over 3,000 of them on operational warhead delivery missions.  Three thousand operational flights in eight months.

 

The Origin of Robot Arm Programming Languages

rodneybrooks.com/the-origin-of-robot-arm-programming-languages/

So far my life has been rather extraordinary in that through great undeserved luck1 I have been present at, or nearby to, many of the defining technological advances in computer science, Artificial Intelligence, and robotics that now in 2021 are starting to dominate our world. I knew and rubbed shoulders2 with many of the greats, those who founded AI, robotics, computer science, and the world wide web. My big regret nowadays is that I often have questions for those who have passed on, and I didn’t think to ask them any of these questions, even as I saw them and said hello to them on a daily basis.

This short blog post is about the origin of languages for describing tasks in automation, in particular for industrial robot arms. Three key players who have passed away were Doug Ross, Victor Scheinman, and Richard (Lou) Paul; they were not as well known as some other tech stars, but were very influential in their fields. Here I must rely not on questions to them that I should have asked in the past, but on personal recollections and online sources.

Doug Ross3 had worked on the first MIT computer, Whirlwind, and then in the mid nineteen fifties he turned to the numerical control of three and five axis machine tools–the tools that people use to cut metal parts in complex shapes. There was a requirement to make such parts, many with curved surfaces, with far higher accuracy than a human operator could possibly produce.  Ross developed a “programming language” APT (for Automatically Programmed Tool) for this purpose.

Here is an APT program from Wikipedia.

There are no branches (the GOTO is a command to move the tool) or conditionals; it is just a series of commands to move the cutting tool. It does provide geometrical calculations using tangents to circles (TANTO) etc.

APT programs were compiled and executed on a computer–an enormously expensive machine at the time. But cost was not the primary issue. APT was developed at the MIT Servo Mechanism Lab, and that lab was the home of mechanical guidance systems for flying machines, such as ICBMs, and later the craft for the manned space program flights of the sixties.

Machine tools were existing devices that had been used by humans.  But now came new machines, never used by humans. By the end of that same decade work was proceeding by two visionaries, one technical and one business, working in partnership, to develop a new class of machine, automatic from the start, the industrial robot arm.

The first arm, the Unimate, developed by George Devol and Joe Engelberger4 at their company Unimation, was installed in a GM factory in New Jersey in 1961. It used vacuum tubes rather than transistors, analog servos for its hydraulics rather than the digital servos we would use today, and simply recorded a sequence of positions to which the robot arm should go. Computers were too expensive for such commercial robots, so the arm had to be carefully built to not require them, and there was therefore no “language” used to control the arms. Unimates were used for spot welding and to lift heavy castings into and out of specialized machines. The robot arms did the same thing again and again, without any decision making taking place.

In the late sixties, mechanical engineer Victor Scheinman at the Stanford AI Lab designed what became known as the Stanford Arm. The arms were controlled at SAIL by a PDP-8 minicomputer, and were among the very first electric digital arms. Here is John McCarthy at a terminal in front of two Stanford Arms, and a steerable COHU TV camera:

In the five years (over a period of seven years total) that I was a member of the Hand-Eye group at SAIL I never once saw John come near the robots. Here is a museum picture of the Gold Arm.  It has five revolute joints and one linear axis.

The only contemporary electric digital arm was Freddy at Edinburgh University.

While the Stanford Arms (the Gold Arm, the Blue Arm, and the Red Arm) were intended to assemble real mechanical components as might be found in real products, Freddy was designed from the start to work only in a world of wooden blocks.

There was actually a fourth Stanford Arm built, the Green Arm. It had a longer reach and was used at JPL as part of a test bed for a Mars mission that never happened. In this 1978 photo the arm was controlled by a PDP-11 mini computer on a mobile platform.

The Stanford Arms were way ahead of their time and could measure and apply forces–the vast majority of industrial robotic arms can still not do that today.  They were intended to be used to explore the possibility of automatic assembly of complex objects. The long term intent (still not anywhere near met in practice even today) was that an AI system would figure out how to do the assembly and execute it. This was such a hard problem that first (and still today in practice) it was decided to try to write programs to assemble parts.

Richard (Lou) Paul, who left the Hand-Eye group right before I arrived there in 1977 (I knew him for decades after that), developed the first programming language for control of a robot arm trying to do something as sophisticated as assembly. It was called WAVE. I suspect the way it came out was all based on a nerdy pun, or two, and even today it impacts the way industrial robots are controlled. Besides the language, Paul also developed inverse kinematics techniques, which today are known as IK solvers, and are used in almost all robots.
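Paul's inverse kinematics work was for the six-joint Stanford Arm, which is well beyond a blog post; but to show the kind of computation an IK solver does, here is the textbook closed-form solution for a two-link planar arm (my illustration, not Paul's code):

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Joint angles (q1, q2) placing a 2-link planar arm's end effector at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2) * (-1.0 if elbow_up else 1.0)          # elbow angle
    # shoulder angle: direction to the target, corrected for the bent elbow
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

print(two_link_ik(1.2, 0.5, 1.0, 1.0))  # one of the two mirror-image solutions
```

A six-joint arm like the Stanford Arm adds wrist orientation to the problem, but the flavor is the same: closed-form trigonometry from the desired end effector pose back to joint angles.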

By the late sixties the big US AI labs all had PDP-10 mainframes made by Digital Equipment Corporation in Massachusetts5.  The idea was that Paul’s language would be processed by the PDP-10, and then the program would largely run on the PDP-8.  But what form should the language take?  Well, the task was assembly, as in assembling mechanical parts. But many programs in those days were written in machine specific “assembly language” for assembling instructions for the computer, one by one, into binary.  Well, why not make the assembly language look like the PDP-10 assembly language???  And the MOVE instruction which moved data around in the PDP-10 would be repurposed to instead move the arm about.  Funny, and fully in tune with the ethos at the Stanford and MIT AI Labs at the time.

I cannot anywhere find either an online or offline copy of Paul’s 1972 Ph.D. thesis. It had numbers STAN-CS-311 and Stanford AIM-177 (for AI Memo)–if anyone can find it please let me know. [[See the comment from Bill Idsardi and Holly Yanco below. I downloaded and read the free version and have some updates based on that in my reply to Professor Yanco.]]  So here I will rely on a later publication by Bob Bolles and Richard Paul from 1973, describing a program written in WAVE. It is variously called STAN-CS-396 or AIM-220 — the available scan from Stanford is quite low quality, rescued from a microfilm scan from years ago. I have retyped a fragment of a program to assemble a water pump that is in the document so that it is easier to read.

For anyone who has seen assembler language this looks familiar, with the way the label L1 is specified and the comments following a semi-colon, with one instruction per line. The program control flow even uses the same instructions as PDP-10 assembly language, with AOJ for add one and jump, and SKIPE and SKIPN for skip the next instruction if the register is zero, or not. The only difference I can see is that there is just a single implicit register for the AOJ instructions, and that the skip instructions refer to a location (23) where some force information is cached (perhaps I have this correct, perhaps not…). This program seems to be grasping a pin, then going to a hole, and then dithering around until it can successfully insert the pin in the hole. It is keeping a count of the number of attempts, though this code fragment does not use that count, and it seems to be checking at the end whether it dropped the pin by accident (by opening the gripper and closing it just a little again to see if a force is felt, as would be the case if the pin remains in the same place), though there is no code to act on a dropped pin. Elsewhere in the document it is explained how the coordinates P and T are trained, by moving the arm to the desired location and typing “HERE P” or “HERE T” — when the arms were not powered they could be moved around by hand.

People soon realized that a pun on computer assembly language was not powerful enough to do real work.  By late 1974 Raphael Finkel, Russell Taylor, Robert Bolles, Richard Paul, and Jerome Feldman had developed a new language AL, for Assembly Language, an Algol-ish language based on SAIL (Stanford Artificial Intelligence Language) which ran on the lab’s PDP-10. Shahid Mujtaba and my officemate Ron Goldman continued to develop AL for a number of years, as seen in this manual.  They along with Maria Gini and Pina Gini also worked on the POINTY system which made gathering coordinates like P and T in the example above much easier.

Meanwhile Victor Scheinman had spent a year at the MIT AI Lab, where he had developed a new small force-controlled arm with six revolute joints called the Vicarm. He left one copy at MIT[6] and returned to Stanford with a second copy.

I think it was in 1978 when he came to a robotics class that I was in at Stanford, carrying both his Vicarm and a case with a PDP-11 in it. He set them up on a table top and showed how he could program the Vicarm in a language based on Basic, a much simpler language than Algol/SAIL.  He called the language VAL, for Victor’s AL. Also, remember that this was before the Apple II (and before anyone had even heard of Apple, founded just the year before) and before the PC. Simply carrying in a computer and setting it up in front of the class was quite a party trick in itself!!

Also in 1978 Victor sold a version of the arm, about twice the size of the Vicarm, that came to be known as the PUMA (Programmable Universal Machine for Assembly), to Engelberger’s company Unimation. It became a big selling product using the language VAL.  This was the first commercial robot that was all electric, and the first one with a programming language. Over the years various versions of this arm were sold, ranging in mass from 13 kg to 600 kg.

The following handful of years saw an explosion of programming languages, usually unique to each manufacturer, and usually based on an existing programming language.  Pascal was a popular basis at the time.  Here is a comprehensive survey from 1986 showing just how quickly things moved!

By 2008 I thought that it was time we tried to go beyond the stopgap measure of having programming languages for robot arms, and return to the original dream from around 1970 at the Stanford Artificial Intelligence Lab.  I founded Rethink Robotics.  We built and shipped many thousands of robot arms to be used in factories. The first was Baxter, and the second was Sawyer. In the linked video you can see Sawyer measuring the external forces applied to it, and trying to keep the resultant force at zero by complying with the force and moving. In this way the engineer in the video is able to effortlessly move the robot arm anywhere she likes. This is a first step to being able to show the robot where things are in the environment, as in the “HERE P” or “HERE T” examples with WAVE, and later POINTY. Here is a photo of me with a Sawyer on the left and an older Baxter on the right.
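The underlying idea is often called zero-force or admittance control: command motion in proportion to the sensed external force, so the only equilibrium is zero applied force. Here is a toy sketch of such a loop, with a hypothetical `robot` object standing in for the real control interface; this is my illustration of the general principle, not Rethink's actual controller:

```python
import time
import numpy as np

def compliance_loop(robot, gain=0.05, dt=0.01):
    """Toy admittance control loop: command a velocity proportional to
    the sensed external force, so the arm gives way to a human push.

    `robot` is a hypothetical interface assumed to provide
    measured_external_force() -> 3-vector (e.g., from joint torque
    sensors) and command_cartesian_velocity(v).
    """
    while True:
        f_ext = np.asarray(robot.measured_external_force())
        v_cmd = gain * f_ext      # push harder, move faster; no push, no motion
        robot.command_cartesian_velocity(v_cmd)
        time.sleep(dt)            # run at roughly 100 Hz
```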

Back in 1986 I had published a paper on the “subsumption architecture” (you can see the lab memo version of that paper here). Over time this control system for robots came to be known as the “behavior based approach”, and in 1990 I wrote the Behavior Language which was used by many students in my lab, and at the company iRobot, which I had co-founded in 1990.  In around 2000 an MIT student, Damian Isla, refactored my approach and developed what are now called Behavior Trees. Most video games are now written using Behavior Trees, in the two most popular authoring platforms, Unity and Unreal.
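For readers who have not met them, behavior trees compose small condition and action nodes under sequence nodes (do these in order, stop at the first failure) and selector nodes (try these in order, stop at the first success). A minimal sketch of the core mechanism, with names of my own choosing:

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Ticks children in order; fails or stalls on the first non-success."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Selector:
    """Ticks children in order; succeeds or stalls on the first non-failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# A tiny tree: pick a part if one is present, otherwise keep waiting.
tree = Selector(
    Sequence(Action(lambda: SUCCESS),      # condition: part present?
             Action(lambda: SUCCESS)),     # action: pick it up
    Action(lambda: RUNNING),               # fallback: keep waiting
)
print(tree.tick())
```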

For version five of our robot software platform at Rethink Robotics, called Intera (for interactive), we represented the “program” for the robot as a Behavior Tree. While it was possible to author a behavior tree using a graphical user interface (there was no text version of a tree), it was also possible to simply show the robot what you wanted it to do; an AI system then made inferences about what was intended, sometimes asked you to fill in some values using simple controls on the arm of the robot, and automatically generated a Behavior Tree.  It was up to the human showing the robot the task whether they ever looked at or edited that tree.  Here you can see Intera 5 running not only on our robot Sawyer, but also on the two other commercial robots at the time that had force sensing.  When my old officemate and developer of AL, Ron Goldman, saw this system he proclaimed something like “it’s POINTY on steroids!”.

Baxter and Sawyer were the first safe robots that did not require a cage to keep humans away from them for the humans’ protection. And Sawyer was the first modern industrial robot that finally got away from being controlled by a computer-style language, as all robots had been since the idea was first developed at the Stanford AI Lab back in the very early seventies.

There is still a lot remaining to be done.

 



[1] I had written a really quite abysmal master’s thesis in machine learning in 1977 in Adelaide, Australia.  Somehow, through an enormous stroke of luck I was accepted as one of twenty PhD students in the entering class in Computer Science at Stanford University in 1977.  I showed up for my first day of classes on September 26th, 1977, and that day I was also one of three new students (only three that whole year!!) to start as research assistants at the Stanford Artificial Intelligence Laboratory (SAIL). At that time the lab was high above the campus on Arastradero Road, in a wooden circular arc building known as the D. C. Power Building.  It has since been replaced by a horse ranch.  [And there never was an A. C. Power Building at Stanford…]  It was an amazing place with email, and video games, and computer graphics, and a laser printer, and robots, and searchable news stories scraped continuously from the two wire services, and digital audio at every desk, and a vending machine that emailed you a monthly bill, and a network connected to about 10 computers elsewhere in the world, all in 1977. It was a time machine that only traveled forwards.

I was a graduate student at SAIL until June 1981, then a post-doc in the Computer Science Department at Carnegie Mellon University for the summer, and then I joined the MIT Artificial Intelligence Laboratory for two years. In September 1983 I was back at Stanford as a Computer Science faculty member, where I stayed for one year, before joining the faculty of the Electrical Engineering and Computer Science Department back at MIT, where I was again a member of the MIT Artificial Intelligence Laboratory.  I became the third director of the lab in 1997, and in 2003 I became director of a new lab at MIT formed by joining the AI Lab and the Lab for Computer Science: CSAIL, the Computer Science and Artificial Intelligence Lab.  It was then, and remains now, the largest single laboratory at MIT, with well over 1,000 members.  I stepped down as director in 2007, and remained at MIT until 2010.

All the while, beginning in 1984, I have started six AI and robotics companies in Boston and Palo Alto.  Since 1984 the total time I have not been part of any startup is about six months.

[2] Sometimes I was their junior colleague, and at times I became their “manager” on org charts which bore no relation at all to what happened on the ground.  I also met countless other well known people in the field at conferences or when I or they visited each other’s institutions and companies.

[3] I often used to sit next to Doug Ross at computer science faculty lunches at MIT on Thursdays throughout the eighties and nineties. We would chat amiably, but I must admit that at the time I did not know of his role in developing automation! What a callow youth I was.

[4] I met Joe Engelberger many times, but he never seemed to quite approve of my approach to robotics, so I cannot say that we had a warm relationship.  I regret that today, and I should have tried harder.

[5] Ken Olsen left MIT to found DEC in 1957, based on his invention of the minicomputer. DEC built a series of computers with names in sequence PDP-1, PDP-2, etc., for Programmed Data Processor. The PDP-6 and PDP-10 were 36 bit mainframes. The PDP-7 was an eighteen bit machine upon which UNIX and the C language were developed at Bell Labs. The PDP-8, a 12 bit machine, was cheap enough to bring computation into individuals’ research labs, and the smaller successor, the PDP-11, really became the workhorse small computer before the microprocessor was invented and developed. Ken lived in a small (by population size) town, Lincoln, on the route between Lexington and Concord that was taken on that fateful day back in 1775. I also lived in that town for many years and some Saturday mornings we would bump into each other at the checkout line of the town’s only grocery store, Donelan’s. Often Ken would be annoyed about something going on at MIT, and he never failed to explain to me his position on whatever it happened to be. I wish I had developed a strategy of asking him questions about the early days of DEC to divert his attention.

[6] For many years at MIT I took the Vicarm, that Victor had left behind, to class once a year when I would talk about arm kinematics. I used it as a visual prop for explaining forward kinematics, and the need for inverse kinematics.  I never once thought to take a photograph of it. Drat!!!

 

Predictions Scorecard, 2021 January 01

rodneybrooks.com/predictions-scorecard-2021-january-01/

On January 1st, 2018, I made predictions about self driving cars, Artificial Intelligence and machine learning, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

I was accused of being a pessimist, but I viewed what I was saying as being a realist. Out of emotion I wanted to give some other predictions that went against the status quo expectations at the time. Fortunately I was able to be rational, and so I made only one prediction that I view in retrospect as being hot-headed (and I ‘fess up to it in the AI&ML table in this post). Anyone can, and they do, make predictions. But usually they are not held to those predictions. So I am holding myself accountable to my own predictions.

As part of self certifying the seriousness of my predictions I promised to review them, as made on January 1st, 2018, every following January 1st for 32 years, the span of the predictions, to see how accurate they were. This is my third annual review and self appraisal, following those of 2019 and 2020. Only 29 to go!

I think in the three years since my predictions, there has been a general acceptance that certain things are not as imminent or as inevitable as the majority believed just then. So some of my predictions now look more like “of course”, rather than “really, that long in the future?” as they did in 2018.

This is a boring update. Despite lots of hoopla in the press about self driving cars, Artificial Intelligence and machine learning, and the space industry, this last year, 2020, was not actually a year of any surprises.

The biggest news of all is that SpaceX launched humans into orbit twice in 2020. It was later than they had predicted, but within my predicted timeframe. Neither they nor I are surprised or shocked at their accomplishment, but it is a big deal. It changes the future space game for the US, Canada, Japan, and Europe (including the UK…). Australia and New Zealand too. It gives countries in Asia, South America and Africa more options on how they plot their space ambitions. My heartfelt congratulations to Elon and Gwynne and the whole gang at SpaceX.

This year’s summary indicates that so far none of my predictions (except for the one hot headed one I mentioned above) have turned out to be too pessimistic. As I said last year, overall I am getting worried that I was perhaps too optimistic, and had bought into the hype too much.

An aside.
Some might claim that 2020 was special because of Covid, and that any previous predictions, made by others, more optimistic than I, should not count, because something unforeseen happened. No, I will not let you play that game. The reason that predictions turn out to be overly optimistic is because the people doing the predictions underestimated the uncertainties in bringing any technology to fruition. If we allow them to claim Covid got in the way, we should allow them to claim that something was harder than they expected and that got in the way, or the market didn’t like their product and that got in the way. Etc. My predictions are precisely about the unexpected having bigger impact than other people making predictions expect. Covid is just one of a long line of things people do not expect.
Repeat of 2019’s Explanation of Annotations

As I said in 2018, I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below from 2020’s update post, and have simply added a total of eight comments to the fourth columns of the three tables. As with last year I have highlighted dates in column two where the time they refer to has arrived.

I tag each comment in the fourth column with a cyan colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019.

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML meaning “Not In My Lifetime”, i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements when it can be determined already whether I was correct or not.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green, the same ones as last year, with no new ones in January 2021.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. No Tomato dates yet. But if something happens that I said NIML, for instance, then it would go Tomato, or if in 2020 something already had happened that I said NET 2021, then that too would have gone Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa).

In summary then: Green splashes mean I got things exactly right. Red means provably wrong and that I was too pessimistic. And blueness will mean that I was overly optimistic.

So now, here are the updated tables.

Self Driving Cars

Oh gosh.  The definition of a self driving car keeps changing.  When I was a boy, three or four years ago, it meant that the car drove itself. Now it means that a company issues a press release saying they have deployed self driving cars, reporters give breathless headlines about deployment of said self driving cars, and then bury deep within the story that actually there is a person in the loop, sometimes in the car itself, and sometimes remotely.  Oh, and the deployments are in the most benign public suburbs imaginable, and certainly nowhere that a complete and sudden emergency stop is an unacceptable action.  And no mention of restrictions on time of day, or weather, or particular roads within the stated geographical area.

Dear technology reporters, after this table, where almost nothing has changed, save for two comments dated Jan 1st 2021, I’ll give you some questions to ask before you anoint corporate press releases as breaking technology stories.  Just saying.

Prediction [Self Driving Cars] | Date | 2018 Comments | Updates

A flying car can be purchased by any US resident if they have enough money. | NET 2036 | There is a real possibility that this will not happen at all by 2050. |
Flying cars reach 0.01% of US total cars. | NET 2042 | That would be about 26,000 flying cars given today's total. |
Flying cars reach 0.1% of US total cars. | NIML | |
First dedicated lane where only cars in truly driverless mode are allowed on a public freeway. | NET 2021 | This is a bit like current day HOV lanes. My bet is the left most lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane. | 20210101 It didn't happen any earlier than 2021, so I was technically correct. But I really thought this was the path to getting autonomous cars on our freeways safely. No one seems to be working on this...
Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive | NET 2024 | |
First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day. | NET 2022 | The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only. | 20190101 Although a few such services have been announced every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service--they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely. 20200101 During 2019 Waymo started operating a "taxi service" in Chandler, Arizona, with no human driver in the vehicles. While this is a big step forward see comments below for why this is not yet a driverless taxi service. 20210101 It wasn't true last year, despite the headlines, and it is still not true. No, not, no.
Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities | NET 2025 | A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense. |
Such "taxi" service as above in 50 of the 100 biggest US cities. | NET 2028 | It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out. |
Dedicated driverless package delivery vehicles in very restricted geographies of a major US city. | NET 2023 | The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles. |
A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment. | NET 2023 | The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure. |
A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area. | NET 2032 | This is what Uber, Lyft, and conventional taxi services can do today. |
Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY. | NET 2035 | Unless parking and human drivers are banned from those areas before then. |
A major city bans parking and cars with drivers from a non-trivial portion of a city so that driverless cars have free rein in that area. | NET 2027, BY 2031 | This will be the starting point for a turning of the tide towards driverless cars. |
The majority of US cities have the majority of their downtown under such rules. | NET 2045 | |
Electric cars hit 30% of US car sales. | NET 2027 | |
Electric car sales in the US make up essentially 100% of the sales. | NET 2038 | |
Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph. | NIML | There might be some small demonstration projects, but they will be just that, not real, viable mass market services. |
First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked. | NIML | Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die. |

I’m still not counting the Waymo taxi service in Chandler, a suburb of Phoenix, as a driverless taxi service. It has finally been opened to members of the public, and has no human driver actually in the car, but it is not a driverless service yet. What about the October 8th headline: Waymo finally launches an actual public, driverless taxi service, with lede Fully driverless technology is real, and now you can try it in the Phoenix area? No. It is not fully driverless. When you read down in the story you eventually find that, just as for the last year with a more restricted customer base, there is still a human in the loop. The story proudly proclaims that “[m]embers of the public who live in the Chandler area can hail a fully driverless taxi today”, but a few sentences later says the economics may not work out “because Waymo says the cars still have remote overseers”. That would be a person in the loop. Although Waymo doesn’t give a straight answer about how many cars a single overseer oversees, they refuse to say directly that it is more than one. I.e., they are not willing to say that there is less than one full person devoted to overseeing the control of each “driverless” taxi. This is not a driverless taxi service. Tech press headlines be damned. Oh, and by the way, Waymo’s aim was to get to 100 rides per week by the end of 2020. Not exactly a large scale deployment, not open to the public at any sort of scale, and certainly not driverless.

But what about this December 9th headline: Cruise is now testing fully driverless cars in San Francisco? Again it pays to read the story. Carefully. There is a Cruise employee in the passenger seat of every one of the “fully driverless” cars and they can stop the car whenever they choose. And just to be sure, every vehicle is also monitored by a remote employee. That is two safety humans for every one of these so-called fully driverless cars. A story from the same day in the Washington Post reports that the testing will only be in the Sunset district of the city, and only on certain streets. SF residents know that that is a very residential area with very little traffic, and four way stop signs at intersections every block. The traffic moves very slowly even when the streets are deserted, and it is always safe to come to a complete stop.

Reporters should ask all companies saying they are testing fully driverless cars, whether as a taxi service or not, the following questions, at least.

  • Is there an employee in the “driverless car”? Are they able to stop the car or take over in any way?
  • Is there a remote person able to monitor the car? Are they able to take over in any way, even just commanding that the car stop, or being able to set a way point which will let the car get out of a stuck situation? How many cars does such a remote person monitor today (as distinct from aspirations for how many in the future)?
  • Is there a chase vehicle following the driverless car to prevent rear end collisions from other random drivers? (I understand that Cruise does this in San Francisco; and I think I have seen it myself with a Waymo vehicle in the last week, but I may have misinterpreted.) If there is a chase vehicle does someone in that vehicle have the ability to command the driverless vehicle to stop? Note that a chase vehicle is a whole second car driven by a person just to enable the driverless car to be safe…
  • Are there any restrictions on the time of day that the system operates?
  • Will testing change if there is really bad weather or will it go on regardless?
  • What is the area that the test extends over, and are there any streets that the cars will not go on?
  • If it is a taxi service are there designated pickup and drop off sites, or can those be anywhere? And “anywhere” includes 3 meters further along at the passenger’s request (regular taxis and rideshare services do this).
  • Once a passenger gets in can they change the desired destination? Can they abort the ride and get out whenever they want to?
  • Can the passenger communicate with the car (not a remote person) by voice?

Most of these questions will have answers that the PR people, at this point, will not want to be forthcoming about. Sometimes they will be “unable to answer” even when it is a simple yes/no question (e.g., see the Cruise story linked in the questions above). That is likely not a sign that they do not actually know the answer…

The only company that says it has tested fully driverless cars, with no chase vehicles, on California roads is Nuro, but that is a long way from a taxi service since they use special purpose tiny vehicles that cannot accommodate even one person.

Towards the end of the year AutoX, a company based in China, said they are operating a driverless taxi service in a number of cities. First note, however, that the service is not open to the public, and is only available for demonstrations by company employees. They say they have removed both drivers in the cars and remote drivers, but none of the articles I have seen have answered the comprehensive set of questions I posed above, including whether there is a person doing remote monitoring rather than remote driving. The one video I have seen is from inside an empty vehicle as it drives on empty(!!) roads. I have spent a lot of time in China in a lot of different cities, and I have never seen an empty road.

I am not saying there is anything wrong with these small steps towards driverless cars. They are sensible, and necessary, steps along the way. But if you look at the headlines, and even the first few paragraphs of most stories about driverless cars, they breathlessly proclaim that the future is already here. No it is not. Read the fine print.  We are very far from an actual driverless taxi service that can compete with a traditional taxi company (which has to make money), or even with one of the ride share companies, which themselves are still losing money. The deployed demonstration systems have many more people per car working full time than any of these other systems, are only operating a tiny number of rides per day, and would not be profitable even if each ride was charged at a thousand times the rate charged by the incumbents. We are still a long, long way from a viable profitable autonomously driven taxi service.

The last two years have seen a real shakeout in expectations for deployment of self driving cars.  The most recent example of this was in December 2020 when Uber sold their self driving car efforts for minus $400 million. That’s right.  They gave away the division working on self driving cars, once thought to be the way to finally make Uber profitable, and sent along with it $400 million in cash.  [[… If only they had asked me. I might have bought it for minus $350 million! A relative bargain for Uber!]]

To illustrate how predictions have been slipping, here is a slide that I made for talks, based on a snapshot of predictions about driverless cars from March 27, 2017. The web address still seems to give the same predictions with a couple more at the end that I couldn’t fit on my slide. In parentheses are the years the predictions were made, and in blue are the dates for when the innovation was predicted to happen. The orange arrows point at predictions that I have since seen companies retract. I may have missed some retractions. The dates highlighted in red have passed, without coming to fruition. None of the other predictions have so far borne fruit.

In summary, twelve of the twenty-one predictions have passed and vanished into the ether. Of the remaining nine I am confident that the three for 2021 will also not be attained. And then there are none that come due until 2024. I think it is fair to say that predictions for autonomous vehicles in 2017 were wildly overoptimistic. My blog posts of January 2017 and July 2018 tried to tamp them down by talking about deployment issues that were not being addressed, and that, in my opinion, needed large amounts of research to solve. It turns out that before you even get to my concerns, the long tail of out of the ordinary situations (and not even the really long tail, just some pretty common aspects of normal driving in most cities and on freeways) has delayed the revolution.

Two more car topics

A year ago, in my update at the start of 2020, I had a little rant about a prediction that Elon Musk had made about how there would be a million Tesla robo-taxis on the road by the end of 2020. His prediction certainly helped Tesla’s stock price. Here is my rant.

<rant>
At the same time, however, there have been more outrageously optimistic predictions made about fully self driving cars being just around the corner. I won’t name names, but on April 23rd of 2019, i.e., less than nine months ago, Elon Musk said that in 2020 Tesla would have “one million robo-taxis” on the road, and that they would be “significantly cheaper for riders than what Uber and Lyft cost today”. While I have no real opinion on the veracity of these predictions, they are what is technically called bullshit. Kai-Fu Lee and I had a little exchange on Twitter where we agreed that together we would eat all such Tesla robo-taxis on the road at the end of this year, 2020.
</rant>

Well, surprise, everyone!  2020 has come and gone and there are not a million Tesla robo-taxis on the road. There are zero. I, and Kai-Fu Lee, both had complete confidence in our predictions of zero. And we were right. Don’t worry. Undaunted by complete and abject failure to predict accurately, twice again Elon has new predictions. In July he predicted level 5 autonomy in 2020, and in December he made predictions of full autonomy in 2021, with his normal straight face. Those predictions have again pumped up Tesla’s stock price. Never give up on a winning strategy!

Finally I want to say something about flying cars.  As with “self driving cars” the definition has been changing as PR departments for various companies try to claim progress in this domain. My predictions were about cars that could drive on roads, and could take to the air and fly when their driver wanted them to. Over the last couple of years “flying car” has been perverted to mean any small one- or two-person flying machine, usually an oversize quadcopter or octodecacopter, or some such, that takes off and lands, and then stays exactly where it is until it takes off again. These are not cars. They are helicopters. With lots of rotors. They are not flying cars.

Artificial Intelligence and Machine Learning

I had not predicted any big milestones for AI and machine learning for the current period, and indeed there were none achieved. In the world of AI hype that is a bold statement to make. But I really do believe it. There have been overhyped stupidities, and I classify GPT-3 as one; see the analysis at the end of this section.

No research results from this last year are easily understood by someone not in the thick of the actual research. Compare this to the original introduction of the perceptron, or back propagation, or even deep learning. Geoff Hinton was able to convey the convergence of the three or four ideas that came to be known as deep learning in a single one hour lecture that a non-specialist could understand and appreciate. There have been no such innovations in the last year. They are all down in the weeds.

I am not criticizing research being down in the weeds. That is how science and engineering gets done. Lots and lots of grunt work making incremental improvements. But thirty years from now we probably won’t be saying, in a positive way, “remember AI/ML advances in 2020!”  This last year will blend and become indistinguishable from many adjacent years.  Compare that to 2012, the first time Deep Learning was applied to ImageNet. The results shocked everyone and completely changed two fields, machine learning and computer vision. Shock and awe that has stood the test of time.

In last year’s review I had some thoughts on, and links to, things such as the growing carbon footprint of deep learning, comparisons to how quickly people learn, and the non-generalizability of games that are learned with reinforcement learning. I also argued why game playing has been such a success for DeepMind, and why it will probably not generalize as well as people expect. These comments occurred both just before and just after last year’s version of the table below. I stand by all those comments and invite you to go look at them.

In the table below I have added only three items, two of which relate to how people look at AI and ML, and one to a missing capability, rather than technical advances.

Prediction [AI and ML] | Date | 2018 Comments | Updates

Academic rumblings about the limits of Deep Learning | BY 2017 | Oh, this is already happening... the pace will pick up. | 20190101 There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table. 20200101 Go back to last year's update to see them.
The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play. | BY 2018 | | 20190101 Likewise some technical press stories are linked below. 20200101 Go back to last year's update to see them.
The popular press starts having stories that the era of Deep Learning is over. | BY 2020 | | 20200101 We are seeing more and more opinion pieces by non-reporters saying this, but still not quite at the tipping point where reporters come out and say it. Axios and WIRED are getting close. 20210101 While hype remains the major topic of AI stories in the popular press, some outlets, such as The Economist (see after the table), have come to terms with DL having been oversold. So we are there.
VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning". | NET 2021 | I am being a little cynical here, and of course there will be no way to know when things change exactly. | 20210101 This is the first place where I am admitting that I was too pessimistic. I wrote this prediction when I was frustrated with VCs and let that frustration get the better of me. That was stupid of me. Many VCs figured out the hype and are focusing on fundamentals. That is good for the field, and the world!
Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. | NET 2023, BY 2027 | Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out. | 20210101 So far I don't see any real candidates for this, but that is OK. It may take a while. What we are seeing is new understanding of capabilities missing from the current most popular parts of AI. They include "common sense" and "attention". Progress on these will probably come from new techniques, and perhaps one of those techniques will turn out to be the new "big thing" in AI.
The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML. | NET 2022 | I wish, I really wish. |
Dexterous robot hands generally available. | NET 2030, BY 2040 (I hope!) | Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years. |
A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc. | Lab demo: NET 2026; Expensive product: NET 2030; Affordable product: NET 2035 | What is easy for humans is still very, very hard for robots. |
A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution. | NET 2028 | There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots. |
A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door. | Lab demo: NET 2025; Deployed systems: NET 2028 | |
A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns. | Lab demo: NET 2023; Deployed systems: 2025 | Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to widespread deployment. |
An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse. | NET 2030 | I will need a whole new blog post to explain this... |
A robot that seems as intelligent, as attentive, and as faithful, as a dog. | NET 2048 | This is so much harder than most people imagine it to be--many think we are already there; I say we are not at all there. |
A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans. | NIML | |

There have been many criticisms in the popular press over the last year about how AI is not living up to its hype. Perhaps the most influential is from The Economist. The headline was: An understanding of AI’s limitations is starting to sink in, with a lede of After years of hype many people feel that AI has failed to deliver. Such rationality has not stopped other breathless stories in outlets that should know better, such as the AAAS journal Science, and sometimes even in Nature. The ongoing amount of hype is depressing. And it is mostly inaccurate.

Attention

In the table above I mentioned that both common sense and attention have surfaced as things that current ML methods lack and do not do well at.

Although learning everything from scratch has become a fetish in ML (see Rich Sutton and my response, both from 2019) there are advantages to using extra built-in machinery. And of course, whenever any ML-er shows off their system learning something new, they have been the external built-in machinery. They, the human, chose the task, the reward, and the data set or simulator, with what was included and what was not. Then they say “see, it did it all by itself!”. Ahem.

We have seen reinforcement learning (RL) used to learn manipulation tasks, but those tasks are contrived by the humans writing the papers, and learning them can be extraordinarily expensive in terms of cloud computation time. How do children learn to manipulate?  They do it in evolutionarily pre-guided ways. They learn simple grasps, then they learn the pinch between forefinger and thumb, and along the way they learn to direct force to those grasps from their arm muscles. And in some seemingly magical way, they are able to use these earlier skills as building blocks.

By the time they are 18 months old (and I just observed this in my own grandson), presented with a door locked with an unknown mechanism (four different instances in this particular anecdote), children use affordances to know where to grasp and how to grasp. They do not apply arbitrary forces to the wood of the door, except to try to pull it open, testing whether they have gotten the lock into the unlocked state. Instead they apply seemingly random forces and motions (just like a machine RL learner!) to just the lock mechanism, experimenting with different grasps and forces. They direct their manipulation attention to the visible lock mechanism, rather than treating the whole thing as one big point cloud as a machine RL learner might. They know where to pay attention.

Of course this strategy might fail, and that is precisely the allure of secret lock wooden boxes. But usually it works. And the search space for the learner is a couple of orders of magnitude smaller than the “learn everything” approach. When the bill for cloud computer time is tens of millions of dollars, as it is for some “advances” promoted by large companies, an order of magnitude or two can make a real cash difference.

We need to figure out the right mechanisms for attention and common sense and build those into our learning systems if we are to build general purpose systems.

GPT-3 (or the more things change the more they are the same…)

Will Bridewell has likened GPT-3 to a ouija board, and I think that is very appropriate. People see in it what they wish, but there is really nothing there.

<rant>

GPT-3 (Generative Pre-Trained Transformer 3) is a BFNN (that’s a technical term) that has been fed about 100 billion words of text from many sources. It clusters them, and then when you give it a few words it rambles on, completing what you have said. Reporters have seen brilliance in these rambles (one should look at the reports about Eliza from the 1960s), and some have gone as far as taking various output sentences, putting them together in an order chosen by the journalist, and claiming that the resultant text is what GPT-3 said.  Disappointingly, The Guardian did precisely this, getting GPT-3 to generate 8 blocks of text of about 500 words each; the journalists then edited them together into about 1,100 words and said that GPT-3 had written the essay. Which was true in only a very sort-of way. And some of these journalists (led on by the PR people at OpenAI, a misnamed company that produced GPT-3) go all tingly about how it is so intelligent that it is momentous and dangerous all at once; oh, please, the New York Times, not you too!! This is BS (another technical term) plain and simple. Yes, I know, the Earth is flat, vaccines will kill you, and the world is run by pedophiles. And if I wasn’t such a bitter pessimist I would hop on the GPT-3 train of wonder, along with all those glorious beliefs promoted by a PR department.
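Mechanically, the rambling is just repeated next-word prediction. Here is a toy bigram version of the idea, my own illustration and vastly simpler than GPT-3’s actual transformer architecture, but of the same essential character: words and their statistical relationships, and no model of the world:

```python
import random
from collections import defaultdict

corpus = "the robot moved the pin to the hole and the robot dropped the pin".split()

# Count which word follows which: a crude stand-in for a learned language model.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def complete(prompt, n=8):
    """Given a few words, keep sampling a plausible next word."""
    words = prompt.split()
    for _ in range(n):
        choices = following.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(complete("the robot"))   # rambles on, fluently and mindlessly
```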

Not surprisingly, given the wide variety of sources of its 100 billion training words, GPT-3’s language is sometimes sexist, racist, and otherwise biased. But people who believe we are on the verge of fully general AI see the words it generates, which are rehashes of what it has seen, and believe that they are intelligence. A ouija board. Some of its text fragments have great poetry to them, but they are often unrelated to reality, and if anyone is even a little skeptical of it, they can input a series of words where its output clearly shows it has zero of the common human level understanding of those words (see Gary Marcus and Ernie Davis in Technology Review for some examples). It just has words and their relationships, with no model of the world. The willfully gullible (yes, my language is harsh here, but the true believers and journalists who rave about GPT-3 are acting like cult followers--it is a mirage, people!) see true intelligence, but the random rants it produces (worse than my random rants!) will not let it be part of any serious product.

And finally, GPT-n was actually invented, using neural networks, back in 1956. It just needed more and bigger computers to fool more of the people more of the time. In that year, Claude Shannon and John McCarthy published a set of papers (by thirteen mostly very famous authors including themselves, John von Neumann, and Marvin Minsky) titled Automata Studies, under the imprint of Princeton University Press (Annals of Mathematical Studies, Number 34).

I love that the first paragraph of their preface begins and ends:

Among the most challenging scientific questions of our time are the corresponding analytic and synthetic problems. How does the brain function? Can we design a machine which will simulate a brain? … Currently it is fashionable to compare the brain with large scale electronic computing machines.

Apart from the particular sparse style of their language, an equivalent passage could easily be thought to have been written by any number of modern researchers, sixty-four years later.

On page vi of their preface they criticize a GPT-n like model of intelligence, one that is based on neural networks (the words used in their volume) and able to satisfy the Turing Test (not yet quite called that) without really being a thinking machine:

A disadvantage of the Turing definition of thinking is that it is possible, in principle, to design a machine with a complete set of arbitrarily chosen responses to all possible input stimuli (see, in this volume, the Culbertson and the Kleene papers). Such a machine, in a sense, for any given input situation (including past history) merely looks up in a “dictionary” the appropriate response. With a suitable dictionary such a machine would surely satisfy Turing’s definition but does not reflect our usual intuitive concept of thinking.

The machine they refer to here, a neural network, is just like the trained GPT-3 network, though Culbertson and Kleene, in 1956, had not figured out how to train it.
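Their “dictionary” machine is easy to make concrete. A deliberately silly sketch of my own:

```python
# A Culbertson/Kleene style "dictionary" responder: the entire
# conversation history is the key, and the response is looked up,
# never computed. With a big enough table such a machine satisfies
# Turing's definition without anything we would intuitively call thinking.
dictionary = {
    ("Hello",): "Hi there!",
    ("Hello", "How are you?"): "Fine, thanks. And you?",
}

def respond(history):
    return dictionary.get(tuple(history), "Hmm, tell me more.")

print(respond(["Hello", "How are you?"]))
```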

Stephen Kleene was a very famous logician, and an academic Ph.D. sibling of Alan Turing, both students of Alonzo Church. James Culbertson is a forgotten engineer who speculated on how to build intelligent machines using McCulloch-Pitts neurons, here and elsewhere. I rehabilitate him a little in my upcoming book.

S. C. Kleene, Representation of Events in Nerve Nets and Finite Automata, pp 3-41.

James T. Culbertson, Some Uneconomical Robots, pp 99-116.

</rant>

AlphaFold

Finally, I want to mention AlphaFold, which right at the end of the year got incredible press, breathless incredible press I would say, about “solving” an outstanding question in science.

Given a sequence of amino acids making up a protein, how will that protein fold due to physical forces between nearby amino acids? This is a really important question. The three dimensional structure, what is inside, what is on the outside, and what adjacencies there are on the surface of the folded molecule, determines how that protein will interact with other molecules. Thus it gives a clue as to how drugs will interact with a protein.  People in Artificial Intelligence have worked on this since at least the early 1990s (e.g., see the work of Tomás Lozano-Pérez and Tom Dietterich).

AlphaFold (from DeepMind, the subsidiary of Google that is losing the better part of a billion dollars per year) is a real push forward. But it does not solve the problem of predicting protein folds.  It is very good in some cases, and very poor in others, and you don’t know which it is unless you already know the real answer. You can see a careful (and evolving) analysis here. It is long, so you can skip to the four paragraphs of conclusion to hear about both the good and the bad. It likely will not revolutionize drug discovery without a lot more work on it, and perhaps never will.

While AlphaFold is another interesting “success” for machine learning, it does not advance the fields of either AI or ML at all. And its long term impact is not yet clear.

Space

In 2020 SpaceX came back from the explosion of a Crew Dragon capsule that happened during a ground test on April 20th, 2019.  They were able to launch their first two people to space aboard Demo-2 on May 30th, 2020.  Bob Behnken and Doug Hurley, both NASA astronauts, docked with the ISS on May 31st, and stayed there until their return and splashdown on August 2nd.  On November 16th, Crew-1 launched with three NASA astronauts and one JAXA astronaut on board, and is currently docked to the ISS where the astronauts are working.

Boeing’s entry into manned space flight, the CST-100, is behind SpaceX. A software problem in December 2019 meant that an unmanned test flight did not make it to the ISS, though it did return safely to Earth. In April of 2020 Boeing announced it would refly the unmanned test in the fourth quarter of 2020. That has now slipped to the first quarter of 2021 due to further software issues. I think Boeing is one glitch away from not getting to a manned flight in 2021. So my optimism for them is going to be tested.

There was only one attempt at a manned suborbital flight in 2020 but it did not get to space. On December 12th the rocket motor of Virgin Galactic’s SpaceShipTwo Unity did not ignite after the craft was dropped as planned from its mothership at over 40,000 feet in altitude.

On October 13th, 2020, Blue Origin’s New Shepard 3 had its 7th successful unmanned test flight. A new craft, New Shepard 4, was slated to undergo an unmanned test flight in the last two months of the year, as the 14th test flight of a New Shepard system. That did not happen. New Shepard 4 is currently expected to fly the first manned mission for Blue Origin.

Prediction [Space] | Date | 2018 Comments | Updates

Next launch of people (test pilots/engineers) on a sub-orbital flight by a private company. | BY 2018 | | 20190101 Virgin Galactic did this on December 13, 2018. 20200101 On February 22, 2019, Virgin Galactic had their second flight to space of their current vehicle, this time with three humans on board. As far as I can tell that is the only sub-orbital flight of humans in 2019. Blue Origin's New Shepard flew three times in 2019, but with no people aboard, as in all its flights so far. 20210101 There were no manned suborbital flights in 2020.
A few handfuls of customers, paying for those flights. | NET 2020 | | 20210101 Things will have to speed up if this is going to happen even in 2021. I may have been too optimistic.
A regular sub weekly cadence of such flights. | NET 2022, BY 2026 | |
Regular paying customer orbital flights. | NET 2027 | Russia offered paid flights to the ISS, but there were only 8 such flights (7 different tourists). They are now suspended indefinitely. |
Next launch of people into orbit on a US booster. | NET 2019; BY 2021; BY 2022 (2 different companies) | Current schedule says 2018. | 20190101 It didn't happen in 2018. Now both SpaceX and Boeing say they will do it in 2019. 20200101 Both Boeing and SpaceX had major failures with their systems during 2019, though no humans were aboard in either case. So this goal was not achieved in 2019. Both companies are optimistic of getting it done in 2020, as they were for 2019. I'm sure it will happen eventually for both companies. 20200530 SpaceX did it in 2020, so the first company got there within my window, but two years later than they predicted. There is a real risk that Boeing will not make it in 2021, but I think there is still a strong chance that they will by 2022.
Two paying customers go on a loop around the Moon, launch on Falcon Heavy. | NET 2020 | The most recent prediction has been 4th quarter 2018. That is not going to happen. | 20190101 I'm calling this one now, as SpaceX has revised their plans from a Falcon Heavy to their still developing BFR (or whatever it gets called), and predict 2023. I.e., it has slipped 5 years in the last year.
Land cargo on Mars for humans to use at a later date | NET 2026 | SpaceX has said by 2022. I think 2026 is optimistic but it might be pushed to happen as a statement that it can be done, rather than for a pressing practical reason. |
Humans on Mars make use of cargo previously landed there. | NET 2032 | Sorry, it is just going to take longer than everyone expects. |
First "permanent" human colony on Mars. | NET 2036 | It will be magical for the human race if this happens by then. It will truly inspire us all. |
Point to point transport on Earth in an hour or so (using a BF rocket). | NIML | This will not happen without some major new breakthrough of which we currently have no inkling. |
Regular service of Hyperloop between two cities. | NIML | I can't help but be reminded of when Chuck Yeager described the Mercury program as "Spam in a can". |

As noted in the table, SpaceX originally predicted that they would send two paying customers around the Moon in 2018, using Falcon Heavy as their launch vehicle. During 2018 they revised that to a new vehicle, now known as Starship, with a date of 2023. Progress is being made on Starship, and the 55m long second stage flew within the atmosphere for more than a simple hop in 2020. It did controlled aerodynamic flight as it came in to land, and impressively and successfully flipped to vertical for a landing. There were some fuel control issues and it did not decelerate enough for a soft landing, and blew up. This is real and impressive progress.

So far we haven’t seen the 64m first stage, though there are rumors of a first flight in 2021. This will be a massive rocket, and given the move fast and blow up strategy that SpaceX has very successfully used for the second stage, we should expect its development to have some fiery moments.

A 55m long vehicle to loop around the Moon seems like a bit of overkill, but that could be the point. However, I will not be surprised at all if the mission slips beyond 2023.  If fantastic progress happens in 2021 I will get more confident about 2023, but 2021 will have to be really spectacular.

 

An Analogy For The State Of AI

rodneybrooks.com/an-analogy-for-the-state-of-ai/

In surveys of AI “experts” on when we are going to get to human level intelligence in our AI systems, I am usually an outlier, predicting it will take ten or twenty times longer than the second most pessimistic person surveyed. Others have a hard time believing that it is not right around the corner given how much action we have seen in AI over the last decade.

Could I be completely wrong?  I don’t think so (surprise!), and I have come up with an analogy that justifies my beliefs.  Note, I started with the beliefs, and then found an analogy that works.  But I think it actually captures why I am uneasy with the predictions of just two, or three, or seven decades, that abound for getting to human level intelligence.  It’s a more sophisticated and detailed version of the story about how building longer and longer ladders will not get us to the Moon.

The analogy is to heavier than air flight.

All the language that follows is expressing that analogy.

Starting in 1956, our AI research got humans up in the air in hot air balloons, in tethered kites, and in gliders. About a decade ago Deep Learning cracked the power-to-weight ratio of engines that let us get to heavier than air powered flight.

If we look at the arc of the 20th century, heavier than air flight transformed our world in major ways.

It had a major impact on war in just its first two decades, and four decades in it completely transformed how wars were fought, and continued that transformation for the rest of the century.

From the earliest days it changed supply chains for goods with high ratios of value to mass.  First it was with mail: the speed of sending long messages (telegraphs were good only for short messages), and then later the speed of getting bank instructions, letters of credit, and receipts, all paper based, around the globe, transforming commerce into a global enterprise. Later it could be used for the supply chains of intrinsically high value goods, including digital devices, and even bluefin tuna.

It also transformed human mobility, giving rise to global business even in the PZ[1] era. But it also gave the richest parts of the world a new level of personal mobility and a new sort of leisure. Airplanes caused overcrowding in Incan citadels and the palaces of divine kings, steadily getting worse as more countries and larger populations reached the wealth threshold of personal air travel. Who knew?  I doubt that this implication was on the mind of either Wilbur or Orville.

Note that for the bulk of the 20th century most heavier than air flight used a human pilot on board, and even today that is still true for the majority of flying vehicles.  The same for AI.  Humans will still be in the loop for AI applications, at one level or another, for many decades.

This analogy between AI and heavier than air flight embodies two things that I firmly believe about AI.

First, there is tremendous potential for our current version of AI. It will have enormous economic, cultural, and geopolitical impact. It will change life for ordinary people over the next many decades. It will create great riches for some. It will lead to different world views for many.  Like heavier than air flight it will transform our world in ways in which we can not yet guess.

Second, we are not yet done with the technology by a long shot. The airplanes of 1910 (and the AI of today) look incredibly dangerous in hindsight; their use was really only for daredevils without too much regard for fatal consequences, and no one but a complete nutcase flies a 1910-technology airplane any more. But the airplanes did get better, and the technology underwent radical changes. No modern airplane engine works at all like those of 1903 or even 1920.  Deep Learning, too, will be replaced over time. It will seem quaint, and barely viable, in retrospect.  And woefully energy inefficient.

But what about human level intelligence?  Well, my friends, I am afraid that in this analogy it lies on the Moon. No matter how much we improve the basic architecture of our current AI over the next 100 years, we are just not going to get there. We’re going to need to invent something else. Rockets. To be sure, the technologies used in airplanes and rockets are not completely disjoint. But rockets are not just overdeveloped heavier than air flying machines. They use a different set of principles, equations, and methodologies.

And now the real kicker for this analogy.  We don’t yet know how deep our gravity well is.  We don’t yet know how to build human level intelligence.  Our current approaches to AI (and neuroscience) might be so far off that in retrospect they will be seen to have been so wacky as to be classified as “not even wrong”. (For those who do not know the reference, that is a severe diss.)

Our gravity well?  For a fixed density of its innards the surface gravity of a planet is proportional to its radius or diameter. If our Earth were twice its actual diameter, gravity would be twice what it is for us, and today’s chemical rockets probably wouldn’t have enough oomph to get people into orbit. Our space faring aspirations would be much harder to achieve, even at the tiny deployed level we have so far, 117 years after our first manned heavier than air flights. But could it be worse than two times? Give Earth a 32,000 mile diameter, rather than our 8,000 mile diameter, and gosh, chemical rockets might never be enough (and we would probably be much shorter and squatter…).  That is how little we know today.  We don’t know what the technology of human level intelligence is going to look like, so we can’t estimate how hard it is going to be to achieve. Two hundred years ago no one could have given a convincing argument on whether chemical powered rockets would get us to the Moon or not.  We didn’t know the relationships between chemical reactions and Earth’s gravity that now give us the answer.
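
For the curious, that scaling claim is a one line computation. A planet of radius \(R\) and uniform density \(\rho\) has mass \(M = \tfrac{4}{3}\pi R^{3}\rho\), so its surface gravity is

\[
g \;=\; \frac{GM}{R^{2}} \;=\; \frac{4}{3}\pi G \rho R ,
\]

which is linear in \(R\): double the diameter at fixed density and you double the surface gravity.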

That human level intelligence up there on the Moon is going to be out of reach for a long, long time. We just do not know for how long, but we surely know that we are not going to fly there on winged aircraft.



1 Pre Zoom.

How Much Things Can Change

rodneybrooks.com/how-much-things-can-change/

This post is about how much things can change in the world over a lifetime. I’m going to restrict my attention to science, though there are many parallels in technology, human rights, and social justice.

I was born in late 1954 so I am 65 years old. I figure I have, with some luck, another 30 years of active intellectual life. But if I look backward and forward within my family, I knew as an adult some of my grandparents who were born late in the nineteenth century, and I expect both that I will know some of my own grandchildren when they are adults, and that they will certainly live into the twenty second century. My adult to adult interactions with members of my own direct genetic line will span five generations and well over two hundred years from the beginning to the end of their collective lives, from the nineteenth to the twenty second century.

I’m going to show how shocking the changes have been in science throughout just my lifetime, how even more shocking the changes have been since my grandparents were born, and by induction speculate on how much more shock there will be during my grandchildren’s lifetimes. All people whom I have known.

Not everything will change, but certainly some things that we treat as truth and obvious today will no longer seem that way by early next century. We can’t know exactly which of them will be discarded, but I will put up quite a few candidates. I would be shocked if at least some of them have not fallen by the wayside a century from now.

How Has Science Changed Since Relatives I Knew Were Born?

My oldest grandparent was born before the Michelson-Morley experiment of 1887 that established that the universe was not filled with aether, but rather with vacuum.

Even when all four of my grandparents were first alive, neither relativity nor quantum mechanics had been thought of.  Atoms with a nucleus of protons and neutrons, surrounded by a cloud of electrons, were unknown. X-rays and other radiation had not been detected.

The Earth was thought to be just 20 to 40 million years old, and it wasn’t until after I was born that the current estimates were first published.

It was two months after my father was born that Edwin Hubble described galaxies and declared that we lived in one of many, in our case the Milky Way. While my father was young the possibility that some elements might be fissile was discovered, along with the idea that some might be fusible. He was an adult back from the war when the big bang theory of the universe was first proposed, and it is only in the last 30 years that alternatives were largely shouted down.

In my lifetime we started out with nine planets, went down to eight, and have now observed thousands of them in nearby star systems. Plate tectonics, which revealed that the continents have not been statically in place throughout history, and which explained both earthquakes and volcanoes, was first hypothesized after I was born.

Crick and Watson determined the structure of DNA just the year before I was born. I was a toddler when Crick hypothesized the DNA to RNA translation and transcription mechanism, in school when the first experimental results showing how it might work came in, and in college before it was mostly figured out. Then it was realized that most animal and plant DNA does not code for proteins, and so it was labeled as junk DNA. Gradually over time other functions for that DNA have been discovered and it is now called non-coding DNA. All its functions have still not been worked out.

I was in graduate school when it was figured out that the split of all life on Earth into prokaryotes (cells without a nucleus) and eukaryotes (cells with a nucleus) was inadequate. All animals, plants, and fungi, belong to the latter class. But in fact there are two very distinct sorts of prokaryotes, both single celled, the bacteria and the archaea. The latter were completely unknown until the first ones were found in 1977. Now the tree of life has archaea and eukaryotes branching off from a common point on a different branch than the bacteria. We are more closely related to the once unknown archaea than we are to bacteria. We had completely missed a major type of living organism; the archaea on Earth have a combined mass of more than three times that of all animals. We were just plain unaware of them–they are predominantly located in deep subsurface environments, so admittedly they were not hiding in plain sight.

In just the last few years we have realized that human bodies contain ten times as many bacterial cells as cells with our own DNA in them, though the bacterial mass is only about 3% of our body weight. Before that we thought we were mostly us.

The physical structure of neurons and the way they connect with each other was first discovered when my grandparents were already teenagers and young adults. The rough way that neurons operate was elucidated in the decade before my birth, but the first paper that laid out a functional explanation of how neurons could operate in an ensemble did not appear until I was already in kindergarten (the “What the Frog’s Eye Tells the Frog’s Brain” paper). Half the cells in brains, the glia, were long thought to be physical support and suppliers of nutrients to the neurons, playing no direct role in what neural systems did. Over the second half of the twentieth century we have come to understand that they play a role in neurotransmission, and modulate all sorts of behavior of the neurons. There is still much to be learned. More recently, small molecules diffusing locally in the brain have been shown to also affect how neurons operate.

What is known in science about cosmology, physics, the mechanisms of life, and neuroscience has changed drastically since my grandparents were born, and has continued to change right up until today. Our scientific beliefs have not been static, and have constantly evolved.

Will science continue to change?

It seems entirely unlikely to me that my grandchildren will one day be able to say that up until they were born scientific ideas came and went with accepted truth regularly changing, but since they were born science has been very stable in its set of accepted truths.

Things will continue to change. Below I have put a few things that I think could change from now into the beginning of the next century. I am not saying that any particular one of these will be what changes. And I would be very surprised if more than half of these will be adopted. But I have selected the ideas that currently gnaw at me and do not feel as solid as some other ideas in science. Some will no doubt become more solid. But it will not surprise me so much if any individual one of these turns into accepted wisdom.

Cosmology:

  • There is no dark matter.
  • The Universe is not expanding.
  • The big bang was wrong.

Physics:

  • There is a big additional part of quantum mechanics to be understood.
  • String theory is bogus.
  • The many worlds interpretation is decided to be confused and discarded.

Life:

  • We discover that there is a common ancestor to archaea, bacteria, and eukarya, a fourth domain of life, that still exists in some places on Earth–and it is clearly a predecessor to the three that we know about now in that it does not have the full modern DNA version of genetics, but instead is a mixture of DNA and RNA based genetics, or purely RNA, or perhaps purely PNA, and has a simplified ancestral coding scheme.
  • We discover life elsewhere in the solar system and it is clearly not related to life on Earth. It is different, with different components and mechanisms, and its abiogenesis was clearly independent of the one on Earth.
  • We detect life on a planet that we can observe in a nearby solar system.
  • We detect an unambiguously artificial signal from much further away.

Neuroscience:

  • We discover the principles of a rich control system in plants that is not based on neurons, but nevertheless explains the complex behaviors of plants that we can observe when we speed up videos of them operating in the world. So much for electro-centrism!
  • We move away from computational neuroscience to a new set of metaphors that turn out both to have better explanatory power and to provide tools that explain otherwise opaque aspects of neural systems.
  • We find not just a different metaphor, but actual new mechanisms in neural systems of which we have not previously been aware, and they become dominant in our understanding of the brain.

How to have impact in the world

If you want to be a famous scientist then figure one of these out. You will redirect major intellectual pursuits of mankind. But, it will be a long lonely road.

 

Peer Review

rodneybrooks.com/peer-review/

This blog is not peer reviewed at all.  I write it, I put it out there, and people read it or not. It is my little megaphone that I alone control.

But I don’t think anyone, or at least I hope that no-one, thinks that I am publishing scientific papers here.  They are my opinion pieces, and only worthwhile if people have found my previous opinions to have turned out to be right in some way.

There has been a lot of discussion recently about peer review. This post is to share some of my experiences with peer review, both as an author and as an editor, from three decades ago.

In my opinion peer review is far from perfect. But with determination, new and revolutionary ideas can get through the peer review process, though it may take some years. The problem is, of course, that most revolutionary ideas are wrong, so peer review tends to stomp hard on all of them. The alternative is to have everyone self publish, and that is what is happening with the arXiv distribution service. Papers are getting posted there with no intent of ever undergoing peer review, and so they are effectively getting published with no review. This can be seen as part of the problem of populism, where all self-proclaimed experts are listened to with equal authority, and so there is no longer any expertise.

My Experience with Peer Review as an Author

I have been struggling with a discomfort about where the herd has been headed in both Artificial Intelligence (AI) and neuroscience since the summer of 1984. This was a time between my first faculty job at Stanford and my long term faculty position at MIT. I am still concerned and I am busy writing a longish technical book on the subject–publishing something as a book gets around the need for full peer review, by the way…

When I got to MIT in the fall of 1984 I shifted my research based on my concerns. A year later I was ready to talk about what I was doing, and submitted a journal paper describing the technical idea and an initial implementation. Here is one of the two reviews.

It was encouraging, but both it and a second review recommended that the paper not be published. That would have been my first rejection.  However, the editor, George Bekey, decided to publish it anyway, and it appeared as:

Brooks, R. A. “A Robust Layered Control System for a Mobile Robot”, IEEE Journal of Robotics and Automation, Vol. 2, No. 1, March 1986, pp. 14–23; also MIT AI Memo 864, September 1985.

Google Scholar reports just under 12,000 citations of this paper, my most cited paper ever. The approach to controlling robots that it proposed, the subsumption architecture, led directly to the Roomba, a robot vacuum cleaner, which with over 30 million sold is the most produced robot ever. Furthermore the control architecture was formalized over the years by a series of researchers, and its descendant, behavior trees, is now the basis for most video games. (Both Unity and Unreal use behavior trees to specify behavior.) The paper still has multi-billion-dollar impact every year.
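
For readers who have not met them, the flavor of behavior trees is easy to convey with a minimal sketch in Python. This is a generic illustration of the idea, not code from Unity or Unreal or from any of the formalizations, and the toy vacuum cleaner policy is invented for the example:

```python
# A minimal behavior tree: a Selector tries its children in order and
# succeeds as soon as one succeeds; a Sequence runs its children in
# order and fails as soon as one fails. Leaves wrap plain callables
# returning True (success) or False (failure).

class Action:
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

class Sequence:
    def __init__(self, *children):
        self.children = children

    def tick(self):
        # all() short-circuits: later children never run after a failure
        return all(child.tick() for child in self.children)

class Selector:
    def __init__(self, *children):
        self.children = children

    def tick(self):
        # any() short-circuits: later children never run after a success
        return any(child.tick() for child in self.children)

# A toy vacuum cleaner policy: recharge if the battery is low,
# otherwise clean. The "sensors" here are stand-in lambdas.
battery_low = Action(lambda: False)   # pretend the battery is fine
recharge    = Action(lambda: True)
clean_floor = Action(lambda: True)

root = Selector(Sequence(battery_low, recharge), clean_floor)
print(root.tick())  # True: the clean_floor branch ran
```

Game engines add richer node types (a running state, decorators, and so on), but this priority-ordered fall-through between layered behaviors is the core of the idea.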

Most researchers who stray, believing the herd is wrong, end up heading off in their own wrong direction. I was extraordinarily lucky to choose a direction that has had incredible practical impact.

However, I was worried at a deeper intellectual level, and so almost simultaneously started writing about the philosophical underpinnings of research in AI, and how my approach differed. There the reviews were more brutal, as is shown in a review here:

This was a review of lab memo AIM-899, Achieving Artificial Intelligence through Building Robots, which I had submitted to a conference. This paper was the first place that I talked about the possibility of robot vacuum cleaners as an example of how the philosophical approach I was advocating could lead to new practical results.

The review may be a little hard to read in the image above. It says:

This paper is an extended, wandering complaint that the world does not view the author’s work as the salvation of mankind.

There is no scientific content here; little in the way of reasoned argument, as opposed to petulant assertions and non-sequiturs; and ample evidence of ignorance of the literature on these questions. The only philosopher cited is Dreyfus–but many of the issues raised have been treated more intelligibly by others (the chair definition problem etc. by Wittgenstein and many successors; the interpreted toy proscription by Searle; the modularity question by Fodor; the multiple behaviors ideas by Tinbergen; and the constructivist approach by Glymour (who calls it computational positivism). The argument about evolution leaks all over, and the discussion on abstraction indicates the author has little understanding of analytic thought and scientific investigation.

Ouch! This was like waving a red flag at a bull. I posted this and other negative reviews on my office door where they stayed for many years. By June of the next year I had added to it substantially, and removed the vacuum cleaner idea, but kept in all the things that the reviewer did not like, and provocatively retitled it Intelligence Without Representation. I submitted the paper to journals and got further rejections–more posts for my door. Eventually its fame had spread to the point that the Artificial Intelligence Journal, the mainstream journal of the field, published it unchanged (Artificial Intelligence Journal (47), 1991, pp. 139–159) and it now has 6,900 citations. I outlasted the criticism and got published.

That same year at the major international conference, IJCAI: International Joint Conference on Artificial Intelligence, I was honored to win the Computers and Thought award, quite a surprise to me, and I think to just about everyone else. With that honor came an invitation to have a paper in the proceedings without the six page limit that applied to everyone else, and without the peer review process that applied to everyone else. My article was twenty-seven pages long, double column, a critical review article of the history of AI, also with a provocative and complementary title, Intelligence Without Reason, (Proceedings of 12th Int. Joint Conf. on Artificial Intelligence, Sydney, Australia, August 1991, pp. 569–595). It now has over 3,100 citations.

My three most cited papers were either rejected under peer review or accepted with no peer review.  So I am not exactly a poster child for peer reviewed papers.

My Experience With Peer Review As an Editor

In 1987 I co-founded a journal, the International Journal of Computer Vision. It was published by Kluwer as a hardcopy journal for many years, but now it is run by Springer and is totally online. It is now in its 128th volume, and has had many hundreds of issues. I co-edited the first seven volumes, which together had a total of twenty-eight issues.

The journal has a very strong reputation and consistently ranks in the top handful of places to publish in computer vision, itself a very hot topic of research today.

As an editor I soon learned a lot of things.

  1. If a paper was purely theoretical with lots of equations and no experiments involving processing an image it was much more likely to get accepted than a paper which did have experimental results. I attributed this to people being unduly impressed by mathematics (I had a degree in pure mathematics and was not as easily impressed by equations and complex notation). I suspected that many times the reviewers did not fully read and understand the mathematics as many of them had very few comments about the contents of such papers. If, however, a paper had experiments with real images (and back then computers were so slow it was rarely more than a handful of images that had been processed), the same reviewers would pick apart the output, faulting it for not being as good as they thought it should be.
  2. I soon learned that one particular reviewer would always read the mathematics in detail, and would always find things to critique about the more mathematical papers. This seemed good. Real peer review. But soon I realized that he would always recommend rejection. No paper was ever up to his standard. Reject! There were other frequent rejecters, but none as dogmatic as this particular one.
  3. Likewise I found certain reviewers would always say accept. Now it was just a matter of me picking the right three referees for almost any paper and I could know whether the majority of reviewers would recommend acceptance or rejection before I had even sent the paper off to be reviewed. Not so good.
  4. I came to realize that the editor’s job was real, and it required me to deeply understand the topic of the paper, and the biases of the reviewers, and not to treat the referees as having the right to determine the fate of the paper themselves. As an editor I had to add judgement to the process at many steps along the way, and to strive for the process to improve the papers, but also to let in ideas that were new. I now came to understand George Bekey and his role in my paper from just a couple of years before.

Peer reviewing and editing is a lot more like the process of one-on-one teaching than it is like processing the results of a multiple-choice exam. When done right it is about coaxing the best out of scientists, encouraging new ideas to flourish, and helping the field to progress.

The UPSHOT?

Those who think that peer review is inherently fair and accurate are wrong. Those who think that peer review necessarily suppresses their brilliant new ideas are wrong. It is much more than those two simple opposing tendencies.

Peer review grew up in a world where there were many fewer people engaging in science than today. Typically an editor would know everyone in the world who had contributed to the field in the past, and would have enough time to understand the ideas of each new entrant to the field as they started to submit papers. It relied on personal connections and deep and thoughtful understanding.

That has changed just due to the scale of the scientific endeavor today, and is no longer possible in that form.

There is a clamor for double blind anonymous review, in the belief that that produces a level playing field. While in some sense that is true, it also reduces the capacity for the nurturing of new ideas. Clamorers need to be careful what they wish for–metaphorically it reduces them to competing in a speed trial, rather than being appreciated for virtuosity. What they get in return for zeroing the risk of being rejected on the basis of their past history or which institution they are from is that they are condemned to forever aiming their papers at the middle of a field of mediocrity, with little chance for achieving greatness.

Another factor is that the number of journals has exploded. Institutions, and sometimes whole countries, decide that the way for them to get a better name for themselves is to have a scientific journal, or thirty. They set them up and install as editor one of their local people, who has no real understanding of the global flow of ideas in the particular field. Now editing becomes a mechanical process, with no understanding of the content of the papers or of the qualifications of those asked to do the reviews. I know this to be true as I regularly get asked to review papers in fields in which I have absolutely no knowledge, by journal editors that I have never heard of, nor of their journal, nor its history. I have been invited to submit a review that could not possibly be a good review. I must infer that other reviews may also not be very good.

I don’t have a solution, but I hope my observations here might be interesting to some.

What Networks Will Co-Evolve With AI and Robotics?

rodneybrooks.com/what-networks-will-co-evolve-with-ai-and-robotics/

Again and again in human history networks spanning physical geography have both enabled and been enabled by the very same innovations. Networks are the catalysts for the innovations and the innovations are the catalysts for the networks. This is autocatalysis at human civilization scale.

The Roman empire brought to people within its expanding borders long distance trade, communication, peace, and stability. Key to this was the network of roads and ports, with many of the roads surviving as the routes of modern transportation systems. And the stability of that network was made possible by the very things that the empire brought.

The Silk Road, a network of trade routes, enabled many civilizations that themselves supported the continued existence of those trade routes.

In the eighteenth century England’s network of canals enabled the delivery of raw materials and of coal for power, and gave access to ports for the finished goods, enabling the industrial revolution and its invention of factories. The canals were built with the wealth of the factory owners, who formed syndicates to finance them.

Later the train network enhanced and replaced the canal network in England. And building a train network in the United States enabled large scale farming in the mid-west to have access to markets on the east coast, and later to ports on both coasts to make the US a major source of food. At the same time, a second network, the telegraph, was overlaid on the same physical route system, first to operate the train network itself and later to form the basis of new forms of communications.

As telephone networks were later built they ushered in a world in which commerce and general business, rather than farming, became the principal industry. And as business grew, the need for more extensive telephone networks with more available lines grew with it.

When Henry Ford started mass producing automobiles he realized that a network of roads was necessary for the masses to have somewhere to drive. And as there were more and more roads the demand for automobiles increased. As a side effect the roads came to replace much of the rail network for moving goods around the country.

The personal computer of the 1980’s was not ubiquitous in ordinary households until it was coupled to the second generation data packet network that had started out as a reliable communications network for the military and for sharing scarce computer resources in academia. The pull on network bandwidth led to rapid growth of the Internet, and that enabled the World Wide Web, a network of information overlaid on the data packet network, which gave a real impetus for more people to own their own computer.

As commerce started to be carried out on the Web, demand rose even more, and ultimately large data centers needed to be built as the backend of that commerce system. Then those data centers got offered to other businesses and cloud computing became a network of computational resources, on top of what had been a network for moving data from place to place.

Cloud computing enabled the large scale training needed for deep networks, a computational technique very vaguely inspired by the network of neurons in the brain. Deep networks are what many people call AI today. Those networks and their demands for computation for training are driving the growth of the cloud computing network, and of a worldwide network of low-paid piece workers who label the data needed to drive the training, using the substrate of the Web network to move the data around, and to get paid.

Are we at the end game for AI driving networks? Or when we can get past the very narrow capabilities of deep networks to new AI technologies will there be new networks that arise and are autocatalytic with the new AI?

And what about robotics?

The disruptions to world supply chains from COVID-19 are only just beginning to be seen–there will be turbulence in many areas later in 2020. The exceptionally lean supply chains we have lived with over the last few years (relying on a network of shipping routes that in turn rely on the standardization of shipping containers to grow and interact) are likely to feel pressure to get a little fatter. That is likely to increase the demand for robotics and automation in those supply chains, a phenomenon that we have already seen starting over the last few years.

Another lesson which may be drawn from the current pandemic is that more automation is needed in healthcare, as trained medical professionals have been pushed to the limits of their endurance, besides being in personal mortal peril at times.

So what might be the new networks that arise over the next few years, demanded by the way we change automation, and supported by that very change?

Here are a few ideas, none of which seem particularly compelling at the moment, certainly not in comparison to Roman roads or to the Internet itself:

A commerce network of data sets, and of sets of weights for networks trained on large data sets.

A physical network of supply points, down to the city or county level, for major robot components: mobile bases for indoors or outdoors, legged, tracked, and wheeled; various sensor packages; human-robot interaction displays and sensors; and all sorts of arms with different characteristics. These could be assembled plug and play to produce appropriate robots as needed to respond to all sorts of emergency needs.

A network of smart sensors embedded in almost everything in our lives, which lives on top of the current Internet–this is already getting built and is called IoT (Internet of Things).

A new supply network of both partially finished goods (e.g., standard embedded processor boards) and materials (seventy-eight different sorts of raw stock for 3D printers a generation or two out) so that much more manufacturing can be done closer to end customers, using automation and robots.

An automated distribution network down to the street corner level in cities, with short term storage units on the sidewalk (probably a little bigger than the green storage units that the United States Postal Service has on street corners throughout US cities). Automated vehicles would supply these, perhaps at off peak traffic times, and then smaller neighborhood sidewalk robots would distribute to individual houses, or people could come to pick up.

I’m not particularly proud of or happy with any of these ideas. But based on history over the last 2,000 plus years I am confident that some sort of new networks will soon arise. Please do not let my lack of imagination dissuade you from the idea that new forms of networks will be coming.

Predictions Scorecard, 2020 January 01

rodneybrooks.com/predictions-scorecard-2020-january-01/

On January 1st, 2018, I made predictions (here) about self driving cars, Artificial Intelligence and machine learning, and about progress in the space industry. Those predictions had dates attached to them for 32 years up through January 1st, 2050.

I made my predictions because at the time I saw an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.

As part of self certifying the seriousness of my predictions I promised to review them, as made on January 1st, 2018, every following January 1st for 32 years, the span of the predictions, to see how accurate they were.

On January 1st, 2019, I posted my first annual self appraisal of how well I did. This post, today, January 1st, 2020, is my second annual self appraisal of how well I did–I have 30 more annual appraisals ahead of me. I think in the two years since my predictions, there has been a general acceptance that certain things are not as imminent or as inevitable as the majority believed just then. So some of my predictions now look more like “of course”, rather than “really, that long in the future?” as they did then.

This is a boring update. Despite lots of hoopla in the press about self driving cars, Artificial Intelligence and machine learning, and the space industry, this last year, 2019, was not actually a year of big milestones. Not much that will matter in the long run actually happened in 2019.

Furthermore, this year’s summary indicates that so far none of my predictions have turned out to be too pessimistic. Overall I am getting worried that I was perhaps too optimistic, and had bought into the hype too much. There is only one dated prediction of mine that I am currently worried may have been too pessimistic–I won’t name it here as perhaps I will turn out to be right after all.

Repeat of Last Year’s Explanation of Annotations

As I said last year, I am not going to edit my original post, linked above, at all, even though I see there are a few typos still lurking in it. Instead I have copied the three tables of predictions below from last year’s update post, and have simply added a total of six comments to the fourth column. As with last year I have highlighted dates in column two where the time they refer to has arrived.

I tag each comment in the fourth column with a cyan colored date tag in the form yyyymmdd such as 20190603 for June 3rd, 2019.

The entries that I put in the second column of each table, titled “Date” in each case, back on January 1st of 2018, have the following forms:

NIML meaning “Not In My Lifetime”, i.e., not until beyond December 31st, 2049, the last day of the first half of the 21st century.

NET some date, meaning “No Earlier Than” that date.

BY some date, meaning “By” that date.

Sometimes I gave both a NET and a BY for a single prediction, establishing a window in which I believe it will happen.

For now I am coloring those statements when it can be determined already whether I was correct or not.

I have started using LawnGreen (#7cfc00) for those predictions which were entirely accurate. For instance a BY 2018 can be colored green if the predicted thing did happen in 2018, as can a NET 2019 if it did not happen in 2018 or earlier. There are five predictions now colored green, the same ones as last year, with no new ones in January 2020.

I will color dates Tomato (#ff6347) if I was too pessimistic about them. No Tomato dates yet. But if something happens that I said NIML, for instance, then it would go Tomato, or if in 2020 something already had happened that I said NET 2021, then that too would have gone Tomato.

If I was too optimistic about something, e.g., if I had said BY 2018, and it hadn’t yet happened, then I would color it DeepSkyBlue (#00bfff). None of these yet either. And eventually if there are NETs that went green, but years later have still not come to pass I may start coloring them LightSkyBlue (#87cefa).

In summary then: Green splashes mean I got things exactly right. Red means provably wrong and that I was too pessimistic. And blueness will mean that I was overly optimistic.
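
To make those rules concrete, here is a minimal sketch in Python of the coloring logic as described above. The function, and the encoding of NET/BY as bare years, are purely for illustration, not a tool I actually use; NIML can be treated as NET 2050, and the eventual LightSkyBlue case is ignored:

```python
# Assign a color to a prediction given an optional NET year, an
# optional BY year, the year the event happened (if it has), and the
# current year. Returns None when there is nothing to color yet.

from typing import Optional

def color(net: Optional[int], by: Optional[int],
          happened: Optional[int], now: int) -> Optional[str]:
    if happened is not None:
        if net is not None and happened < net:
            return "Tomato"        # happened before NET: too pessimistic
        if by is not None and happened > by:
            return "DeepSkyBlue"   # happened after BY: too optimistic
        return "LawnGreen"         # happened within the predicted window
    if by is not None and now > by:
        return "DeepSkyBlue"       # BY has passed without the event
    if net is not None and now >= net:
        return "LawnGreen"         # reached NET without the event: right so far
    return None                    # nothing to say yet

print(color(net=2021, by=None, happened=None, now=2020))   # None
print(color(net=None, by=2018, happened=2018, now=2020))   # LawnGreen
print(color(net=2036, by=None, happened=2020, now=2020))   # Tomato
```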

So now, here are the updated tables.

Self Driving Cars

No predictions have yet been relevant for self driving cars, but I have augmented one comment from last year in this first table.  Also, see some comments right after the table.

| Prediction [Self Driving Cars] | Date | 2018 Comments | Updates |
|---|---|---|---|
| A flying car can be purchased by any US resident if they have enough money. | NET 2036 | There is a real possibility that this will not happen at all by 2050. | |
| Flying cars reach 0.01% of US total cars. | NET 2042 | That would be about 26,000 flying cars given today's total. | |
| Flying cars reach 0.1% of US total cars. | NIML | | |
| First dedicated lane where only cars in truly driverless mode are allowed on a public freeway. | NET 2021 | This is a bit like current day HOV lanes. My bet is the left most lane on 101 between SF and Silicon Valley (currently largely the domain of speeding Teslas in any case). People will have to have their hands on the wheel until the car is in the dedicated lane. | |
| Such a dedicated lane where the cars communicate and drive with reduced spacing at higher speed than people are allowed to drive | NET 2024 | | |
| First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day. | NET 2022 | The pick up and drop off points will not be parking spots, but like bus stops they will be marked and restricted for that purpose only. | 20190101 Although a few such services have been announced every one of them operates with human safety drivers on board. And some operate on a fixed route and so do not count as a "taxi" service--they are shuttle buses. And those that are "taxi" services only let a very small number of carefully pre-approved people use them. We'll have more to argue about when any of these services do truly go driverless. That means no human driver in the vehicle, or even operating it remotely. 20200101 During 2019 Waymo started operating a "taxi service" in Chandler, Arizona, with no human driver in the vehicles. While this is a big step forward see comments below for why this is not yet a driverless taxi service. |
| Such "taxi" services where the cars are also used with drivers at other times and with extended geography, in 10 major US cities | NET 2025 | A key predictor here is when the sensors get cheap enough that using the car with a driver and not using those sensors still makes economic sense. | |
| Such "taxi" service as above in 50 of the 100 biggest US cities. | NET 2028 | It will be a very slow start and roll out. The designated pick up and drop off points may be used by multiple vendors, with communication between them in order to schedule cars in and out. | |
| Dedicated driverless package delivery vehicles in very restricted geographies of a major US city. | NET 2023 | The geographies will have to be where the roads are wide enough for other drivers to get around stopped vehicles. | |
| A (profitable) parking garage where certain brands of cars can be left and picked up at the entrance and they will go park themselves in a human free environment. | NET 2023 | The economic incentive is much higher parking density, and it will require communication between the cars and the garage infrastructure. | |
| A driverless "taxi" service in a major US city with arbitrary pick up and drop off locations, even in a restricted geographical area. | NET 2032 | This is what Uber, Lyft, and conventional taxi services can do today. | |
| Driverless taxi services operating on all streets in Cambridgeport, MA, and Greenwich Village, NY. | NET 2035 | Unless parking and human drivers are banned from those areas before then. | |
| A major city bans parking and cars with drivers from a non-trivial portion of the city so that driverless cars have free rein in that area. | NET 2027, BY 2031 | This will be the starting point for a turning of the tide towards driverless cars. | |
| The majority of US cities have the majority of their downtown under such rules. | NET 2045 | | |
| Electric cars hit 30% of US car sales. | NET 2027 | | |
| Electric car sales in the US make up essentially 100% of the sales. | NET 2038 | | |
| Individually owned cars can go onto a pallet and be whisked underground to another location in a city at more than 100mph. | NIML | There might be some small demonstration projects, but they will be just that, not real, viable mass market services. | |
| First time that a car equipped with some version of a solution for the trolley problem is involved in an accident where it is practically invoked. | NIML | Recall that a variation of this was a key plot aspect in the movie "I, Robot", where a robot had rescued the Will Smith character after a car accident at the expense of letting a young girl die. | |

Chandler is a suburb of Phoenix and is itself the 84th largest city in the US. With apologies to residents of Chandler, I do not think that it comes to mind as a major US city for most Americans. Furthermore, the service has so far not been open to the public, but instead started with just a few hundred people (out of a population of about one quarter of a million residents) who had previously been approved to use the service when there was a human safety driver on board. These riders are banned from talking about when things go wrong, so we really don’t know how well the system works. Over 2019 the number of riders has grown to 1,500 monthly users, and a total of about 100,000 rides. Recently there has been an announcement that a phone app will make the service available to more users.

BUT, while there is no human driver in the taxi there is a remote human safety driver for all rides, as detailed in this story. While the humans can monitor more than one vehicle at a time, obviously there is a scaling issue, and the taxis are not truly autonomous. To make them so would be a big step. Also the taxis do not operate when it is raining. That would be the peak usage time for taxis in most cities. But they just don’t operate in the rain.

So… no self driving taxi service yet, even in a relatively small city with a population density many times less than that of major US cities.

The last twelve months have seen a real shakeout in expectations for deployment of self driving cars.  Companies are realizing that it is much harder than they came to believe for a while, and that there are many issues beyond simply “driving” that need to be addressed.  I previously talked about some of those issues on this blog in January and June of 2017.

To illustrate how predictions have been slipping, here is a slide that I made for talks based on a snapshot of predictions about driverless cars from March 27, 2017. The web address still seems to give the same predictions with a couple more at the end that I couldn’t fit on my slide. In parentheses are the years the predictions were made, and in blue are the dates for when the innovation was predicted to happen.

Recently I added some arrows to this slide. The skinny red arrows point to dates that have passed without the prediction coming to pass. The fatter orange arrows point to cases where company executives have since come out with updated predictions that are later than the ones given here. E.g., in the fourth line from the bottom, the Daimler chairman had said in 2014 that fully autonomous vehicles could be ready by 2025. In November of 2019 the chairman announced a reality check on self driving cars, as one can see in numerous online stories. Here is the first paragraph of one report on his remarks:

Mercedes-Benz parent Daimler has taken a “reality check” on self-driving cars. Making autonomous vehicles safe has proven harder than originally thought, and Daimler is now questioning their future earnings potential, CEO Ola Kaellenius told Reuters and other media.

Other reports of the same story can be found here and here.

None of the original predictions have come to pass, and those still standing are getting rather sparse.

<rant>

At the same time, however, there have been more outrageously optimistic predictions made about fully self driving cars being just around the corner. I won’t name names, but on April 23rd of 2019, i.e., less than nine months ago, Elon Musk said that in 2020 Tesla would have “one million robo-taxis” on the road, and that they would be “significantly cheaper for riders than what Uber and Lyft cost today”. While I have no real opinion on the veracity of these predictions, they are what is technically called bullshit. Kai-Fu Lee and I had a little exchange on Twitter where we agreed that together we would eat all such Tesla robo-taxis on the road at the end of this year, 2020.

</rant>

Artificial Intelligence and Machine Learning

I had not predicted any big milestones for AI and machine learning for the current period, and indeed there were none achieved.

We have seen certain proponents be very proud of how much more compute they have, growing at many times what Moore’s Law at its best would provide. I think it is fair to say that the results of all that computing since 2012 are not very impressive when compared to what a single human brain, powered at just 20 Watts, has been able to achieve in the same time frame — one just has to look at someone whose 20th birthday is today, January 1st, 2020, and compare what they know now and what they can achieve now to what they could do in 2012.

And there has even been a little backlash about the carbon footprint that training on modern ML data sets causes. There are even tools and best practices for cutting down the carbon footprint of your ML research. People can argue about the details, but no one can make a case that the energy usage is not many orders of magnitude more than that used by the meat machine inside people’s heads, nor that any machine performance to date comes close to being as impressive as human performance. People get fooled all the time by the slick marketing around each new achievement by the machine learning companies, but when you poke them you see that the achievements are rather pathetic compared to human performance.

Without any retraining, make a Go playing program compete against a human on a 25 by 25 board, or even an 18 by 18 board. Or change all the colors of the pixels in Quake III Arena, or change the screen resolution, and humans will adapt seamlessly while the ML trained systems will have to start from zero again.

While ML conference attendance has gone up by a factor of 20 or so, the results have not become correspondingly more powerful in terms of the impact they have on the real world.

Right after the Artificial Intelligence and machine learning table I have some links to back up the assertion in it that there are more blog posts pushing back on Deep Learning as being all we will need to get to human level (whatever that might mean) Artificial Intelligence.

| Prediction [AI and ML] | Date | 2018 Comments | Updates |
|---|---|---|---|
| Academic rumblings about the limits of Deep Learning | BY 2017 | Oh, this is already happening... the pace will pick up. | 20190101 There were plenty of papers published on limits of Deep Learning. I've provided links to some right below this table. 20200101 Go back to last year's update to see them. |
| The technical press starts reporting about limits of Deep Learning, and limits of reinforcement learning of game play. | BY 2018 | | 20190101 Likewise some technical press stories are linked below. 20200101 Go back to last year's update to see them. |
| The popular press starts having stories that the era of Deep Learning is over. | BY 2020 | | 20200101 We are seeing more and more opinion pieces by non-reporters saying this, but still not quite at the tipping point where reporters come out and say it. Axios and WIRED are getting close. |
| VCs figure out that for an investment to pay off there needs to be something more than "X + Deep Learning". | NET 2021 | I am being a little cynical here, and of course there will be no way to know when things change exactly. | |
| Emergence of the generally agreed upon "next big thing" in AI beyond deep learning. | NET 2023, BY 2027 | Whatever this turns out to be, it will be something that someone is already working on, and there are already published papers about it. There will be many claims on this title earlier than 2023, but none of them will pan out. | |
| The press, and researchers, generally mature beyond the so-called "Turing Test" and Asimov's three laws as valid measures of progress in AI and ML. | NET 2022 | I wish, I really wish. | |
| Dexterous robot hands generally available. | NET 2030, BY 2040 (I hope!) | Despite some impressive lab demonstrations we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years. | |
| A robot that can navigate around just about any US home, with its steps, its clutter, its narrow pathways between furniture, etc. | Lab demo: NET 2026; Expensive product: NET 2030; Affordable product: NET 2035 | What is easy for humans is still very, very hard for robots. | |
| A robot that can provide physical assistance to the elderly over multiple tasks (e.g., getting into and out of bed, washing, using the toilet, etc.) rather than just a point solution. | NET 2028 | There may be point solution robots before that. But soon the houses of the elderly will be cluttered with too many robots. | |
| A robot that can carry out the last 10 yards of delivery, getting from a vehicle into a house and putting the package inside the front door. | Lab demo: NET 2025; Deployed systems: NET 2028 | | |
| A conversational agent that both carries long term context, and does not easily fall into recognizable and repeated patterns. | Lab demo: NET 2023; Deployed systems: 2025 | Deployment platforms already exist (e.g., Google Home and Amazon Echo) so it will be a fast track from lab demo to wide spread deployment. | |
| An AI system with an ongoing existence (no day is the repeat of another day as it currently is for all AI systems) at the level of a mouse. | NET 2030 | I will need a whole new blog post to explain this... | |
| A robot that seems as intelligent, as attentive, and as faithful, as a dog. | NET 2048 | This is so much harder than most people imagine it to be--many think we are already there; I say we are not at all there. | |
| A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans. | NIML | | |

There are outlets now for non-journalists, perhaps practitioners in a scientific field, to write position papers that get widely referenced in social media. These position papers are often forerunners of what the popular press will soon start reporting.

During 2019 we saw many, many well informed such position papers/blogposts. We have seen explanations of how machine learning has limitations on when it makes sense to be used, and that it may not be a universal silver bullet.  There have been posts that deep learning may be hitting limits as it has no common sense. We have seen questions about the practical value of the results of deep learning on game playing, as game playing is precisely where we have massive amounts of completely relevant data–problems in the real world more commonly have very little data, and reasoning from other domains is imperative to figuring out how to make progress on the problem. And we have seen warnings that all the over-hype of machine and deep learning may lead to a new AI winter, when those tens of thousands of jolly conference attendees will no longer have grants and contracts to pay for travel to and attendance at their fiestas.

I am very concerned about what will happen when the current machine/deep learning bubble bursts. We have seen the bursting of hype bubbles decimate AI research before. The self driving car bubble, and the potential negative impact on AI research when it bursts, also worries me.

Space

There were no target dates that have been hit or missed in the last year in the space launch domain, but I have made a couple of update comments in the following table, and then follow it with details in the text below.

| Prediction [Space] | Date | 2018 Comments | Updates |
|---|---|---|---|
| Next launch of people (test pilots/engineers) on a sub-orbital flight by a private company. | BY 2018 | | 20190101 Virgin Galactic did this on December 13, 2018. 20200101 On February 22, 2019, Virgin Galactic had their second flight, this time with three humans on board, to space of their current vehicle. As far as I can tell that is the only sub-orbital flight of humans in 2019. Blue Origin's New Shepard flew three times in 2019, but with no people aboard, as in all its flights so far. |
| A few handfuls of customers, paying for those flights. | NET 2020 | | |
| A regular sub weekly cadence of such flights. | NET 2022, BY 2026 | | |
| Regular paying customer orbital flights. | NET 2027 | Russia offered paid flights to the ISS, but there were only 8 such flights (7 different tourists). They are now suspended indefinitely. | |
| Next launch of people into orbit on a US booster. | NET 2019; BY 2021; BY 2022 (2 different companies) | Current schedule says 2018. | 20190101 It didn't happen in 2018. Now both SpaceX and Boeing say they will do it in 2019. 20200101 Both Boeing and SpaceX had major failures with their systems during 2019, though no humans were aboard in either case. So this goal was not achieved in 2019. Both companies are optimistic of getting it done in 2020, as they were for 2019. I'm sure it will happen eventually for both companies. |
| Two paying customers go on a loop around the Moon, launch on Falcon Heavy. | NET 2020 | The most recent prediction has been 4th quarter 2018. That is not going to happen. | 20190101 I'm calling this one now as SpaceX has revised their plans from a Falcon Heavy to their still developing BFR (or whatever it gets called), and predict 2023. I.e., it has slipped 5 years in the last year. |
| Land cargo on Mars for humans to use at a later date | NET 2026 | SpaceX has said by 2022. I think 2026 is optimistic but it might be pushed to happen as a statement that it can be done, rather than for a pressing practical reason. | |
| Humans on Mars make use of cargo previously landed there. | NET 2032 | Sorry, it is just going to take longer than everyone expects. | |
| First "permanent" human colony on Mars. | NET 2036 | It will be magical for the human race if this happens by then. It will truly inspire us all. | |
| Point to point transport on Earth in an hour or so (using a BF rocket). | NIML | This will not happen without some major new breakthrough of which we currently have no inkling. | |
| Regular service of Hyperloop between two cities. | NIML | I can't help but be reminded of when Chuck Yeager described the Mercury program as "Spam in a can". | |

During a ground test on April 20th, 2019, the SpaceX Crew Dragon capsule exploded catastrophically. This delayed the SpaceX program so that no manned test could be done in 2019. SpaceX traced the problem to a valve failure when starting up the capsule abort engines, which are needed during launch if the booster rocket is undergoing failure. They currently have a test scheduled for early 2020 where these engines will be ignited during a launch so that the capsule can safely fly away from the launch vehicle.

In December of 2019 Boeing had a major test of its CST-100 Starliner capsule, and ended up with both a failure and a success for the mission. It was supposed to be the final unmanned test of the vehicle, and was planned to dock with the International Space Station (ISS) and then do a soft landing on the ground. It launched on December 20th and achieved orbit, but due to software failures it ended up in the wrong orbit and there was not enough fuel left to get it to the ISS. This was a major failure. On the other hand it achieved a major success in doing a soft landing in New Mexico on December 22nd.

Other Hype Magnets

I have not felt qualified to talk about the hype impact for both quantum computing and block chain. Just at the end of 2019 there was a very interesting blog post by Scott Aaronson, a true expert and theoretical contributor to the field of quantum computing, on how to read announcements about quantum computing results. I recommend it.

Guest Post by Phillip Alvelda: Pondering the Empathy Gap

rodneybrooks.com/guest-post-by-phillip-alveda-pondering-the-empathy-gap/

[Phillip Alvelda is an old friend from MIT, and CEO of Brainworks.]

Pondering how to close what seems to be a rapidly widening empathy gap here in the U.S. and globally.

I used to just be resigned to the fact that many of my white friends who had never felt or experienced discrimination directed at themselves seem incapable of seeing or recognizing implicit, or even explicit, bias directed at others. I didn’t use to think of these people as mean or racist…just oblivious through lack of direct experience.

But now, with a nation inflamed by our own government inciting and validating hatred and bigotry, with brown asylum seekers and children dying in mass US internment camps, and LGBTQ and women’s rights under mounting assault, the discrimination has literally turned lethal. And the empathy gap is enabling these crimes against humanity to continue and grow in the US now, just like the silent majority in Weimar Germany allowed the Jewish genocide to advance.

I’ve come to see supporters of this corrupt and criminal administration as increasingly complicit in the ongoing crimes. It is no longer just a matter of not seeing discrimination that doesn’t impact your family directly.

Trump supporters and anyone who supports any of his Republican enablers must now find some way to look past the growing reports of discrimination, minority voter suppression and gerrymandering, hate crimes, repression, the roll back of women’s and LGBTQ rights, a measurable biased justice system, mass internment camps, and now even the murder of the weak and vulnerable kidnapped children that commit no crime other than to follow our own ancestors to seek freedom and opportunity in the US….. This growing mass of willfully blind conservatives have abandoned fair morality, and are direct enablers of evil.

We are now in an era I never thought to see in the US, when government manufactured propaganda is purposely driving the dehumanization of women, LGBTQ people, and people of color. The US empathy gap is widening rapidly. How can we fight these dark divisive forces and narrow the gap, when our polarized society can’t even agree on measurable objective realities like the climate crisis?

Otherwise, I fear the U.S. is on a path to dissolve into at least two countries, divided along a border between those states who value empathy and seek an inclusive and pluralistic future society, and those who seek to retreat to tribal protectionism of historical rights for a shrinking privileged majority.

That this struggle rises now really baffles me. Consider the world’s obviously increasing wealth and abundance, with declining poverty and starvation and increasing access to virtually unlimited renewable energy. The need for tribal dominance to hoard resources is disappearing. The need for borders to protect resources that are no longer scarce is vanishing.

Just imagine if all of our military and arms spending, all of the money we spend enforcing borders and limiting access to food and medicine and energy and education were instead directed towards sharing this abundance!

Pluralism and empathy are clearly the answer. How can we get more people to realize this despite the onslaught of vitriol and tribal incitement from the likes of Fox News?

AGI Has Been Delayed

rodneybrooks.com/agi-has-been-delayed/

A very recent article follows in the footsteps of many others talking about how the promise of autonomous cars on roads is a little further off than many pundits have been predicting for the last few years. Readers of this blog will know that I have been saying this for over two years now. Such skepticism is now becoming the common wisdom.

In this new article at The Ringer, from May 16th, the author Victor Luckerson reports:

For Elon Musk, the driverless car is always right around the corner. At an investor day event last month focused on Tesla’s autonomous driving technology, the CEO predicted that his company would have a million cars on the road next year with self-driving hardware “at a reliability level that we would consider that no one needs to pay attention.” That means Level 5 autonomy, per the Society of Automotive Engineers, or a vehicle that can travel on any road at any time without human intervention. It’s a level of technological advancement I once compared to the Batmobile.

Musk has made these kinds of claims before. In 2015 he predicted that Teslas would have “complete autonomy” by 2017 and a regulatory green light a year later. In 2016 he said that a Tesla would be able to drive itself from Los Angeles to New York by 2017, a feat that still hasn’t happened. In 2017 he said people would be able to safely sleep in their fully autonomous Teslas in about two years. The future is now, but napping in the driver’s seat of a moving vehicle remains extremely dangerous.

When I saw someone tweeting that Musk’s comments meant that a million autonomous taxis would be on the road by 2020, I tweeted out the following:

Let’s count how many truly autonomous (no human safety driver) Tesla taxis (public chooses destination & pays) on regular streets (unrestricted human driven cars on the same streets) on December 31, 2020. It will not be a million. My prediction: zero. Count & retweet this then.

I think these three criteria need to be met before someone can say that we have autonomous taxis on the road.

The first challenge, no human safety driver, has not been met by a single experimental deployment of autonomous vehicles on public roads anywhere in the world. They all have safety humans in the vehicle. A few weeks ago I saw an autonomous shuttle trial along the paved beachside public walkways at the beach on which I grew up, in Glenelg, South Australia, where there were “two onboard stewards to ensure everything runs smoothly” along with eight passengers. Today’s demonstrations are just not autonomous. In fact, in the article above Luckerson points out that Uber’s target is to have their safety drivers intervene only once every 13 miles, but they are way off that capability at this time. Again, hardly autonomous, even if they were to meet that goal. Imagine the car you are driving breaking down once every 13 miles; we expect better.
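To put that target in perspective, here is a back-of-the-envelope calculation; the mileage figure is my own illustrative assumption, not a number from Luckerson’s article. If a full-time taxi covers roughly 150 miles in a shift, then at one intervention per 13 miles a human would have to step in about 150 / 13 ≈ 11 times every single shift. Whatever that is, it is not autonomy.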

And if normal human beings can’t simply use these services (in Waymo’s Phoenix trial only 400 pre-approved people are allowed to try them out) and go anywhere that they can go in a current day taxi, then really the things deployed will not be autonomous taxis. They will be something else. Calling them taxis would be redefining what a taxi is. And if you can just redefine words on a whim there is really not much value to your words.

I am clearly skeptical about seeing autonomous cars on our roads in the next few years. In the long term I am enthusiastic. But I think it is going to take longer than most people think.

In response to my tweet above, Kai-Fu Lee, a very strong enthusiast about the potential for AI, and a large investor in Chinese AI companies, replied with:

If there are a million Tesla robo-taxis functioning on the road in 2020, I will eat them. Perhaps @rodneyabrooks will eat half with me?

I readily replied that I would be happy to share the feast!

Luckerson talks about how executives, in general, are backing off from their previous predictions about how close we might be to having truly autonomous vehicles on our roads. Most interestingly, he quotes Chris Urmson:

Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”

Now let’s take note of this. Chris Urmson was the leader of Google’s self-driving car project, which became Waymo around the time he left, and he is now the CEO of a very well funded self-driving startup. He says “30 to 50 years”. Chris Urmson has been a leader in the autonomous car world since before it entered mainstream consciousness. He has lived and breathed autonomous vehicles for over ten years. No grumpy old professor is he. He is a doer and a striver. If he says it is hard then we know that it is hard.

I happen to agree, but I want to use this reality check for another thread.

If we were to have AGI, Artificial General Intelligence, with human level capabilities, then certainly it ought to be able to drive a car, just like a person, if not better. Now a self driving car does not need to have general human level intelligence, but a self driving car is certainly a lower bound on human level intelligence.  Urmson, a strong proponent of self driving cars says 30 to 50 years.

So what does that say about predictions that AGI is just around the corner? And what does it say about it being an existential threat to humanity any time soon? We have plenty of existential threats to humanity lining up to bash us in the short term, including climate change, plastics in the oceans, and a demographic inversion. If AGI is a long way off then we can not say anything sensible today about what promises or threats it might provide, as we will need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we can not yet predict.

Do people really say that AGI is just around the corner? Yes, they do…

Here is a press report on a conference on “Human Level AI” that was held in 2018. It reports that 37% of respondents to a survey at that conference said they expected human level AI to be around in 5 to 10 years. Now, I must say that looking through the conference site I see more large hats than cattle, but these are mostly people with paying corporate or academic jobs, and 37% of them think this.

Ray Kurzweil still maintains, in Martin Ford’s recent book, that we will see human level intelligence by 2029. In the past he has claimed that we will have a singularity by then, as the intelligent machines will be so superior to human level intelligence that they will exponentially improve themselves (see my comments on belief in magic as one of the seven deadly sins in predicting the future of AI). Mercifully, the average prediction of the 18 respondents to this particular survey was that AGI would show up around 2099. I may have skewed that average a little, as I was an outlier amongst the 18 people, at the year 2200. In retrospect I wish I had said 2300, and that is the year I have been using in my recent talks.
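As a quick sanity check on how much my outlier moved that number, here is a back-of-the-envelope calculation, assuming the 2099 figure is a simple mean over all 18 responses including my 2200: (18 × 2099 − 2200) / 17 ≈ 2093. So dropping my answer entirely would pull the consensus back by only about six years.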

And a survey taken by the Future of Life Institute (warning: that institute has a very dour view of the future of human life, worse than my concerns of a few paragraphs ago) says we are going to get AGI around 2050.

But that is the low end of when Urmson thinks we will have autonomous cars deployed. Suppose he is right about his range. And suppose I am right that autonomous driving is a lower bound on AGI, and I believe it is a very low bound. With these very defensible assumptions, the seemingly sober experts in Martin Ford’s new book are on average wildly optimistic about when AGI is going to show up.

AGI has been delayed.