Rodney Brooks

Robots, AI, and other stuff

[FoR&AI] Domo Arigato Mr. Roboto

rodneybrooks.com/forai-domo-arigato-mr-roboto/

[An essay in my series on the Future of Robotics and Artificial Intelligence.]

Friday March 11th, 2011, was a bad day for Japan. At 2:46pm local time a magnitude 9.1 earthquake occurred 72 kilometers offshore, east of the Oshika Peninsula in the Tohoku region of Japan. A great tsunami was triggered, with maximum wave height believed to be 40.5 meters (133 feet), and a few minutes after the earthquake it hit the town of Miyako, 432 kilometers (300 miles) north of Tokyo. Hundreds of kilometers of the coastal region were devastated, with almost 16,000 deaths, over 2,500 people missing, and three quarters of a million buildings either collapsed, partially collapsed, or severely damaged.

The following week things got worse. Japan has been forever changed by what happened in March and April of that year.

A little before 8am on Friday April 25th, 2014, I met up with a small number of robotics researchers from the United States in the Ueno train station in Tokyo. It was a somber rendezvous, but I did not yet realize the sobering emotions I would feel later in the day.

As a technologist I have had more than my fair share of what I think of as “science fiction” days, most of them quite uplifting and exciting. Science fiction days for me are days where I get to experience for real something that heretofore most people have only ever experienced by watching a movie. For instance on July 4, 1997, I was at JPL (the Jet Propulsion Laboratory in Pasadena, California) watching live images come in from the surface of Mars soon after the soft landing of the Pathfinder mission. A little later in the afternoon, to hearty cheers, the Sojourner robot rover deployed onto the surface of Mars, the first mobile ambassador from Earth. Dan Goldin, the administrator of NASA, congratulated all the JPL technologists on the first “faster, cheaper, better” mission. That phrase was a cleaned up version of a title of a paper⁠1 I had written in 1989 with Anita Flynn: “Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System”, where we had proposed the idea of small rovers to explore planets, and explicitly Mars, rather than large ones that were under development at that time. The rover that landed in 1997 was descended from a project at JPL that Colin Angle, then a brand new graduate of M.I.T., and I had helped get started that same year, 1989. The day of the landing was a great science fiction day, and it was related to the one I was about to experience almost seventeen years later.

Really though, April 25th, 2014 was for me two science fiction days rolled into one. Both of them were dystopian.

The group that formed up in Ueno station was led by Gill Pratt. Gill had been a faculty member in the M.I.T. Artificial Intelligence Laboratory when I had been its director in the late 1990s. He had led the “leg laboratory”, within the AI Lab, working on making robots that could walk and run. Now he was a program manager at DARPA, the Defense Advanced Research Projects Agency, part of the US Defense Department, leading the DARPA Robot Challenge, a competition whose final was to be held the next year to push forward how robots could help in disaster situations. We robotics researchers were in Japan that week to take part in a joint US/Japan robotics workshop that was held as a satellite event for a summit in Tokyo between Prime Minister Abe and President Obama.

On that Friday morning we took an express train to Iwaki, and from there a fifty-minute minibus ride to the “J-village”. Now things started to get a little surreal. J.League is the Japan Professional Football League, and the J-village was, until the earthquake and tsunami, the central training facility for that league, with multiple soccer pitches, living quarters, a gym, swimming pool, and large administrative buildings. Now three of the pitches were covered in cars, serving as commuter lots for cleanup crews. Trucks and minibuses coming from the north were getting scanned for radiation, while all northbound traffic had to go through security gates at the soccer facility. The J-village was now the headquarters of the operation to deal with the radiation released from the Fukushima Daiichi nuclear power plant when the tsunami had hit it, ultimately leading to three of its six reactors melting down. The J-village was right on the border of a 20 kilometer radius exclusion zone established around that plant, and was being operated by TEPCO, the Tokyo Electric Power Company, which owned Fukushima Daiichi along with Fukushima Daini, also in the exclusion zone, whose four reactors were able to be shut down safely without significant damage.

Inside the main building the walls signaled professional soccer, decorated with three meter high images of Japanese stars of the game. But everything else looked makeshift and temporary. We were met by executives from TEPCO and received our first apology from them for their failures at Daiichi right after the tsunami. We would receive more apologies during the day. This was clearly a ritual for all visitors, as none of us felt we were owed any sort of apology. As had happened the day before in a meeting with a government minister, and again rather embarrassingly, I was singled out for special thanks.

After Colin Angle and I had helped get the small rover program at JPL going, where it was led by David Miller and Rajiv Desai, we got impatient about getting robots to other places in the solar system. So, joined by Helen Greiner, a friend of Colin’s for whom I had been graduate counsellor at M.I.T., we started a space exploration robot company originally called IS Robotics. In a 2002 book⁠2 I told the story of our early adventures with that company and how our micro-rovers being tested at Edwards Air Force Base as a potential passenger on a Ballistic Missile Defense Organization (BMDO, popularly known as “Star Wars”) mission to the Moon forced NASA’s hand into adding the Sojourner rover to the Pathfinder mission. By 2001 our company had been renamed iRobot, and on the morning of September 11 of that year we got a call to send robots to ground zero in New York City. Those robots scoured nearby evacuated buildings for any injured survivors who might still be trapped inside. That led the way for our Packbot robots to be deployed in the thousands in Afghanistan and Iraq, searching for nuclear materials in radioactive environments and dealing with roadside bombs by the tens of thousands. By 2011 we had almost ten years of operational experience with thousands of robots in harsh wartime conditions.

A week after the tsunami, on March 18th 2011, when I was still on the board of iRobot, we got word that perhaps our robots could be helpful at Fukushima. We rushed six robots to Japan, donating them, and not worrying about ever getting reimbursed–we knew the robots were on a one-way trip. Once they were sent into the reactor buildings they would be too contaminated to ever come back to us. We sent people from iRobot to train TEPCO staff on how to use the robots, and they were soon deployed even before the reactors had all been shut down.

The oldest of the reactors had been operating for 40 years, and the others shared the same design. None of them had digital monitoring installed, so as they overheated, explosions occurred, and high levels of radiation were released, there was no way to know what was going on inside the reactor buildings. The four smaller robots that iRobot sent, Packbot 510s weighing 18kg (40 pounds) each, with a long arm, were able to open access doors, enter, and send back images. Sometimes they needed to work in pairs so that the one furthest from the human operators could send back signals via an intermediate robot acting as a wifi relay. The robots were able to send images of analog dials so that the operators could read pressures in certain systems, they were able to send images of pipes to show which ones were still intact, and they were able to send back radiation levels. Satoshi Tadokoro, who sent in some of his robots later in the year to climb over steep rubble piles and up steep stairs that Packbot could not negotiate, said⁠3 “[I]f they did not have Packbot, the cool shutdown of the plant would have [been] delayed considerably”. The two bigger brothers, both 710 models, weighing 157kg (346 pounds) with a lifting capacity of 100kg (220 pounds), were used to operate an industrial vacuum cleaner, move debris, and cut through fences so that other specialized robots could access particular work sites.

Japan has been consistently grateful for that help; we were glad that our technology could be helpful in such a dire situation.

In 2014, at the J-village, after a briefing on what was to come for us visitors, we were issued with dosimeters and we put on disposable outer garments to catch any radioactive particles. We then entered a highly instrumented minibus, sealed from unfiltered external air circulation, and headed north on the Rikuzenhama Highway. The first few villages we saw were deserted but looked well kept up. That was because by that time the owners of the houses were allowed to come into the exclusion zone for a few hours each day to tend to their properties. Further into the zone everything started to look abandoned. After we passed the Fukushima Daini plant, which we could see in the distance, we got off the highway and headed down into the town of Tomioka. The train station, quite close to the coast, had been washed away, with just the platform remaining, and a single toilet sitting by itself still attached to the plumbing below. Most of the houses had damage to their first floors, and from our minibus driving by we could see people’s belongings still inside. At one point we had to go around a car upside down on its roof in the middle of the road. Although it was three years after the event, Tomioka was frozen in time, just as it had been left by the tsunami. This was the first science fiction experience of the day. For all the world it looked like the set of a post-apocalyptic Hollywood movie. But this was a real post-apocalyptic location.

Back on the highway we continued north to the Fukushima Daiichi plant for science fiction experience number two. There are about six thousand people who work at the site cleaning up the damage to the power plant from the tsunami. Only a much smaller number are there on any given day, as continued exposure to the radiation levels is not allowed. We entered a number of buildings higher up the hill than the four reactors that were directly hit by the tsunami. All of them had makeshift piping for air, a look of temporary emergency setups, and everyone inside was wearing light protective garments, as were we. Those who were outside had much more substantial protective clothing, including filtered breathing masks. Most times as we transitioned into and out of buildings we had to go through elaborate security gates where we entered machines that scanned us for radiation in case we had gotten some radioactive debris attached to us. Eventually we got to a control center, really just a few tables with laptops on them, from which the iRobot robots were still being operated. We watched remotely as one was inside one of the reactor buildings measuring radiation levels–in some cases levels so high that a person could spend only a few minutes per year in such an area.

Outside we drove around in our sealed bus. We saw where the undamaged fuel rods that had been inside the reactor buildings, but not inside the reactors, were being brought for temporary storage. That task was expected to be completed by the end of this decade. We saw what were at that time almost 1,000 storage tanks, each with about 1,000 tons of ground water that had come down the hill during rainfall and been contaminated as it seeped through the ground around the reactor buildings. We saw where they were trying to freeze the ground down to many meters in depth to stop water flowing underground from the hill to the reactor buildings. We saw where, along the ocean side of the reactor buildings, workers had installed a steel wall of interlocking pylons driven into the seabed, holding back the ocean but more importantly stopping any ground water from leaking into the ocean. Everywhere were people in white protective suits with breathing equipment, working for short periods of time and then being cycled out so that their radiation exposure levels were not unsafe. Eventually we drove down to right near reactor number four, and saw the multi-hundred ton superstructure that had been installed over the building by remotely operated cranes so that the undamaged fuel rods could be lifted out of the damaged water pools where they were normally stored. We wanted to stay a little longer, but the radiation level was creeping up, so soon it was decided that we should get out of there. And finally we received a briefing about the research plans for developing new robots that, starting around the year 2020, would be able to begin the decades-long cleanup of the three melted-down reactors.

That really was a science fiction experience.

Robots

Robots were essential to the shutdown of Fukushima Daiichi, and will be for the next thirty or more years as the cleanup continues. The robots that iRobot sent were controlled by operators who looked at the images sent back to decide where the robots should go and whether they should try to climb a pile of debris, and to give them detailed instructions on how to handle unfamiliar door handles. In the sequence of three images below a pair of Packbot 510s first open a door using a large rotary handle, push it open, and then proceed through.

[These are photographs of the operators’ console and in some cases you might just be able to make out the reflection of the operators in protective suits wearing breathing equipment.] Below we see a 510 model confronted by relatively light debris that it will be able to get over fairly safely.

In the image below a 710 model is set up to go and vacuum up radioactive material.

But the robots we sent to Fukushima were not just remote control machines. They had an Artificial Intelligence (AI) based operating system, known as Aware 2.0, that allowed the robots to build maps, plan optimal paths, right themselves should they tumble down a slope, and retrace their path when they lost contact with their human operators. This does not sound much like sexy advanced AI, and indeed it is not so advanced compared to what clever videos from corporate research labs appear to show, or what painstakingly crafted edge-of-just-possible demonstrations from academic research labs are able to do when things all work as planned. But simple and un-sexy is the nature of the sort of AI we can currently put on robots in real, messy, operational environments.
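To give a flavor of how simple and un-sexy this kind of on-robot autonomy can be, here is a sketch of a “retrace on lost contact” behavior of the sort described above: the robot drops breadcrumb waypoints as it drives under operator control, and when the radio link goes down it backs out along them. This is purely illustrative–the class and method names are hypothetical, and it is not iRobot’s actual Aware 2.0 code.

```python
# Hypothetical sketch of a breadcrumb-based "retrace on lost contact"
# behavior, in the spirit of the capability described in the text.
# Not iRobot's actual Aware 2.0 implementation.

class RetraceBehavior:
    def __init__(self, min_spacing=0.5):
        # Waypoints recorded while driving under operator control,
        # oldest first. min_spacing is the minimum distance (meters)
        # between recorded breadcrumbs.
        self.breadcrumbs = []
        self.min_spacing = min_spacing

    def record(self, pose):
        """Called each control cycle while the radio link is up."""
        # Only drop a breadcrumb once we have moved far enough to matter.
        if not self.breadcrumbs or self._dist(pose, self.breadcrumbs[-1]) > self.min_spacing:
            self.breadcrumbs.append(pose)

    def retrace_step(self):
        """Called each cycle after the link is lost: back out the way we came."""
        if self.breadcrumbs:
            # Drive toward the most recently recorded waypoint.
            return self.breadcrumbs.pop()
        return None  # Back at the start; hold position and wait for the operators.

    @staticmethod
    def _dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```

A real system layers much more on top of this (mapping, path planning around newly fallen debris, self-righting), but the core idea is this modest: remember where you have been, and when the wifi relay chain breaks, drive back toward the last place the link worked.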

But wait! What about all those wonderful robots we have seen over the years in the press, the ones that look like Albert Einstein, or the humanoids that have been brought into science museums around the United States for shows, or the ones we see brought out whenever a US President visits Japan4? You have seen them. Like the humanoid ones that walk on two legs, though with bent knees, which does look a little weird, turning to the audience and talking from behind a dark glass visor, sometimes seeming to interact with people, taking things from them, handing things to them, chatting, etc. What about them? They are all fake! Fake in the sense that though they are presented as autonomous they are not. They are operated by a team of usually six people, off stage. And everything on stage has been placed with precision, down to the millimeter. I have appeared on stage before those robots many times and been warned not to walk near or touch any of the props, for example staircases, as that will make the robot fail, and when it does fail it is not aware that it has.

Corporate marketers had oversold a lot of robots, and confused many people about current robots’ true capabilities. Those corporate marketing robots had no chance at all of helping at Fukushima.

Those robots are not real5.

Reality is hard.

Reality

Robotics, including self driving cars, is where Artificial Intelligence (AI) collides with the un-sanitized natural world. Up until now the natural world has been winning, and will probably continue to do so most of the time for quite some time.

We have come to expect our technology to be 100% reliable. We expect our car to start every morning and for the wheels to drive it forward when we push down on the gas pedal. We expect the plane that we board to both take off and land safely, even if, through experience, we tolerate it being late. We expect the internet to provide the web pages we go to on our smart phones. We expect our refrigerators and microwave ovens to work every day so that we can eat and survive.

AI has gotten a pass on providing 100% reliability as so many of its practical applications are mediated by a functioning cognitive human who naturally fills in the missing pieces for the AI system. We humans do this all the time for small children and for the very elderly. We are wired to be accommodating to other intelligences that we think of as less than us. Most of our AI technology is very much less than us, so we accommodate.

The demands of having robots interact with the un-sanitized natural world cancel that free pass. The natural world usually does not care that it is a robot rather than a person, and so the natural world is not accommodating. In my opinion there is a mismatch between what is popularly believed about AI and robotics, and what the reality is for the next few decades.

I have spent the last forty years as part of the Artificial Intelligence (and more general computer science) research groups at either Stanford or M.I.T. as a student, post-doc, faculty (both places), or more recently, emeritus professor. Through companies that I have co-founded, iRobot and Rethink Robotics (and yes, I was also once a co-founder and consultant for eight years to a Silicon Valley AI software company–it eventually failed, and then there was the robotics VC firm I co-founded, and then, etc., etc.), I have been involved in putting a lot of robots to work in five different domains–less than a handful of robots on other planets, tens of millions of robots vacuuming people’s floors, thousands of robots in the military for forward reconnaissance and for handling improvised explosive devices, robots in thousands of factories around the world working side by side with people, and many hundreds of robots in research labs all over the world, used for experiments in manipulation. I think it is fair to say that companies that I have cofounded have put more AI into more robots, and in more domains of application, than anyone else, ever.

All of these robots have had some level of AI, but none come remotely close to what the popular press seems to believe about robots and what many prognosticators warn about, sometimes as imminent dangers from AI and robots.

There seems to me to be a disconnect here.

Right now I believe we are in an AI bubble. And I believe that even if it does not burst, it will certainly deflate before too long. The existence of this bubble makes it hard for all sorts of people to know what to believe about the near term for AI and robotics. For some people the questions surrounding AI and robotics are simply an intellectual curiosity. For some it is a very real question about what their job prospects might look like in just a few years. For executives at companies and for those in leadership positions in governments and the military, it is a fraught time understanding the true promise and dangers of AI. Most of what we read about AI in the headlines, and from misguided, well-meaning academics, including from physicists and cosmologists, is, I believe, completely off the mark.

In debates about how quickly we will realize in practice the AI and robotics of Hollywood I like to see myself as the voice of reason. I fear that I am often seen as the old fuddy-duddy who does not quite get how powerful AI is, and how quickly we are marching towards super intelligence, whatever that may be. I am even skeptical about how soon we will see self driving6 cars on our roads, and have been repeatedly told that I “just don’t understand”.

Concomitantly I am critical of what I see as scare mongering about how powerful AI will soon be, especially when it is claimed that we as humans must start taking precautions against it now. Given the sorts of things I saw in and around Fukushima Daiichi, I understand a general fear of technology that is not easily understood by non-experts, but I do want us humans to be realistic about what is scary, and what is only imagined to be scary.

I am even more critical of some of the arguments about ethical decisions that robots will face in just the next few years. I do believe that there are plenty of ethical decisions facing us humans as we deploy robots, in terms of the algorithms they should be running, but we are far, far from having robots make ethical decisions on the fly. There will be neither benevolent AI nor malevolent AI in the next few decades, in the sense of AI systems having any internal understanding of those terms. This is a research dream, and I do not criticize people thinking about it as research. I do criticize them thinking that this research is going to turn into reality any time soon, and talking about “regulations” on what sort of AI we can build or even research as a remedy for imagined pitfalls. Instead we will need to worry about people using technology, including AI, for malevolent purposes, and we should encourage the use of technology, including AI, for benevolent purposes. AI is nowhere near ready enough to make any fundamental difference in this regard.

Why am I seen as an outlier in my opinions of where AI is going? In my dealings with individual researchers in academia and industrial research laboratories, and in my discussions with C-level executives at some of the best known companies that use AI, I find much common ground, and general agreement with my positions. I am encouraged by that. I want to share with readers of this blog the basis for my estimations of where we are in deploying AI and robotics, and why it is still hard; I will expand on these arguments in the next few essays that I post after this one.

Perhaps after reading my forthcoming essays you will conclude that I am an old fuddy-duddy. Or, perhaps, I am a realist. I’ll be posting a lot of long form essays about these topics over the next few months. People who read them will get to decide for themselves where I fall in the firmament of AI.

Now, at the same time, we have only been working on Artificial Intelligence and robotics for just a few decades. Already AI has started to have real impact on our lives. There will be much more that comes from AI and robotics. For those who are able to see through what is hype and what is real there are going to be great opportunities.

There are great opportunities for researchers who concentrate on the critical problems that remain, and are able to prioritize what research will have the greatest impact.

For those who want to start companies in AI and robotics, understanding what is practical and matches what the market will eagerly accept, there is again great opportunity. We should expect to see many large and successful companies rise up in this space over the next couple of decades.

For those who are willing to dare greatly, and who want to make scientific contributions for the ages, there is so much that we do not yet understand, at such a deep level, that there is plenty of room for a few more Ada Lovelaces, Alan Turings, Albert Einsteins, and Marie Curies to make their marks.

Hubris and humility

For four years starting in 1988 I co-taught the M.I.T. introductory Artificial Intelligence (AI) class, numbered then and still now 6.034, with Professor Patrick Henry Winston. He still teaches that class, which has only gotten better, and it is available online⁠7 for all to experience.

Back then Patrick used to start the first class with a telling anecdote. Growing up in Peoria, Illinois, Patrick at one time had a pet raccoon. Like others of its species Patrick’s raccoon was very dexterous, and so hard to keep in a cage, as it was usually clever enough to find a way to open the door unless it was really locked tight. Patrick would regale the class with how intelligent his raccoon had been. Then he would deadpan “but I never expected it to be smart enough to build a copy of itself”.

This was a caution for humility. Then, as now, there was incredible promise, and I would say hype, around Artificial Intelligence, and so this was Patrick’s cautionary note. We might think we were just around the corner from building machines that were just as smart, by whatever measure, as people, but perhaps we were really no more than dexterous raccoons with computers.

At that time AI was not a term that ever appeared in the popular press, IBM went out of its way to say that computers could not think, only people could think, and AI was not thought to be of appropriate stature to be part of many computer science departments. Patrick’s remarks led me to wonder out loud whether we were overwhelming ourselves with our own hubris. I liked to extend Patrick’s thinking and wondered about super-intelligent aliens (biological or otherwise) observing us from high orbit or further afield. I imagined them looking down at us, like we might look at zoo animals, and being amused by our cleverness, but being clear about our limitations. “Look at those ones in their little AI Lab at M.I.T.! They think they are going to be able to build things as smart as themselves, but they have no idea of the complexities involved and their little brains, not even with help from their computers (oh, they have so got those wrong!), are just never going to get there. Should we tell them, or would that be unkind to dash their little hopes? Those humans are never going to develop or understand how intelligence works.”

Still, today, Patrick Winston’s admonition is a timely caution for us humans. A little humility about the possible limits of our capabilities is in order. Humility about the future of AI and also Machine Learning (ML) is in desperately short supply. Hubris, from some AI researchers, from venture capitalists (VCs), and from some captains of technology industries is dripping thick and fast. Often the press manages to amplify the hubris, as that is what makes a good attention grabbing story.



1 “Fast, Cheap and Out of Control: A Robot Invasion of the Solar System”, Rodney A. Brooks and Anita M. Flynn, Journal of the British Interplanetary Society, 42(10)478–485, October 1989.

2 Flesh and Machines, Rodney A. Brooks, Pantheon, New York, 2002.

3 “The Day After Fukushima”, Danielle DeLatte, Space Safety Magazine, (7)7–9, Spring 2013.

4 During the 2014 Abe/Obama summit, as usual, a Japanese humanoid robot built by a Japanese auto maker was brought out to interact with the US President. The robot kicked a soccer ball towards President Obama and he kicked it back, to great applause. Then President Obama turned to his hosts and asked, not so innocently, so is that robot autonomous or tele-operated from behind the scenes? That is an educated and intellectually curious President.

5 Partially in response to the Fukushima disaster the US Defense Advanced Research Projects Agency (DARPA) set up a challenge competition for robots to operate in disaster areas. Japanese teams were entered in this competition; the first time there had been significant interaction between Japanese roboticists and DARPA–they were a very strong and welcome addition. The competition ran from late 2011 to June 5th and 6th of 2015, when the final competition was held. The robots were semi-autonomous, with communications from human operators over a deliberately unreliable and degraded communications link. This short video focusses on the second place team but also shows some of the other teams, and gives a good overview of the state of the art in 2015. For a selection of greatest failures at the competition see this link. I was there and watched all this unfold in real time–it was something akin to watching paint dry, as there were regularly 10 to 20 minute intervals when absolutely nothing happened and a robot just stood there frozen. This is the reality of what our robots can currently do in unstructured environments, even with a team of researchers communicating with them when they can.

6 I have written two previous blog posts on self driving cars. In the first I talked about the ways such cars will need to interact with pedestrians in urban areas, and many of the problems that will need to be solved. In the second I talked about all the uncommon cases that we as drivers in urban situations need to face, such as blocked roads, temporarily banned parking, interacting with police, etc. In a third post I plan to talk about how self driving cars will change the nature of our cities. I do not view self driving cars as doomed by any of these problems; in fact I am sure they will become the default way for cars to operate in the lifetimes of many people who are alive today. I do, however, think that the optimistic forecasts that we have seen from academics, pundits, and companies are wildly off the mark. In fact, that reality is starting to set in. Earlier this month the brand new CEO of Ford said that the previous goal of commercial self driving cars by 2021 was not going to happen. No new date was announced.

7 You can find the 24 lectures of 6.034 online. I particularly recommend lectures 12a and 12b on neural networks and deep neural networks to all those who want to understand the basics of how deep learning works–the only prerequisite is a little multi-variable differential calculus. Earlier in 2017 I posted a just slightly longer introduction than this paragraph.
