
Rodney Brooks’ Three Laws of Artificial Intelligence


I have recently blogged about my three laws of robotics. Here I talk about my three laws of Artificial Intelligence: about how people perceive AI systems, how they operate in the world, and how difficult it is to make them general purpose in any sense.

  1. When an AI system performs a task, human observers immediately estimate its general competence in areas that seem related. Usually that estimate is wildly overinflated.
  2. Most successful AI deployments have a human somewhere in the loop (perhaps the person they are helping), and that person’s intelligence smooths the edges.
  3. Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically, all those fixes are AI-complete themselves.

I very briefly elaborate on these three laws.

Overestimating Competence

In my blog post titled The Seven Deadly Sins of Predicting the Future of AI, I talked about the tendency people have to extrapolate general competence in some area from an observation of a more specific performance:

People hear that some robot or some AI system has performed some task. They then take the generalization from that performance to a general competence that a person performing that same task could be expected to have. And they apply that generalization to the robot or AI system.

The competence of AI systems tends to be very narrow, and we humans don’t have the right model for estimating that competence. The overhyping of LLMs in the last two years is a case in point. LLMs cannot reason at all, but otherwise smart people are desperate to claim that they can. No, you are wrong. LLMs do not reason, by any reasonable definition of reason. They are not doing what humans are doing when we say they are reasoning, and applying that word to LLMs by saying it is “reasoning but different” simply leads to gross failures in predicting how well the technology will work.

Person in the Loop

People get further confused by the fact that there is usually a person in the loop with deployed AI systems. This can happen in two different ways:

  1. The person who is using the system bears some of the responsibility for what the system does and consciously, or unconsciously, corrects for it. In the case of search engines, for instance, the AI system offers a number of search results and the person down-selects and does the final filtering of those results. The search engine does not have to be as intelligent as it would need to be if it were going to apply the results of its search directly in the real world. In the case of Roombas, there is a handle on the robot vacuum cleaner, and the owner steps in and gets the robot out of trouble by picking it up and moving it. (A minimal sketch of this pattern appears after this list.)
  2. The company that is deploying the AI system is operating on a “fake it until you make it” strategy, and there is a person involved somewhere, but that fact is deliberately obfuscated. It turns out that both of the large scale deployments of autonomous vehicles in San Francisco had, and have, remote people ready to invisibly help the cars get out of trouble, and they are doing so every couple of minutes. A major online seller recently shut down its scanless supermarkets, where a customer could walk in, pick up whatever they wanted, walk out, and have their credit card charged the correct amount. Every customer was consuming an hour or more of remote human labor (in India, as it happened), with people watching and rewatching videos of the customer to determine what they had put in their shopping baskets. Likewise for all the campus delivery robot systems: there the companies are selling universities the appearance of being at the forefront of technology adoption, and not actually providing a robot service at all.
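
As a rough illustration of the first of these, here is a minimal sketch of the down-selection pattern: the AI system only narrows the field, and a person makes the final call. The names rank_candidates and ask_person_to_choose are hypothetical placeholders, not any particular product’s API.

```python
# Minimal sketch of the human-in-the-loop pattern from point 1 above.
# rank_candidates and ask_person_to_choose are hypothetical placeholders.

def handle_query(query, rank_candidates, ask_person_to_choose, k=10):
    """The AI narrows the field; a person does the final down-selection."""
    candidates = rank_candidates(query)[:k]  # broad, fallible AI ranking
    if not candidates:
        return None
    # Nothing is acted on in the real world until a person has filtered
    # out the irrelevant or wrong results.
    return ask_person_to_choose(query, candidates)
```

The system can afford to be frequently wrong precisely because the person’s intelligence smooths the edges, which is the point of my second law.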

The Long Tail

Claude Shannon first outlined how a computer might be programmed to play chess in an article in Scientific American back in February 1950. He even suggested some mechanisms that might be used to make it learn to play better. In 1959 Arthur Samuel described an implemented program that worked that way and used those learning methods to play the game of Checkers. It was the first time the phrase “machine learning” was used in print. His Checkers program went on to become quite good and to play at an expert human level, even with the tiny computational power of the early 1960s.

But Chess turned out to be much harder. Decades of work went into both improving the learning capabilities of chess programs and pushing the look-ahead search of possible moves further and further. Moore’s Law was the winner, and by the late 1990s a program had beaten Garry Kasparov, the world champion. There was a lot of controversy over whether that program, Deep Blue from IBM, was a general purpose chess program or a machine dedicated to beating Kasparov’s particular style of play. By the early 2000s the doubts were gone. Chess programs had gotten good enough to beat any human player.

Chess programs got rid of having to deal individually with special cases through brute force. But Chess is a perfect information game, with no uncertainties and no surprises about which pieces can be on the board. It is a closed world.
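
To make the brute-force point concrete, here is a minimal sketch of the kind of depth-limited look-ahead search (in negamax form) that, scaled up by Moore’s Law and better evaluation functions, carried chess programs past human players. GameState and its legal_moves, apply, and evaluate methods are hypothetical placeholders, not the interface of any real engine.

```python
# Minimal sketch of depth-limited look-ahead search in negamax form.
# GameState, legal_moves(), apply(), and evaluate() are hypothetical.

def negamax(state, depth):
    """Best achievable score for the player to move, searching `depth` plies."""
    if depth == 0 or not state.legal_moves():
        # Static evaluation of a leaf position, from the point of view
        # of the player to move.
        return state.evaluate()
    best = float("-inf")
    for move in state.legal_moves():
        # The opponent's best reply, negated, is our score for this move.
        best = max(best, -negamax(state.apply(move), depth - 1))
    return best

def best_move(state, depth):
    """Pick the move whose resulting position searches out best for us."""
    return max(state.legal_moves(),
               key=lambda m: -negamax(state.apply(m), depth - 1))
```

In a closed world like chess, deeper search plus a better evaluation function is enough; there is no special case the search cannot in principle reach.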

The places where we really want AI to work well are in more general open worlds. Our roads, where cars drive, are an open world. All sorts of things can happen infrequently but need to be handled when they do happen, even if the circumstances are different from everything an AI driving system has experienced before. There are tornados, blizzards, hurricanes, wind-borne tumbleweeds, plastic bags, sheets of styrofoam, and a million other things we could enumerate. Humans use general capabilities, non-driving capabilities, to figure out what to do in unexpected circumstances when driving. Unless an AI system has such general capabilities, every one of the million “long tail events” needs to be trained for.

The things we are asking our AI systems to do, or believing the hype that they can do, have lots and lots of special cases that are not subject to the simple brute force of increasing compute power (despite the hucksters claiming precisely that: just give me $10 billion more of cloud services and our system will learn it all).

No, instead we need to put boxes around our AI systems and products and control where they are applied. The alternative is to have brittle systems that will end up causing economic and perhaps safety upheavals.
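
As one rough sketch of what putting a box around a system can mean in practice: only let the system act on inputs it is known to handle, and route everything else to a person. The names in_supported_domain and escalate_to_human are hypothetical placeholders, not a prescription for any particular product.

```python
# Minimal sketch of "boxing in" an AI deployment: act only inside the
# cases the system is known to handle; hand the long tail to a person.
# in_supported_domain and escalate_to_human are hypothetical placeholders.

def boxed_dispatch(request, model, in_supported_domain, escalate_to_human):
    if in_supported_domain(request):
        return model(request)          # the narrow competence the system was built for
    return escalate_to_human(request)  # the long tail of special cases
```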
