Edge Cases For Self Driving Cars

rodneybrooks.com/edge-cases-for-self-driving-cars/

Perhaps through this essay I will get the bee out of my bonnet^{\big 1} that fully driverless cars are a lot further off than many techies, much of the press, and even many auto executives seem to think. They will get here and human driving will probably disappear in the lifetimes of many people reading this, but it is not going to all happen in the blink of an eye as many expect. There are lots of details to be worked out.

In my very first post on this blog I talked about the unexpected consequences of having self driving cars. In this post I want to talk about a number of edge cases, which I think will cause it to be a very long time before we have level 4 or level 5 self driving cars wandering our streets, especially without a human in them, and even then there are going to be lots of problems.

First though, we need to re-familiarize ourselves with the generally accepted levels of autonomy that everyone is excited about for our cars.

Here are the levels from the autonomous car entry in Wikipedia, which attributes this particular set to the SAE (Society of Automotive Engineers); a compact restatement in code follows the list:

  • Level 0: Automated system has no vehicle control, but may issue warnings.
  • Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
  • Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
  • Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks, but must still be prepared to take control when needed.
  • Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
  • Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive and make its own decisions.
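For reference, the same taxonomy can be written down as a small data structure. This is just a restatement of the list above; the level names are my own shorthand, not official SAE terminology.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """SAE-style levels, as summarized in the list above."""
    NO_AUTOMATION = 0       # warnings only, no vehicle control
    DRIVER_ASSISTANCE = 1   # ACC, parking assist, lane keeping; driver always ready
    PARTIAL_AUTOMATION = 2  # system steers/brakes/accelerates; driver must monitor
    CONDITIONAL = 3         # driver may look away in limited environments
    HIGH_AUTOMATION = 4     # no driver attention needed once enabled in suitable conditions
    FULL_AUTOMATION = 5     # set destination; no human intervention required
```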

There are many issues with level 2 and level 3 autonomy, which might make them further off in the future than people are predicting, or perhaps even forever impractical due to limitations on how quickly humans can go from not paying attention to taking control in difficult situations. Indeed, as outlined in this Wired story, many companies have decided to skip level 3 and concentrate on levels 4 and 5. The iconic Waymo (formerly Google) car has no steering wheel or other conventional automobile controls–it is born to be a level 4 or level 5 car. [This image is from Wikipedia.]

So here I am going to talk only about level 4 and level 5 autonomy, and not really make a distinction between them.  When I refer to an “autonomous car” I’ll be talking about ones with level 4 or level 5 autonomy.

I will make a distinction between cars with conventional controls, which are capable of being driven by a human in the normal way, and cars like the Waymo one pictured above with no such controls, which I will refer to as unconventional cars. I’ll use those two adjectives, conventional and unconventional, for cars, and then distinguish what is necessary to make them practical in some edge case circumstances.

I will also refer to gasoline powered driverless cars versus all electric driverless cars, i.e., gasoline vs. electric.

Ride-sharing companies like Uber are putting a lot of resources into autonomous cars. This makes sense given their business model, as they want to eliminate the need for drivers altogether, thus saving their major remaining labor cost. They envision empty cars being summoned by a customer, driving to wherever that customer wants to be picked up, with absolutely no one in the car. Without that, having the autonomy technology doesn’t make sense to this growing segment of the transportation industry. I’ll refer to such an automobile, with no one in it, as a Carempty. In contrast, I’ll refer to an autonomous car which has a conscious person in it, whether it is an unconventional car that they can’t actually drive in the normal way, or a conventional car in which they are not at all involved in the driving, perhaps sitting in the back seat, as a Careless, since presumably that person could not care less about the driving, other than indicating where they want to go.

So we have both an unconventional and a conventional Carempty and Careless, and perhaps they are gasoline or electric.

Many of the edge cases I will talk about here are based on the neighborhood in which I live, Cambridgeport in Cambridge, Massachusetts. It is a neighborhood of narrow one way streets, packed with parked cars on both sides of the road, so that it is impossible to pass if a car or truck stops in the road. A few larger streets are two way, and some of them have two lanes, one in each direction, but at least one nearby two way street only has one lane–one car needs to pull over, somehow, if two cars are traveling in opposite directions (the southern end of Hamilton Street in the block where “The Good News Garage” of the well known NPR radio brothers “Click and Clack” is located).

HOW MUCH DRIVING CAN A NON-DRIVER DO?

In a conventional Careless a licensed human can take over the driving when necessary, unless, say, it is a ride sharing car, in which case humans might be locked out of using the controls directly. For an unconventional Careless, like one of the Waymo cars pictured above, the human cannot take over directly either. So a passenger in a conventional ride-sharing car, or in an unconventional car, is in the same boat. But how much driving can that human do?

In both cases the human passenger needs to be able to specify the destination. For a ride-sharing service that may have been done on a smart phone app when calling for the service. But once in the car the person may want to change their mind, or demand that the car take a particular route–I certainly often do that with less experienced drivers who are clearly going a horrible way, often at the suggestion of their automated route planners. Should all this interaction be via an app? I am guessing, given the rapid improvements in voice systems, such as we see in the Amazon Echo, or the Google Home, we will all expect to be able to converse by voice with any autonomous car that we find ourselves in.

We’ll ignore for the moment a whole bunch of teenagers each yelling instructions and pranking the car. Let’s just think about a lone sensible mature person in the car trying to get somewhere.

Will they only be able to give the destination and some optional route advice, or will they be able to give more detailed instructions when the car is clearly screwing up, or missing some perceptual clue that the occupant can clearly recognize? The next few sections give lots of examples from my neighborhood that are going to be quite challenging for autonomous cars for many years to come, and so such advice will come in handy.

In some cases the human might be called upon to, or just wish to, give quite detailed advice to the car. What if they don’t have a driver’s license? Will they be guilty of illegally driving a car in that case? How much advice should they be allowed to give (spoiler alert, the car might need a lot in some circumstances)? And when should the car take the advice of the human? Does it need to know if the person in the car talking to it has a driver’s license?
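To make the policy decision lurking in those questions concrete, here is a minimal sketch, in Python, of how a car’s software might gate passenger advice. Everything in it is my own invention for illustration: the AdviceLevel categories, the has_drivers_license field, and the accept_advice policy are assumptions, not anything from a real autonomous driving stack.

```python
from enum import Enum
from dataclasses import dataclass

class AdviceLevel(Enum):
    DESTINATION = 1       # "take me to the airport"
    ROUTE_PREFERENCE = 2  # "avoid the bridge, use the river road"
    MANEUVER = 3          # "pull over here", "go around that truck"
    RULE_OVERRIDE = 4     # "back up the wrong way down this one-way street"

@dataclass
class Passenger:
    has_drivers_license: bool

def accept_advice(passenger: Passenger, level: AdviceLevel) -> bool:
    """One possible policy: anyone may set destinations and route
    preferences, but maneuver-level or law-breaking advice is only
    taken from a licensed occupant."""
    if level in (AdviceLevel.DESTINATION, AdviceLevel.ROUTE_PREFERENCE):
        return True
    return passenger.has_drivers_license
```

Even this toy policy begs the questions above: how would the car verify the license, and when, if ever, should it overrule a licensed human?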

Read on.

WHAT TO DO ABOUT A BLOCKED ROAD

In my local one-way streets the only thing to do if a car or other vehicle is stopped in the travel lane is to wait for it to move on. There is no way to get past it while it stays where it is.

The question is whether to toot the horn or not at a stopped vehicle.

Why would it be stopped? It could be a Lyft or an Uber waiting for a person to come out of their house or condominium. A little soft toot will often get cooperation and they will try to find a place a bit further up the street to pull over.  A loud toot, however, might cause some ire and they will just sit there. And if it is a regular taxi service then no amount of gentleness or harshness will do any good at all. “Screw you” is the default position.

Sometimes a car is stopped because the driver is busy texting, most usually when they are at an intersection, had to wait for someone to cross in front of them, their attention wandered, they started reading a text, and now they are texting and have forgotten that they are in charge of an automobile. From behind one can often tell what they are up to by noticing their head inclination, even from inside the car behind. A very gentle toot will usually get them to move; they will be slightly embarrassed at their own illegal (in Massachusetts) behavior.

And sometimes it is a car stopped outside an eldercare residence building with someone helping a very frail person into or out of the car.  Any sort of toot from a stopped car behind is really quite rude in these circumstances, distressing for the elderly person being helped, and rightfully ire raising for the person taking care of that older person.

Another common case of road blockage is a garbage truck stopped to pick up garbage. There are actually two varieties, one for trash, and one for recyclables. It is best to stop back a bit further from these trucks than from other things blocking the road, as people will be running around to the back of the truck and hoisting heavy bins into it. And there is no way to get these trucks to move faster than they already are. Unlike other trucks, they will continue to stop every few yards. So the best strategy is to follow, stop and go, until the first side street and take that, even if, as it is most likely a one-way street, it sends you off in a really inconvenient direction.

Yet a third case is a delivery truck. It might be a US Postal Service truck, or a UPS or Fedex truck, or sometimes even an Amazon branded truck. Again tooting these trucks makes absolutely no difference–often the driver is getting a signature at a house, or may be in the lobby of a large condominium complex. It is easy for a human driver to figure out that it is one of these sorts of trucks. And then the human knows that it is not so likely to stop again really soon, so staying behind this truck once it moves rather than taking the first side street is probably the right decision.

If on the other hand it is a truck from a plumbing service, say, it is worth blasting it with your horn. These guys can be shamed into moving on and finding some sort of legal parking space. If you just sit there however it could be many minutes before they will move.

A Careless automobile could ask its human occupant whether it should toot. But should it make a value judgement if the human is spontaneously demanding that it toot its horn loudly?

A Carempty automobile could just never toot, though the driver in a car behind it might start tooting it, loudly. Not tooting is going to slow down Carempties quite a bit, and texting drivers just might not care at all if they realize it is a Carempty that is showing even a little impatience. And should an autonomous car be listening for toots from a car behind it, and change its behavior based on what it hears? We expect humans to do so. But are the near future autonomous cars going to be so perfect already that they should take no external advice?

Now if Carempties get toot happy, at least in my neighborhood, residents will have cars tooting outside their houses at a much higher rate than at the moment, which will annoy them, and the Carempties might start to annoy the human drivers in the neighborhood too.

The point here is that there are a whole lot of perceptual situations that an autonomous vehicle will need to recognize if it is to be anything more than a clumsy moronic driver (an evaluation us locals often make of each other in my neighborhood…). As a class, autonomous vehicles will not want to get such a reputation, as the humans will soon discriminate against them in ways subtle and not so subtle.
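To see how much of this is a perception problem rather than a policy problem, here is a deliberately naive sketch of the etiquette described above, written as a lookup table. The Blocker categories, the ETIQUETTE table, and respond_to_blockage are hypothetical names of my own; the point is that the table itself is trivial, while reliably producing its input, classifying why the vehicle ahead is stopped, is the genuinely hard part.

```python
from enum import Enum, auto

class Blocker(Enum):
    RIDE_SHARE_WAITING = auto()
    TAXI = auto()
    TEXTING_DRIVER = auto()
    ELDERCARE_PICKUP = auto()
    GARBAGE_TRUCK = auto()
    DELIVERY_TRUCK = auto()
    TRADES_TRUCK = auto()   # e.g., a plumbing van

# A toy etiquette table distilled from the scenarios above:
# (toot or not, volume, suggested follow-on action)
ETIQUETTE = {
    Blocker.RIDE_SHARE_WAITING: ("toot", "soft", "wait"),
    Blocker.TAXI:               ("no_toot", None, "wait"),
    Blocker.TEXTING_DRIVER:     ("toot", "very_soft", "wait"),
    Blocker.ELDERCARE_PICKUP:   ("no_toot", None, "wait_patiently"),
    Blocker.GARBAGE_TRUCK:      ("no_toot", None, "turn_off_at_first_side_street"),
    Blocker.DELIVERY_TRUCK:     ("no_toot", None, "stay_behind_once_it_moves"),
    Blocker.TRADES_TRUCK:       ("toot", "loud", "wait"),
}

def respond_to_blockage(blocker: Blocker):
    # The lookup is trivial; the unsolved part is the perception needed
    # to produce `blocker` reliably in the first place (reading a head
    # inclination, spotting a frail passenger, telling a UPS truck from
    # a plumber's van in the rain).
    return ETIQUETTE[blocker]
```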

Maps Don’t Tell the Whole Story

Recently I pulled out of my garage and turned right onto the one way street that runs past my condominium building, and headed to the end of my single block street, expecting to turn right at a “T” junction onto another one way street. But when I got there, just to the right of the intersection the street was blocked by street construction, cordoned off, and with a small orange sign a foot or so off the ground saying “No Entry”.

The only truly legal choice for me to make was to stop. To go back from where I had come I needed to travel the wrong way on my street, facing either backwards or forwards, and either stopping at my garage, or continuing all the way to the street at the start of my street. Or I could turn left and go the wrong way on the street I had wanted to turn right onto, and after a block turn off onto a side street going in a legal direction.

A Careless might inform its human occupant of the quandary and ask for advice on what to do. That person might be able to do any of the social interactions needed should the Careless meet another car coming in the legal direction under either of these options.

But a Carempty will need some extra smarts for this case. Either hordes of empty cars will eventually pile up at this intersection, or each one will need to decide to break the law and go the wrong way down one of the two one way streets–that is what I had to do that morning.

The maps that a Carempty has won’t help it a whole lot in this case, beyond letting it know the minimum distance it is going to have to be in a transgressive state.

Hmmm. Is it OK for a Carempty to break the law when it decides it has to? Is it OK for a Careless to break the law when its human occupant tells it to? In the situation I found myself in above, I would certainly have expected my Careless to obey me and go the wrong way down a one way street. But perhaps the Careless shouldn’t do that if it knows that it is transporting a dementia patient.

The Police

How are the police supposed to interact with a Carempty?

While we have both driverful and driverless cars on our roads I think the police are going to assume that, as with driverful cars, they can interact with them by waving them through an intersection, perhaps through a red light, stopping them with a hand signal at a green light, or stopping them just to allow someone to cross the road.

But besides being able to understand what an external human hand signaling them is trying to convey, autonomous cars probably should try to certify, in some sense, whether the person giving them those signals is doing so with authority, with politeness, or with malice. Certainly police should be obeyed, and police should expect that they will be. So the car needs to recognize when someone is a police officer, no matter what additional weather gear they might be wearing. Likewise they should recognize and obey school crossing monitors. And road construction workers. And pedestrians giving them a break and letting them pass ahead of them. But should they obey all humans at all times? And what if in a Careless situation their human occupant tells them to ignore the taunting teenager?

Sometimes a police officer might direct a car to do something otherwise considered illegal, like drive up on to a sidewalk to get around some road obstacle. In that case a Carempty probably should do it. But if it is just the delivery driver whose truck is blocking the road wanting to get the Carempty to stop tooting at them, then probably the car should not obey, as then it could be in trouble with the actual police. That is a lot of situational awareness for a car to have to have.

Things get more complicated when it is the police and the car is doing something wrong, or there is an extraordinary circumstance which the car has no way of understanding.

In the previous section we just established that autonomous cars will sometimes need to break the law. So police might need to interact with law breaking autonomous cars.

One view of the possible conundrum is this cartoon from the New Yorker. There are two instantly recognizable Waymo style self driving cars, with no steering wheels or other controls, one a police car that has just pulled over the other car. Both have people in them, and the cop is asking the guy in the car that has just been pulled over, “Does your car have any idea why my car pulled it over?”.

If an autonomous car fails to see a temporary local speed sign and gets caught in a speed trap, how is it to be pulled over? Does it need to understand flashing blue lights and a siren, and does it do the pull to the side in a way that we have all done, only to be relieved when we realize that we were not the actual target?

And getting back to when I had to decide to go the wrong way down a one way street, what if a whole bunch of Carempties have accumulated at that intersection and a police officer is dispatched to clear them out? For driverful cars a police officer might give a series of instructions and point out in just a few seconds who goes first, who goes second, third, etc. That is a subtle, elongated set of gestures that I am pretty sure no deep learning network at the moment has any hope of interpreting, of fully understanding the range of possibilities that a police officer might choose to use.

Or will it be the case that the police need to learn a whole new gesture language to deal with driverless cars? And will all makes of car understand the same language?

Or will we first need to develop a communication system that all police officers will have access to and which all autonomous cars will understand so that police can interact with autonomous cars? Who will pay for the training? How long will that take, and what sort of legislation (in how many jurisdictions) will be required?
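If such a communication system were ever standardized, one could imagine something as simple as a signed, time-stamped instruction that a car can verify before obeying. The sketch below is purely speculative: the OfficerCommand fields, the shared-key HMAC scheme, and the command vocabulary are all assumptions of mine, not any existing or proposed standard.

```python
from dataclasses import dataclass
import hmac, hashlib

@dataclass
class OfficerCommand:
    officer_id: str
    command: str          # e.g. "PROCEED", "STOP", "MOUNT_SIDEWALK"
    target_vehicle: str   # plate or VIN
    timestamp: int        # seconds since epoch, to prevent replay
    signature: bytes

def verify_command(cmd: OfficerCommand, shared_key: bytes) -> bool:
    """Check that the instruction really came from a credentialed
    officer before the car agrees to do something otherwise illegal."""
    payload = f"{cmd.officer_id}|{cmd.command}|{cmd.target_vehicle}|{cmd.timestamp}".encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cmd.signature)
```

Even a sketch this small hints at why the questions about training, cost, legislation, and jurisdictions are not trivial: every police force and every car maker would have to agree on the vocabulary and on who manages the keys.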

Getting Towed

A lot of cars get towed in Cambridge. Most streets get cleaned on a regular schedule (different sides of the same street on different days), and if your car is parked there at 7am you will get towed–see the sign in the left image. And during snow emergencies, or without the right sticker/permit you might get towed at any time. And then there are pop-up no parking signs, partially hand written, that are issued by the city on request for places for moving vans, etc. Will our autonomous cars be able to read these? Will they be fooled by fake signs that residents put up to keep pesky autonomous cars from taking up a parking spot right outside their house?

If an unconventional Carempty is parked on the street, one assumes that it might at any time start up upon being summoned by its owner, or if it is a ride-share car when its services are needed. So now imagine that you are the tow truck operator and you are supposed to be towing such a car. Can you be sure it won’t try driving away as you are crawling under it to connect the chains, etc., to tow it? If a human runs out to move their car at the last minute you can see when things are going to start and adjust. How will it work with fully autonomous cars?

And what about a Carempty that has a serious breakdown, perhaps in its driving system, so that it just sits there and can no longer safely move itself? That will most likely need to be towed. Can the tow truck operator have some way to guarantee that it is shut down and will not jump back to life, especially when the owner has not been contactable and so cannot put it in safe mode remotely? What will be the protocols and regulations around this?

And then if the car is towed, and I know this from experience, it is going to be in a muddy lot full of enormous potholes in some nearby town, with no marked parking areas or driving lanes. The cars will have been dumped at all angles, higgledy-piggledy. And the lot is certainly not going to have its instantaneous layout mapped by one of the mapping companies providing the maps that autonomous cars rely on for navigation. To retrieve such a car a human is likely going to have to go do it (and pay before getting it out), but if it is an unconventional car it is certainly going to require someone in it to talk it through getting out of there without angering the lot owner (and again from experience, that is a really easy thing to do–anger the lot owner). Yes, in some distant future tow lots in Massachusetts will be clean, and flat with no potholes deeper than six inches, and with electronic payment systems, and all will be wonderful for our autonomous cars to find their way out.

Don’t hold your breath.

OTHER TRICKY SITUATIONS

What happens when a Carempty is involved in an accident? We know that many car companies are hoping that their cars will never be involved in an accident, but humans are dumb enough that as long as there are both human drivers and autonomous cars on the same streets, sometimes a human is going to drive right into an autonomous car.

Autonomous cars will need to recognize such a situation and go through some protocol. There is a ritual when a fender bender happens between two driverful cars. Both drivers stop and get out of their cars, perhaps blocking traffic (see above), and go through a process of exchanging insurance information. If one of the cars is an autonomous vehicle then the human driver can take a photo on their phone (technology to the rescue!) of the autonomous car’s license plate. But how is a Carempty supposed to find out who hit it? In the distant future when all the automobile stock on the road have transponders (like current airplanes) that will be relatively easy (though we will need to work through horrendous privacy issues to get there), but for the foreseeable future this is going to be something of a problem.

And what about refueling? If a ride-sharing car is gasoline powered and out giving rides all day, how does it get refueled? Does it need to go back to its home base to have a human from its company put in more gasoline? Or will we expect to have auto refueling stations around our cities? The same problem will be there even if we quickly pass beyond gasoline powered cars. Electric Carempties will still need to recharge–will we need to replace all the electric car recharging stations that are starting to pop up with ones that require no human intervention?

Autonomous cars are likely to require lots of infrastructure changes that we are just not quite ready for yet.

Impacts on the Future of Autonomous Cars

I have exposed a whole bunch of quandaries here for both Carempties and Carelesses. None rise to the moral level of the so called trolley problem (do I kill the one nun or seven robbers?) but unlike the trolley problem variants of these edge cases are very likely to arise, at least in my neighborhood. There will be many other edge case conundrums in the thousands, perhaps millions, of unique neighborhoods around the world.

One could try to have some general purpose principles that cars could reason from in any circumstances, perhaps like Asimov’s Three Laws^{\big 2}, and perhaps tune the principles to the prevailing local wisdom on what is appropriate or not. In any case there will need to be a lot of codifying of what is required of autonomous cars in the form of new traffic laws and regulations. It will take a lot of trial and error and time to get these laws right.

Even with an appropriate set of guiding principles there are going to be a lot of perceptual challenges for both Carempties and Carelesses that are way beyond those that current developers have solved with deep learning networks, and perhaps a need for a lot more automated reasoning than any AI systems have so far been expected to demonstrate.

I suspect that to get this right we will end up wanting  our cars to be as intelligent as a human, in order to handle all the edge cases appropriately.

And then they might not like the wage levels that ride-sharing companies will be willing to pay them.



^{\big 1}But maybe not.  I may have one more essay on how driverless cars are going to cause major infrastructure changes in our cities, just as the original driverful cars did. These changes will be brought on by the need for geofencing–something that I think proponents are underestimating in importance.

^{\big 2}Recall that Isaac Asimov used these laws as a plot device for his science fiction stories, by laying out situations where these seemingly simple and straightforward laws led to logical fallacies that the story protagonists, be they robot or human, had to find a way through.

Is War Now Post Kinetic?

rodneybrooks.com/is-war-now-post-kinetic/

When the world around us changes, often due to technology, we need to change how we interact with it, or we will not do well.

Kodak was well aware of the digital photography tsunami it faced but was not able to transform itself from a film photography company until too late, and is no more. On the other hand, Pitney Bowes started its transformation early from a provider of mail stamping machines to an eCommerce solutions company and remains in the S&P 500.

Governments and politicians are not immune from the challenges that technological change produces on the ground, and former policies and vote getting proclamations may lag current realities^{\big 1}.

I do wonder if war is transforming itself around us to being fought in a non-kinetic way, and which nations are aware of that, and how that will change the world going forward. And, importantly for the United States, what does that say about what its Federal budget priorities should be?

A Brief History of Kinetic War

The technology of war has always been about delivering more kinetic energy, faster, more accurately and with more remote standoff from the recipient of the energy, first to human bodies, and then to infrastructure and supply chains.

New technologies caused changes in tactics and strategies, and many of them eventually made old technologies obsolete, but often a new technology would co-exist with one that it would eventually supplant for long periods, even centuries.

One imagines that the earliest weapons used in conflicts between groups of people were clubs and axes of various sorts. These early wars were fought in close proximity, delivering kinetic blows directly to another’s body.

By about 4,400 years ago the first copper daggers appeared, and by 3,600 years ago, bronze swords appeared, allowing for an attack at a slightly longer distance, perhaps out of direct reach of the victim. Even today our infantries are equipped with bayonets on the ends of guns to deliver direct kinetic violence to another’s body through the use of human muscles. With daggers and swords the kinetic blows could be much more deadly as they needed less human energy to cause bleeding.

Simultaneously the first “stand off” weapons were developed; bows and arrows 12,000 years ago, most likely with a very limited range. The Egyptians had bows with a range of 100 meters a little less than 4,000 years ago. A bow stores the energy from human muscle in a single drawing motion, and then delivers it all in a fraction of a second. These weapons did not eliminate hand to hand combat, but they did allow engagement from a distance. With the introduction of horses and later chariots, there was added the element of speed of closing from too far away to engage to being in engagement range very quickly. These developments were all aimed at getting bleed-producing kinetic impacts on humans from a distance.

A little less than 3,000 years ago war saw a new way to use kinetic energy; thermally. No longer was it just the energy of human muscles that rained down on the enemy, but that from fire. First from burning crops, but soon by delivering burning objects via catapults and other throwing devices. Those throwing devices started out just delivering heavy weights, through the muscle energy of many people stored over many minutes of effort. But once burning objects were being thrown they could deliver the thermal energy stored in the projectile, as well as unleash more thermal energy by setting things on fire in the landing area.

During the 8th to 16th century, hurled anti-personnel weapons, those aimed at individual people, were developed where projectiles full of hot pitch, oil, or resin, were thrown by mechanical devices, again with stored human energy, intended to maim and disable an individual human that they might hit.

The arrival of chemical explosives ultimately changed most things about warfare, but there was a surprisingly long coexistence with older weapons. The earliest form of gunpowder was developed in 9th century China, and it reached Europe courtesy of the Mongols in 1241. The cannon, which harnessed that explosive power to deliver high amounts of kinetic energy in the form of metal or stone balls, provided both more distant standoff and more destructive kinetics, and was well developed by the 14th century, with the first man portable versions coming of age in the 15th century.

But meanwhile the bow and arrow made a comeback, with the English longbow, traditionally made from yew (and prompting a Europe-wide trade network in that wood), having a range of 300 meters in the 14th and 15th centuries. It was contemporary with the cannon, but the agility of being carried by a single bowman led to it being the major reason for victory in a large scale battle as late as the Battle of Agincourt in 1415.

The cannon changed the nature of naval warfare, and naval warfare itself was about logistics and supply lines, and later about being a mobile platform to pound installations on the coast from the safety of the sea. Ships also changed over time due to new technologies for their propulsion, from oars, to sails, to steam, and ultimately to nuclear power, making them faster and more reliable. Meanwhile the mobile cannon was developed into more useful sorts of weapons, and with the invention of bullets (which combined the powder and projectile into a compact pre-manufactured expendable device), guns and then machine guns became the preferred weapon of the ground soldier.

Each of these technological developments improved upon the delivery of kinetic energy to the enemy, over time, in fits and starts making that delivery faster, more accurate, more energetic, and with more distant standoff.

Rarely were the new technologies adopted quickly and universally, but over time they often made older technologies completely obsolete. One wonders how quickly people noticed the new technologies, how they were going to change war completely, and how they responded to those changes.

Latter Day War

In the last one hundred or so years, from the beginning of the Great War, also known as World War I, we have seen continued technological change in how kinetic energy is delivered during conflict. In the Great War we saw both the introduction of airplanes, originally as intelligence gathering conveyances, but later as deliverers of bullets and bombs, and the introduction of tanks. Even with mechanization, the United States Army still had twelve horse regiments, each of 790 horses, at the beginning of World War II. They were no match for tanks, and hard to integrate with tank units, so eventually they were abolished.

By the end of World War II we had seen both the deployment of missiles (the V1 and V2 by Germany), and nuclear weapons (by the United States). Later married together, nuclear tipped missiles became the defining, but unused, technology that redefined the nature of war between superpowers. Largely that notion is obsolete, but North Korea, a small poor country, is actively flirting with it again these very days.

Another innovation in World War II, practiced by both sides, was massive direct kinetic hits on the civilian populations of the enemy, delivered through the air. For the first time kinetic energy could be delivered far inside territory still held by the enemy, and damage to infrastructure and morale could be wrought without the need to invade on the ground. Kinetically destroying large numbers of civilians was also part of the logic of MAD, or Mutually Assured Destruction, of the United States and the USSR pointing massive numbers of nuclear tipped missiles at each other during the cold war.

Essentially now war is either local engagements between smaller countries, or asymmetric battles between large powers and smaller countries or non-state actors. The dominant approach for the United States is to launch massive ship and air based volleys of Tomahawk Cruise Missiles, with conventional kinetic war heads, to degrade the war fighting infrastructure in the target territory, and then to put boots on the ground. The other side deploys harassing explosives both as booby traps, and to target both the enemy and local civilians through using human suicide bombers as a stand off mechanism for those directing the fight. As part of this asymmetry the non-state actors continually look for new ways to deliver kinetic explosions on board civilian aircraft, which has had the effect of making air travel worldwide more and more unpleasant for the last 16 years.

In slow motion each class of combatant changes their behavior to respond to new, and past, technologies deployed or threatened by the other side.

But over the whole history of war, rulers and governments have had to face the issue of what war to prepare for and where to place their resources. When should a country stop concentrating on sources of yew and instead invest more heavily in portable cannons? When should a country give up on supporting regiments of horses? When should a country turn away from the ruinous expense of yet higher performance fighter planes whose performance is only needed to engage other fighter planes and instead invest more heavily in cruise missiles and drones with targeted kinetic capabilities?

How should a country balance its portfolio of spending between the old technologies of war and putting enough muscle behind the new technologies, so that it can ride up the curve of the new technology, defending against it adequately, and perhaps deploying it itself?

BUT HAS A NEW FORM OF WAR ARRIVED?

In the late nineteenth century fortunes were made in chemistry for materials and explosives. In the early part of the twentieth century extraordinary wealth for a few individuals came from coal, oil, automobiles, and airplanes. In the last thirty years that extraordinary wealth has come to the masters of information technology through companies such as Microsoft, Apple, Oracle, Google, and Facebook. Information technology is the cutting edge. And so, based on history, one should expect that technology to be where warfare will change.

Indeed, we saw in WW II the importance of cryptography and the breaking of cryptography, and the machines built at Bletchley Park in service of that gave rise to digital computers.

In the last few years we have seen how our information infrastructure has been attacked again and again for criminal reasons, with great amounts of real money being stolen, solely in cyberspace. Pacifists^{\big 2} might say that war is just crime on an international scale, so one should expect that technologies that start out as part of criminal enterprises will be adopted for purposes of war.

We have seen over the last half dozen years how non-state actors have used social media on the Internet to recruit young fighters from across the world to come and partake in their kinetic wars where those recruiters reside, or to wage kinetic violence inside countries far removed physically from where the recruiters reside. The Internet has been a wonderful new stand off tool, allowing distant ring-masters to burrow into distant homelands and detonate kinetic weapons constructed locally by people the ring-masters have never met in person. This has been an unexpected and frightening evolution of kinetic warfare.

In the early parts of this decade a malicious computer worm named Stuxnet, most probably developed by the US and Israel, was deployed widely through the Internet. It infected Microsoft operating systems, and sniffed out whether they were talking to Siemens PLCs (Programmable Logic Controllers), and whether they were controlling nuclear centrifuges. Then it slowly degraded those centrifuges while simulating reports that said all was well with them. It is believed that this attack destroyed one fifth of Iran’s centrifuges. Here a completely cyber attack, with standoff all the way back to an office PC, was able to introduce a kinetic (slow though it may have been) attack in the core of an adversary’s secret facilities. And it was aimed at the production of the ultimate kinetic weapon, nuclear bombs. War is indeed evolving rapidly.

But now in the 2016 US presidential election, and again in the 2017 French presidential election, we have seen, though all the details are not yet out, a glimpse of a future form of warfare in which kinetic weapons are not used at all. Nevertheless these have been acts of war. US intelligence services announced in 2016 that there had been Russian interference in the US election. The whole story is still to come out, but in both the US and French elections there were massive dumps of cyber-stolen internal emails from one candidate’s organization, timed exquisitely in both cases down to just a few minutes’ window of maximum impact. This was immediately, within minutes, followed by thousands of seemingly unrelated people looking through those emails claiming clues to often ridiculous malevolence. In both elections the mail dumps included faked emails which had sinister interpretations, uncovered by the armies of people looking through the emails for a smoking gun. These attacks most probably changed the outcome of the US election, but failed in France. This is post kinetic war, waged in a murky world where the citizens of the attacked country can never know what to believe.

Let us be clear about the cleverness and monumental nature of these attacks. An adversary stands off, thousands of miles away, with no physical intrusion, and changes the government of its target to be more sympathetic to it than the people of the target country wanted. There are no kinetic weapons. There are layers of deception and layers of deniability. The political system of the attacked country has no way to counteract the outcome desired and produced by the enemy. The target country is dominated by the attacking adversary. That is a successful post kinetic war.

Technology changes how others act and how we need to act. Perhaps the second amendment to the US Constitution, allowing for an armed civilian militia to fight those who would destroy our Republic, is truly obsolete. Perhaps the real need is to equip the general population of the United States with tools of privacy and cyber security, both at a personal level, and in the organizations where they work. Just as WW II showed the obsolescence of physical borders to protect against kinetic devices raining from the sky, so too now we have seen that physical borders no longer protect our fundamental institutions of civil society and of democracy.

We need to learn how to protect ourselves in a new era of post kinetic war.

We see a proposed 2018 US Federal budget building up the weapons of kinetic war way beyond their current levels. Kinetic war will continue to be something we must protect against–it will remain an avenue of attack for a long time. We saw above how the English longbow was still a credible weapon, coexisting with cannon and other uses of gunpowder for centuries, though now its utility is well gone.

However, we must not give up worrying about kinetic war, but we must start investing in strength and protection against a new sort of post kinetic war that has really only started in the last twelve months. With $639B slated for defense in the proposed 2018 budget, and even $2.6B for a border fence, surely we can spend a few little billions, maybe even just one or two, on figuring out how to protect the general population from this newly experienced form of post kinetic war. I have recommendations^{\big 3}.

We don’t want the United States to have its own Kodak moment.



^{\big 1}For instance, in just six months from this last October to April, more jobs were lost in retail in the US than the total number of US coal jobs. Not only did natural gas, wind, and solar technology decimate coal mining, jobs never to return, but information technology has enabled fulfillment centers, online ordering, and delivery to the home, completely decimating the US retail sector, a sector that is many times bigger than coal.

^{\big 2}I do not count myself as a pacifist.

^{\big 3}Where in the Federal Government should such money be spent? The NSA (National Security Agency) has perhaps the most sophisticated group of computer scientists and mathematicians working on algorithms to wage and protect against cyber war. But it is not an agency that shares that protection with the general population and businesses, just as the US Army does not protect individual citizens or even recommend how they should protect themselves. No, the agency that does this is NIST, the National Institute of Standards and Technology, part of the Department of Commerce. It provides metrology standards which enable businesses to have a standard connection to the SI units of measurement. But it also has (with four Nobel prizes under its belt) advanced fundamental physics so that we can measure time accurately (and hence have working GPS), it has been a key contributor, through its measurements of radio wave propagation, to the 3G, 4G, and coming 5G standards for our smart phones, and it is contributing more and more to biological measurements necessary for modern drug making. But for the purpose of this note its role in cybersecurity is omni-important. NIST has provided a Cybersecurity Framework for businesses, now followed by half of US companies, giving them a set of tools and assessments to know whether they are making their IT operations secure. And, NIST is now the standards generator and certifier for cryptography methods. The current Federal budget proposal makes big cuts to NIST’s budget (in the past its total budget has been around $1B per year). Full disclosure: I am a member of NIST’s Visiting Committee on Advanced Technology (VCAT). That means I see it up close. It is vitally important to the US and to our future. Now is not the time to cut its budget but to support it as we find our way in our future of war that is post kinetic.