by Andy Slote - Sep 01 2021
When I was a junior-high-age kid living in rural New England, some kids in my neighborhood came up with a plan to lessen the boredom of a summer evening. Their brilliant idea was to tie monofilament ("fishing") line to a pole on one side of a dark road and walk across the street while unreeling the spool, wrapping it around another pole on the opposite side. Crossing back and forth multiple times built something resembling a spider web, especially when illuminated by a vehicle's headlights.
Cars on this road were infrequent, traveling at speeds of around fifty miles per hour. Some drivers paid little heed to the web, driving through at a consistent speed without reacting. At the other extreme, some "locked up" the brakes, coming to a screeching halt in a cloud of tire smoke.
This unexpected, potentially confusing visual was not particularly dangerous but did require a quick assessment for an appropriate reaction. Humans had widely divergent responses. One can only speculate how an autonomous vehicle would respond.
The recent characterization of vehicle AI as "sophisticated pattern matching," offered by the executive of a defunct autonomous-vehicle startup, captures the essence of how the logic works. It also raises questions about the challenges such a system faces when it encounters something it has never seen before.
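The limitation of pure pattern matching can be illustrated with a deliberately tiny sketch. This is a toy nearest-centroid classifier over made-up 2-D feature values, not anything resembling a real perception stack; the labels and numbers are hypothetical. The point it makes: a matcher forced to choose among known patterns will confidently pick one even for an input far from anything it was trained on, unless a "this is unfamiliar" check is added explicitly.

```python
import math

# Toy "pattern matcher": each class is the centroid of its training
# examples in a hypothetical 2-D feature space (values invented for
# illustration only).
TRAINING = {
    "car":        [(9.0, 1.0), (10.0, 2.0), (11.0, 1.5)],
    "pedestrian": [(1.0, 9.0), (2.0, 10.0), (1.5, 11.0)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(x, y):
    """Return the nearest class label and the distance to its centroid."""
    label, center = min(CENTROIDS.items(),
                        key=lambda kv: math.dist((x, y), kv[1]))
    return label, math.dist((x, y), center)

# An input near the training data is matched with a tiny distance.
print(classify(10.0, 1.5))   # close to the "car" examples

# A "fishing-line web" style input, far from anything seen in training,
# is still forced into one of the known classes -- the matcher has no
# built-in notion of "never seen before" unless a distance threshold
# is bolted on.
print(classify(50.0, 60.0))  # huge distance, but a confident-looking label
```

A real system would need something like the distance value here, an out-of-distribution score, to know when it should fall back to cautious behavior rather than trust its best match.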
When considering pranks (or outright sabotage), are autonomous vehicles highly vulnerable? Would the fishing-line stunt trigger an overreaction from the automatic braking, causing the occupants to experience collective heart attacks as the car attempts an emergency stop? Or could the system decide it was acceptable to sail through at a constant speed? Would these relatively lightweight strands even be discernible?
Street signs seem to be the easiest (and potentially the most consequential) targets. How would an autonomous vehicle react to a stop sign altered with something as simple as pieces of tape obscuring the lettering? Is it possible to use black tape to change the numbers on a speed limit sign to get a vehicle to go significantly faster or slower?
What about snow (or blizzard conditions, for that matter)? If a sudden flash flood crosses a roadway, should the vehicle stand still, retreat, or power through before the water becomes too deep to cross?
We regularly hear about the considerable effort required to train the models these vehicles use so they can manage everything they might encounter. These examples may represent rare possibilities, but it is easy to envision more. Scenarios like these make the naysayers seem correct when they predict that fully autonomous vehicles are far in the future (or possibly unattainable).