The_CUrE (./3872): Right now, it's more like the Boeing of cars.
Tesla is the Apple of cars, or am I dreaming?
The technique, dubbed GhostStripe [PDF], is undetectable to the human eye, but could be deadly to Tesla and Baidu Apollo drivers as it fools the type of sensors employed by both brands – specifically CMOS camera sensors.
It basically involves using LEDs to shine patterns of light on road signs so that the cars' self-driving software fails to understand the signs; it's a classic adversarial attack on machine-learning software.
Crucially, it exploits the rolling digital shutter of typical CMOS camera sensors. The LEDs rapidly flash different colors onto the sign as the active capture line moves down the sensor. For example, the shade of red on a stop sign could look different on each scan line to the car due to the artificial illumination.
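To make the rolling-shutter effect concrete, here is a minimal simulation sketch (not from the paper): a camera that reads out one sensor row at a time samples a rapidly flickering LED illumination at a different moment for each row, so a uniformly colored sign comes out striped. All resolutions, timings, and tint values below are illustrative assumptions, not GhostStripe's actual parameters.

```python
import numpy as np

ROWS, COLS = 480, 640          # sensor resolution (assumed)
LINE_READOUT_US = 30.0         # time to read one sensor row (assumed)
FLICKER_PERIOD_US = 1200.0     # LED color-switching period (assumed)

# Three illumination tints the LEDs cycle through (RGB multipliers).
tints = np.array([[1.0, 0.6, 0.6],   # reddish
                  [0.6, 1.0, 0.6],   # greenish
                  [0.6, 0.6, 1.0]])  # bluish

# The "true" scene: a uniform red stop-sign patch.
scene = np.tile(np.array([0.8, 0.1, 0.1]), (ROWS, COLS, 1))

# Each row is exposed at a different time, so it samples a different tint.
row_times = np.arange(ROWS) * LINE_READOUT_US
tint_index = ((row_times // FLICKER_PERIOD_US) % len(tints)).astype(int)
captured = scene * tints[tint_index][:, None, :]   # per-row color shift

# 'captured' is the striped image a downstream classifier would receive.
print(captured.shape, captured[0, 0], captured[ROWS // 2, 0])
```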
The result is the camera capturing an image full of lines that don't quite match each other. The information is cropped and sent to a classifier within the car's self-driving software, usually based on deep neural networks, for interpretation. Because it's full of lines that don't match, the classifier doesn't recognize the image as a traffic sign.
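A rough sketch of that downstream step, purely for illustration: the detected sign region is cropped out of the frame and handed to a classifier, which rejects the crop when it isn't confident. The classifier here is a hypothetical stand-in; the actual models in Tesla or Baidu Apollo stacks are not public.

```python
import numpy as np

def classify_sign(frame: np.ndarray, bbox: tuple[int, int, int, int],
                  sign_classifier, min_confidence: float = 0.5):
    """Crop the sign from the frame and classify it; return None when the
    classifier is not confident (e.g. because of rolling-shutter stripes)."""
    top, left, height, width = bbox
    crop = frame[top:top + height, left:left + width]
    label, confidence = sign_classifier(crop)      # e.g. ("stop", 0.93)
    return label if confidence >= min_confidence else None

# Toy stand-in classifier: a striped crop has high row-to-row variation,
# so it reports low confidence (illustrative only, not a real model).
def toy_classifier(crop: np.ndarray):
    row_means = crop.reshape(crop.shape[0], -1).mean(axis=1)
    striped = row_means.std() > 0.05
    return ("stop", 0.2 if striped else 0.9)
```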
All of this has been demonstrated before.
Yet these researchers not only distorted the appearance of the sign as described, they did so repeatedly and stably. That meant the unrecognizable image wasn't a single anomaly among many correctly identified frames; instead, the classifier was presented with a consistently unrecognizable image. That, it's said, is what makes the attack practical against autonomous vehicles.
"A stable attack … needs to carefully control the LED's flickering based on the information about the victim camera's operations and real-time estimation of the traffic sign position and size in the camera's [field of view]," the researchers explained.