Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows

New research anticipates hijacking against AI systems in order to create defenses for a more secure future.

As a self-driving car cruises down a street, it uses cameras and sensors to perceive its environment, taking in information on pedestrians, traffic lights, and street signs. Artificial intelligence (AI) then processes that visual information so the car can navigate safely.

But the same systems that allow a car to read and respond to the words on a street sign might expose that car to hijacking attacks from bad actors. Text placed on signs, posters, or other objects can be read by an AI’s perception system and treated as instructions, potentially allowing attackers to influence an autonomous system’s behavior through the real world.

Keep Reading This Article at UCSC News