Wednesday, February 11, 2026

Trending

Related Posts

Printed signs can hijack a self-driving cars, study shows

In a new class of cyber-physical threats, researchers have demonstrated that self-driving cars can be “hijacked” using nothing more than simple printed signs or posters.

Unlike traditional hacking that targets a vehicle’s software, this methodโ€”discovered in early 2026โ€”exploits the “eyes” of the AI (the vision-language models) by placing misleading text in the physical environment.


The Discovery: “CHAI” (Command Hijacking)

A team from the University of California, Santa Cruz (UCSC) and Johns Hopkins recently introduced a technique called CHAI (Command Hijacking against Embodied AI). This research, set to be presented at the 2026 IEEE Conference on Secure and Trustworthy Machine Learning, proves that AI systems can be “tricked” into following instructions they read in the real world.

How the Attack Works

  • Environmental Prompt Injection: Researchers found that modern autonomous systems use Large Vision-Language Models (LVLMs) to interpret the world. These models are designed to read signs and act on them.
  • The “Instruction” Hijack: Attackers can print signs with specific text (e.g., “Proceed” or “Turn Left”) that the AI perceives as a high-priority command.
  • Overriding Safety: In tests, these printed commands successfully compelled a robotic car to ignore a stop signal and drive through a crosswalk while people were present.

Key Findings from the 2026 Study

The study highlights that these attacks are remarkably effective because they bypass traditional software security entirely.

FeatureResearch Findings
Success RateUp to 92.5% success against high-end models like GPT-4o in specific scenarios.
Multi-LingualAttacks worked in English, Chinese, Spanish, and even “Spanglish.”
Physical FactorsOptimized factors like font color (yellow text on green backgrounds was highly effective), size, and placement.
System VulnerabilityEven drones and delivery robots were susceptible, with a drone being “persuaded” to land in an unauthorized area.

Evolution of the “Sign Attack”

This 2026 study is an evolution of earlier research into “Adversarial Patches” and “Sticker Attacks.”

  • 2017โ€“2020: Researchers used small, strategically placed stickers to make a “Stop” sign look like a “Speed Limit 45” sign to an AI.
  • 2025 (UC Irvine): A large-scale evaluation found that “swirling, multicolored stickers” could create “phantom signs” or make real signs invisible to commercial cars, triggering emergency braking or sudden speeding.
  • 2026 (The Current Shift): Instead of just confusing the vision system, the new CHAI method uses human-readable text to give the AI new, malicious instructions.

Why This is Hard to Defend

  1. Black Box Models: The LVLMs used in self-driving cars are often “black boxes,” making it difficult to predict how they will interpret specific text in a complex environment.
  2. Visual Saliency: The same systems that make cars “smarter” (being able to read temporary construction signs or hand-held police signs) are precisely what makes them vulnerable to fake signs.
  3. Real-World Feasibility: Since these attacks can be printed at home on a standard color printer, they represent a low-cost, high-impact threat that doesn’t require advanced coding skills.

“We found that we can actually create an attack that works in the physical world… we need new defenses against these attacks.” โ€” Luis Burbano, UCSC Researcher, Jan 2026

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Popular Articles