
Google launches new Gemini models that can run locally on robots


Google DeepMind has unveiled two new robotics AI models, Gemini Robotics and Gemini Robotics-ER, both built on Gemini 2.0. They are designed to run locally on robots and complete physical tasks without depending on the cloud.


🤖 What Is Gemini Robotics?

  • Vision-Language-Action model: Gemini Robotics interprets visual and language input and translates it into smooth physical actions, such as folding origami, unscrewing bottle caps, or packing lunchboxes, all executed onboard the robot.
  • Embodied reasoning: The Robotics-ER variant maps spatial environments, plans trajectories, and even generates low-level code to control the robot's movements (a conceptual sketch of this kind of interface follows this list).
  • Tested on platforms such as the bi-arm ALOHA 2 and Apptronik's Apollo humanoid robot.
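
The article does not describe a developer-facing interface, but conceptually a vision-language-action model takes a camera frame plus a natural-language instruction and returns an action for the robot to execute. The sketch below only illustrates that flow; GeminiRoboticsModel, its act() method, and the Action fields are hypothetical placeholders, not a published Google DeepMind API.

```python
# Hypothetical sketch of a vision-language-action (VLA) query.
# GeminiRoboticsModel, act(), and Action are illustrative placeholders,
# NOT a real Google DeepMind API.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    joint_targets: List[float]  # desired joint angles, in radians
    gripper_open: bool          # simple open/close gripper command

class GeminiRoboticsModel:
    """Stand-in for an on-robot VLA model checkpoint."""
    def act(self, image_rgb, instruction: str) -> Action:
        # A real system would run the multimodal model on the robot's
        # accelerator and decode the output into an executable action.
        raise NotImplementedError

def control_step(model: GeminiRoboticsModel, camera, instruction: str) -> Action:
    """One perception -> instruction -> action step, computed on the robot."""
    frame = camera.read()                 # current RGB observation
    return model.act(frame, instruction)  # e.g. "unscrew the bottle cap"
```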

🌟 Key Capabilities: Generality, Interactivity & Dexterity

  1. Generality: Works with unfamiliar objects and tasks; robots can follow instructions they have never seen before, such as “slam dunk a toy basketball” (source: blog.google).
  2. Interactivity: Adapts in real time to changing commands or environments.
  3. Dexterity: High-precision manipulation enables fine motor skills such as folding origami and careful object handling.

🔒 Running Locally on Robots

A major advancement is that Gemini Robotics and Robotics-ER can run on-device in real time, with no cloud connection required, giving robots offline intelligence. They process vision, language, spatial reasoning, and action entirely on their own onboard chips or accelerators. A rough sketch of what such an on-device loop looks like appears below.
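
To make the “no cloud needed” claim concrete, here is a minimal sketch of a local perception-action loop, reusing the hypothetical model interface from the earlier sketch. Every step runs on the robot’s own hardware and nothing is sent over the network; the robot, camera, and model interfaces are assumptions, not Google’s actual software stack.

```python
# Illustrative on-device control loop: sensing, inference, and actuation
# all happen on the robot's onboard compute; no network calls are made.
# The robot/model interfaces are hypothetical placeholders.
import time

def run_offline(robot, model, instruction: str, hz: float = 10.0) -> None:
    """Run a fixed-rate local control loop until the task is reported done."""
    period = 1.0 / hz
    while not robot.task_done():
        frame = robot.camera.read()             # on-board sensing
        action = model.act(frame, instruction)  # on-board inference
        robot.execute(action)                   # on-board actuation
        time.sleep(period)                      # crude fixed-rate pacing
```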


🔬 Safety & Responsible AI

Google emphasizes multi-layered safety:

  • Built-in evaluation to ensure safe action plans (a generic gating pattern is sketched after this list).
  • Compliance benchmarks such as its ASIMOV framework and a “Robot Constitution” modeled on Asimov’s laws.
  • Collaborations with trusted testers including Apptronik, Boston Dynamics, Agile Robotics, and more.
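
The article does not say how these layers are wired together, but a common pattern for “built-in evaluation” is to gate every proposed action plan through an independent safety check before the robot moves. The sketch below illustrates that generic pattern with hypothetical names; it is not the ASIMOV framework or Google’s actual implementation.

```python
# Generic plan-gating pattern: a separate safety evaluator must approve a
# proposed plan before execution. All names here are illustrative.
def execute_with_safety(planner, safety_checker, robot, instruction: str) -> str:
    plan = planner.propose(instruction)      # candidate action plan
    verdict = safety_checker.evaluate(plan)  # e.g. rule/"constitution" checks
    if not verdict.approved:
        robot.stop()                         # refuse unsafe plans outright
        return f"Plan rejected: {verdict.reason}"
    for action in plan.actions:
        robot.execute(action)                # execute only approved steps
    return "Plan completed"
```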

🚀 Why This Matters Now

  • Accelerates robotics innovation: Startups and labs save development time and cost by building on pre-trained embodied models.
  • Truly embodied AI: Models are no longer confined to digital realms; they now sense and physically act in real environments.
  • Potential for real-world robotics: Could transform manufacturing, homes, logistics, healthcare, and hospitality.

🔮 What’s Next

  • Expanded partner access: Early access programs for additional robotics teams.
  • Benchmark improvements: Ongoing work to refine safety and performance metrics across diverse tasks.
  • Future commercialization: While no immediate product launch has been announced, this lays the foundation for more capable general-purpose robots in the years ahead.
