
Google suggests AI should occasionally assign humans busywork so we do not forget how to do our jobs


On Tuesday, February 24, 2026, a provocative research paper from Google DeepMind titled “Saving Skills in the Age of AI” proposed that AI systems should occasionally assign humans “intentional busywork.”

The proposal aims to solve the “Automation Paradox”—the risk that humans will lose the core skills and intuition required to intervene during critical AI failures because machines have taken over all routine practice.


The Strategy: “Forced Engagement”

Rather than maximizing for pure efficiency, the DeepMind researchers suggest that AI agents should optimize for a composite reward function that balances task speed with human skill maintenance.

  • Deliberate Inefficiency: The AI would periodically “hand back” tasks to humans that it could easily solve itself.
  • Targeted Frequency: Simulations suggest that making just 5–10% of interactions manual can substantially reduce “skill atrophy” without tanking overall productivity.
  • Skill Thresholds: The AI would monitor a user’s performance and only trigger “busywork” when it detects that the human’s proficiency in a specific area is beginning to slide.
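To make the mechanism concrete, the three ideas above can be sketched as a simple hand-back policy. This is an illustrative reading of the proposal, not code from the paper: the function names, the weighting `alpha`, and the specific threshold and rate values are my assumptions, with the 5–10% band taken from the simulations described above.

```python
import random

# Hypothetical sketch (names and numeric values are assumptions, not
# from the paper): the agent scores each task with a composite reward
# that trades off speed against human skill upkeep, and "hands back"
# a small fraction of tasks when tracked proficiency slides.

HANDBACK_RATE = 0.07     # inside the paper's suggested 5-10% band
SKILL_THRESHOLD = 0.8    # proficiency floor that triggers busywork

def composite_reward(task_speed: float, skill_level: float,
                     alpha: float = 0.7) -> float:
    """Blend raw efficiency with skill maintenance (weights assumed)."""
    return alpha * task_speed + (1 - alpha) * skill_level

def should_hand_back(skill_level: float, rng: random.Random) -> bool:
    """Hand a task back only when proficiency is below threshold,
    and then only for a small fraction of interactions."""
    return skill_level < SKILL_THRESHOLD and rng.random() < HANDBACK_RATE

rng = random.Random(42)
rate = sum(should_hand_back(0.6, rng) for _ in range(10_000)) / 10_000
print(f"Handed back {rate:.1%} of tasks for a low-proficiency user")
```

Note that a proficient user (skill above the threshold) would never see busywork under this sketch, matching the paper’s claim that interventions trigger only when a skill is beginning to slide.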

Real-World Examples from the Paper

The researchers provided several scenarios where this “busywork protocol” could be applied:

| Field | AI Intervention (Busywork) | Goal |
| --- | --- | --- |
| Software Development | AI completes 95% of a script but requires the human to manually write the final validation steps. | Preserve debugging acumen and syntax knowledge. |
| Autonomous Driving | The system periodically disengages on a familiar, safe route to prompt the driver to navigate manually. | Keep reflexes sharp for edge-case emergencies. |
| Healthcare | A radiology AI might hide its findings until the doctor has manually marked potential abnormalities. | Prevent “over-reliance” and loss of diagnostic intuition. |
| Manufacturing | Robotic arms might pause during standard assemblies to require human oversight on routine joints. | Ensure technicians remain familiar with the physical mechanics. |

The “Moral Crumple Zone”

One of the most striking parts of the paper is the warning about “moral crumple zones.” This refers to a scenario where humans sit in a “delegation chain” simply to absorb legal liability when things go wrong, even if they no longer have the actual skills or control to prevent the error. By assigning busywork, DeepMind argues we can move from being “passive observers” back to being “competent partners.”

Reception and Criticism

The proposal has sparked a polarized debate among tech leaders:

  • The Efficiency Camp: Critics, including several venture capitalists, argue that “planned inefficiency” is a regressive tax on productivity that will only slow down innovation.
  • The Safety Camp: Supporters argue that as we move toward Artificial General Intelligence (AGI), the “deskilling” of the human race is a genuine existential threat that requires immediate structural safeguards.
