Tuesday, November 11, 2025


Google warns of new AI malware that rewrites its own code mid-attack

Google has issued a serious alert: the cybersecurity landscape is entering a new phase where malware doesn’t simply execute fixed code—it can rewrite its own code during execution, using AI capabilities. This “AI malware rewrites its own code” phenomenon marks a major escalation in cyber-threats.


In this article, we explore the key details of Google’s warning, how this kind of malware works, why it matters especially now, and what steps organisations and individuals should take.


What Google discovered

  • Google’s Threat Intelligence Group (GTIG) found that adversaries are no longer using AI only for background support tasks—now they are deploying novel AI-enabled malware in active operations.
  • For the first time, malware families such as PROMPTFLUX and PROMPTSTEAL have been identified that use large language models (LLMs) during their execution to dynamically generate malicious code, obfuscate themselves, and evade detection.
  • Example: PROMPTFLUX is a VBScript dropper that uses the Gemini API to prompt the LLM to rewrite its own source code, then saves the updated, obfuscated version into the Startup folder to persist on the system.
  • Another example: PROMPTSTEAL employs a Python script that uses the Hugging Face API and an LLM to generate one-line Windows commands on demand—rather than relying on hard-coded instructions.
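Because both families call public LLM APIs at runtime, the presence of such an endpoint inside an ordinary local script is itself a useful indicator. The following is a minimal defensive sketch, not a detection tool from Google's report: the hostnames, file extensions, and scan logic are all assumptions chosen for illustration.

```python
from pathlib import Path

# Assumed indicator list: hostnames of public LLM APIs that local scripts
# rarely have a legitimate reason to contact. Extend for your environment.
LLM_API_HOSTS = [
    "generativelanguage.googleapis.com",   # Gemini API
    "api-inference.huggingface.co",        # Hugging Face inference API
]

def find_llm_api_references(text: str) -> list:
    """Return the LLM API hostnames that appear in a script's source text."""
    return [host for host in LLM_API_HOSTS if host in text]

def scan_directory(root: str) -> dict:
    """Flag script files under `root` that embed an LLM API endpoint."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in {".vbs", ".ps1", ".py", ".js", ".bat"}:
            try:
                matches = find_llm_api_references(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            if matches:
                hits[str(path)] = matches
    return hits
```

A real deployment would pair a string scan like this with network-level monitoring, since attackers can trivially obfuscate hostnames in source code.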

Why this matters

1. Evasion of traditional security tools

Because the malware rewrites itself – changing its code structure and behaviour in real-time – signature-based and static detection systems become much less effective.
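The weakness is easy to demonstrate: signature databases typically key on a hash of the file contents, so even a trivial rewrite that preserves behaviour produces a sample the database has never seen. The VBScript snippets below are harmless stand-ins, invented for illustration, representing a dropper before and after one LLM-driven rewrite.

```python
import hashlib

# Two scripts with identical behaviour but different source text,
# standing in for a sample before and after a self-rewrite.
variant_a = 'Set s = CreateObject("WScript.Shell")\ns.Run "payload.exe"\n'
variant_b = 'Dim sh\nSet sh = CreateObject("WScript.Shell")\nsh.Run "payload" & ".exe"\n'

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A single cosmetic rewrite invalidates the stored signature.
print(sig_a == sig_b)  # prints "False"
```

This is why the recommendations later in this article centre on behavioural detection: what the code *does* changes far more slowly than what it *looks like*.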

2. Increased automation and adaptability

These threats demonstrate higher levels of automation: AI models serve as part of the attack’s logic, generating new code or functions on demand. This raises the bar for defenders.

3. Broadening attacker capability

Even less-skilled threat actors may gain access to sophisticated attack tools via underground AI-enabled toolkits, lowering the entry barrier to advanced malware.

4. Implications for global cybersecurity

For organisations (in India and abroad) this means threat vectors are evolving: beyond phishing and exploits alone, defenders now face dynamic, AI-assisted malware that can adapt, persist, and morph mid-attack.


What this means for Indian users & organisations

  • Indian enterprises (and government bodies) should treat this as a wake-up call: cybersecurity strategies must evolve from static protection to dynamic detection and response.
  • Small and medium businesses must recognise that adversaries may use AI-powered tooling not just for large-scale attacks but also for targeted operations.
  • For personal users in India: keep your OS, apps and antivirus up to date; be cautious of unusual behaviours like unknown startup items, odd file-changes or apps asking for elevated privileges.
  • Indian cybersecurity vendors may need to invest more in behavioural analytics, anomaly detection and AI-driven defence rather than relying solely on signature databases.

What organisations should do—key recommendations

  • Adopt behavioural detection rather than just signature-based tools. Look for anomalous behaviours (code rewriting, unexpected processes, unknown startup entries).
  • Monitor use of AI or LLM APIs in your network: if malware is querying an LLM during execution, that is unusual and suspicious.
  • Ensure endpoint protection can detect self-modifying code, persistent startup entries, and unusual network calls.
  • Educate staff about evolving threats: social engineering now may tie into AI-generated prompts or lures.
  • For developers and IT teams: apply “Secure-by-Design” for AI and internal models; ensure your AI/ML platforms themselves aren’t abused by adversaries.
  • Collaborate with law-enforcement and cybersecurity intelligence services for threat-sharing—this new threat may evolve rapidly.
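One of the concrete behaviours worth monitoring is the Startup folder persistence described above. A minimal sketch of a baseline check follows; the allowlist contents and folder path are illustrative assumptions, not a vendor's detection logic.

```python
import os

# Assumed organisational baseline of approved startup entries (lowercase).
APPROVED_STARTUP = {"onedrive.lnk", "teams.lnk"}

def unknown_startup_entries(entries, approved):
    """Return the entries that are not on the approved baseline."""
    return sorted(e for e in entries if e.lower() not in approved)

def audit_startup_folder(startup_dir: str, approved=APPROVED_STARTUP):
    """List unapproved items in a Startup folder; empty list if it is absent."""
    try:
        return unknown_startup_entries(os.listdir(startup_dir), approved)
    except FileNotFoundError:
        return []
```

On Windows the per-user folder to audit is typically `%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup`; an unexpected `.vbs` or `.lnk` appearing there is exactly the persistence pattern attributed to PROMPTFLUX.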

Caveats & what’s still unknown

  • While these malware samples are real, many are currently at an experimental stage and not yet widespread or fully destructive. Google says the strains identified so far are not yet capable of compromising major networks.
  • Full capability, scale, and deployment in the wild are still emerging. The timeline for when this becomes a broad threat is uncertain.
  • AI malware still needs infrastructure, command-and-control, and some human input; it’s not yet fully autonomous in many cases.
  • The balance of power between defenders and attackers is still evolving—defenders can leverage AI too to detect and mitigate these threats.

Conclusion

The warning that “AI malware rewrites its own code” marks a significant milestone in the evolution of cyber-threats. Google’s findings reveal a new frontier where malware is not static, but dynamic and self-modifying, powered by large language models.
For organisations and individuals—especially in India and emerging markets—it’s time to prepare: move beyond traditional protection models, adopt adaptive security strategies, monitor AI misuse, and invest in detection and response.
While the threat isn’t fully mature yet, the trajectory is clear—and the next wave of cyber defence will depend on staying ahead of AI-empowered adversaries.
