📰 1. What Happened
Mozilla and its community have demanded that Meta disable the AI Discover feed within the Meta AI app, claiming it is “quietly turning private AI chats into public content” without giving users proper notice, leading to private conversations being shared publicly without explicit, informed consent.
🛑 2. Privacy Risks Highlighted
- Blurred lines: Users expect privacy but may unknowingly publish deeply personal chats—e.g., medical or legal advice—believing their conversations remain private.
- Data collection: The AI Discover feed aggregates user prompts and shares them broadly, raising concerns about data misuse and a lack of transparency in how Meta handles these interactions.
📣 3. Mozilla’s Demands
The Mozilla community has outlined several key actions Meta should take:
- Shut down the Discover feed until strong privacy protections are in place
- Set AI conversations to private by default, with any public sharing requiring explicit user consent
- Reveal how many users may have unknowingly shared chats
- Offer a user-friendly opt-out system across all Meta platforms to prevent data use for AI training
- Notify affected users whose conversations may have been made public, and allow them to delete their content permanently
👥 4. Rights at Stake
Mozilla emphasizes that when users believe they are speaking privately, they have the right to know if those conversations become public. The current setup undermines trust and autonomy—key principles in Mozilla’s advocacy for a responsible digital ecosystem.
🧭 5. Wider Implications
- Erosion of trust: Users may lose faith in conversational AI because of unclear sharing defaults
- Regulatory risk: The controversy may prompt new calls for AI tools to comply with stricter privacy and transparency laws
- Setting a precedent: If Meta’s approach stands, other platforms may adopt similar defaults; conversely, Mozilla’s demands could shape industry privacy standards
✅ Summary
Firefox maker Mozilla is pushing back hard, urging Meta to disable the AI Discover feed and overhaul its consent and data-sharing practices. Its demands include private-by-default settings, clear opt-outs, transparency, and user notifications, all aimed at protecting individuals from unknowingly exposing their private AI conversations to the public.