Zepto has developed a multilingual spell correction system powered by Meta's Llama 3-8B model, capable of accurately fixing misspelled queries typed in Latin script across Indian vernacular languages. By addressing these real-world language challenges, the system improves search experiences in quick commerce and lifts conversion rates and customer satisfaction.
🔍 How It Works
- Model selection & hosting
Zepto chose Llama 3-8B after benchmarking several LLMs for accuracy and efficiency. The model is self-hosted via Databricks, enabling scalable, high-throughput usage without relying on costly external APIs.
- Instruct fine-tuning
The team used prompt engineering and "instruct tuning" (embedding system messages and few-shot examples) to guide the model through spelling correction, vernacular normalization, and translation, with structured JSON output for easy integration (a prompt sketch follows this list).
- Retrieval-Augmented Generation (RAG)
A vector DB retrieves contextually relevant product info (e.g., brand names) before the prompt is passed to Llama 3. This RAG layer keeps prompts short and boosts accuracy (a retrieval sketch follows this list).
- Self-learning from user edits
Zepto captures quick user search reformulations (e.g., "banan chips" → "banana chips") as implicit feedback that fine-tunes the model and enriches future prompt examples (a feedback-mining sketch follows this list).
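To make the instruct-tuning step concrete, here is a minimal Python sketch of the kind of prompt that could drive such a system: a system message, a few in-context examples, and a request for structured JSON. The endpoint URL, example queries, and response schema are illustrative assumptions, not Zepto's actual implementation.

```python
# Minimal sketch of an instruct-style spell-correction prompt, assuming an
# OpenAI-compatible chat endpoint in front of a self-hosted Llama 3-8B.
# MODEL_ENDPOINT, the examples, and the payload schema are hypothetical.
import json
import requests

MODEL_ENDPOINT = "https://example.internal/llama3-8b/chat"  # hypothetical URL

SYSTEM_PROMPT = (
    "You are a search query fixer for an Indian quick-commerce app. "
    "Correct spelling, normalize vernacular words written in Latin script, "
    "and translate them to English product terms. "
    'Reply ONLY with JSON: {"corrected_query": "...", "language": "..."}'
)

FEW_SHOT = [
    {"role": "user", "content": "banan chips"},
    {"role": "assistant", "content": '{"corrected_query": "banana chips", "language": "en"}'},
    {"role": "user", "content": "kanda 1kg"},  # Marathi word for onion, typed in Latin script
    {"role": "assistant", "content": '{"corrected_query": "onion 1kg", "language": "mr"}'},
]

def correct_query(raw_query: str) -> dict:
    """Send the misspelled query with system + few-shot messages and parse the JSON reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *FEW_SHOT,
                {"role": "user", "content": raw_query}]
    resp = requests.post(MODEL_ENDPOINT, json={"messages": messages, "temperature": 0.0})
    resp.raise_for_status()
    # Assumes an OpenAI-style response payload; adjust to the real serving schema.
    content = resp.json()["choices"][0]["message"]["content"]
    return json.loads(content)

if __name__ == "__main__":
    print(correct_query("amul buter 500g"))  # e.g. {"corrected_query": "amul butter 500g", ...}
```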
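The RAG layer can be sketched in the same spirit. The example below uses sentence-transformers embeddings and a FAISS index over a toy catalog as stand-ins; the article does not say which embedding model or vector DB Zepto actually uses.

```python
# Rough sketch of the retrieval step: embed catalog/brand terms, index them,
# and pull the closest matches for a (possibly misspelled) query so they can
# be injected into the prompt. Library choices and catalog terms are stand-ins.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

CATALOG_TERMS = ["Amul Butter", "Banana Chips", "Maggi Noodles", "Parle-G Biscuits"]  # toy catalog

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(CATALOG_TERMS, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

def retrieve_context(raw_query: str, k: int = 3) -> list[str]:
    """Return the k catalog terms closest to the query."""
    q = encoder.encode([raw_query], normalize_embeddings=True)
    _, idx = index.search(np.asarray(q, dtype="float32"), k)
    return [CATALOG_TERMS[i] for i in idx[0]]

# The retrieved terms are then added to the prompt, keeping it short and grounded:
# context = ", ".join(retrieve_context("amul buter"))
# user_msg = f"Known catalog terms: {context}\nQuery: amul buter"
```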
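Finally, the implicit feedback loop amounts to mining (misspelled, corrected) query pairs from session logs. The sketch below shows one plausible heuristic; the event schema, time gap, and similarity threshold are assumptions for illustration, not Zepto's pipeline.

```python
# Illustrative sketch of mining implicit feedback: when a user quickly retypes a
# query in the same session and then converts, the (original, reformulated) pair
# becomes a fine-tuning / few-shot candidate.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class SearchEvent:
    session_id: str
    query: str
    timestamp: float   # seconds since epoch
    converted: bool    # did this search lead to an add-to-cart or purchase

def mine_reformulations(events: list[SearchEvent],
                        max_gap_s: float = 30.0,
                        min_similarity: float = 0.6) -> list[tuple[str, str]]:
    """Pair consecutive searches in a session that look like spelling fixes."""
    pairs = []
    events = sorted(events, key=lambda e: (e.session_id, e.timestamp))
    for prev, curr in zip(events, events[1:]):
        same_session = prev.session_id == curr.session_id
        quick_retry = (curr.timestamp - prev.timestamp) <= max_gap_s
        similar = SequenceMatcher(None, prev.query, curr.query).ratio() >= min_similarity
        if same_session and quick_retry and similar and curr.converted:
            pairs.append((prev.query, curr.query))  # e.g. ("banan chips", "banana chips")
    return pairs
```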
🧠 Results & Impact
- Conversion rate lift
The spell correction system increased conversions on misspelled multilingual queries by 7.5%, thanks to improved query understanding.
- Scalable, cost-efficient deployment
Hosting via Databricks and leveraging instruct tuning and RAG helped Zepto achieve high performance at scale, without the expense of full model fine-tuning or reliance on external APIs (as reported by Moneycontrol).
✅ Why It Matters
| Benefit | Impact |
|---|---|
| Enhanced UX | Users can search in their native languages, with typos and transliterations fixed in real time. |
| Faster local commerce | Accurate search results reduce cart drop-off, directly supporting Zepto's 10-minute delivery promise. |
| Modern AI integration | Combines LLM power with retrieval systems for real-world e-commerce solutions. |
| Continuous learning | Implicit feedback loops keep the model up to date with evolving user language trends. |
🔭 What’s Next
- Broader AI-powered features: Zepto may apply this multilingual LLM pipeline to recommendations, product discovery, and customer support.
- Expanding languages: Adding more vernacular support in Latin and native scripts to serve India’s diverse user base.
- Preparation for IPO: Advanced AI tools like this—alongside Zepto Atom analytics—strengthen the company’s tech stack ahead of a planned public listing.
✅ Summary
Zepto's use of Meta's Llama 3 to build a multilingual, RAG-enhanced spell correction system is improving search accuracy and lifting conversions on misspelled multilingual queries by 7.5%. By combining instruct tuning with retrieval augmentation and user feedback loops, Zepto is setting a benchmark for AI-driven quick commerce, offering faster, smarter search to users across India's many languages.