Key highlights
30 percent reduction in manual effort
40 percent faster relevant response generation
40 percent higher accuracy in information delivery
Challenges
Differently-abled individuals had difficulty accessing relevant information published in formats such as PDFs, HTML pages, and websites.
Non-native speakers struggled to interact with the content because of limited support for speech commands and audio playback.
Solution
1. Generative AI and large language models (LLMs) retrieved relevant information based on user requirements and enabled natural language Q&A using OpenAI’s GPT models (see the retrieval sketch after this list).
2. Incorporated text-to-speech (TTS) and speech-to-text (STT) features so users can issue voice commands and listen to content in their preferred language (see the speech sketch after this list).
3. Ensured the relevance of generated information and maintained fidelity to the source documents.
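The first and third items describe a retrieval-grounded Q&A pattern: the model answers only from retrieved passages and cites the documents they came from. The Python sketch below illustrates that pattern under stated assumptions; the retrieval step, document structure, and model name are placeholders, not the customer's actual implementation.

# Illustrative sketch of retrieval-grounded Q&A with source citations.
# The retrieval step, document structure, and model name are assumptions;
# only the overall pattern (answer from retrieved context, cite sources)
# comes from the case study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_question(question: str, documents: list[dict]) -> str:
    """Answer a question from retrieved snippets, citing each source."""
    # `documents` stands in for the output of a retrieval step over the
    # source PDFs/HTML, e.g. [{"source": "faq.pdf", "text": "..."}].
    context = "\n\n".join(f"[{d['source']}]\n{d['text']}" for d in documents)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder GPT model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer strictly from the provided context and cite the "
                    "bracketed source name after each statement. If the "
                    "context does not contain the answer, say so."
                ),
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

Grounding the prompt in retrieved snippets and instructing the model to cite the bracketed source names is what keeps answers tied to the original documents and makes per-response citations possible.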
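The second item adds a speech layer on top of the Q&A flow. The sketch below shows one way to wire it up; the case study does not name the specific STT/TTS services, so the use of OpenAI's Whisper and TTS endpoints, the model names, and the voice are assumptions for illustration only.

# Illustrative speech layer: transcribe a spoken query, then read the
# generated answer back aloud. Whisper/TTS model and voice choices are
# assumptions; the case study only states that STT and TTS were added.
from openai import OpenAI

client = OpenAI()


def transcribe_query(audio_path: str) -> str:
    """Speech-to-text: turn a recorded voice command into a text query."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",  # multilingual transcription model
            file=audio_file,
        )
    return transcript.text


def speak_answer(answer_text: str, output_path: str = "answer.mp3") -> str:
    """Text-to-speech: render the answer as an audio file for playback."""
    speech = client.audio.speech.create(
        model="tts-1",  # placeholder TTS model
        voice="alloy",  # placeholder voice
        input=answer_text,
    )
    with open(output_path, "wb") as f:
        f.write(speech.content)  # binary audio payload
    return output_path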
Impact
Faster query handling
Delivered quicker and more accurate responses for efficient information retrieval.
Improved transparency
Increased reliability and trust by including citations with each response.
Enhanced accessibility
Provided multilingual and TTS/STT support for users who speak different languages and for individuals with visual impairments.