AI Core & Intelligence Layer
The AI core integrates multiple leading large language models (LLMs) via APIs, including Grok, Gemini, Perplexity, Claude, and the OpenAI API, orchestrated through a custom routing layer that selects the optimal model based on query type, cost, and performance metrics.
Multi-LLM approach: More robust than relying on a single model, since it mitigates risks such as API downtime and model-specific biases.
Grok: Excels in creative reasoning for history and language subjects.
Gemini: Multimodal processing for visual aids.
Perplexity: Real-time knowledge retrieval for science queries.
OpenAI API: General conversational depth with fine-tuned educational prompts.
Dynamic switching: With load balancers and fallback mechanisms, Ayah achieves 99.9% uptime and reduces inference costs by 30-40% through competitive pricing across providers.
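The routing-and-fallback behavior described above can be sketched in Node.js as follows. The provider names, per-token costs, and topic assignments are illustrative placeholders, not Ayah's actual configuration, and the provider call is left abstract:

```javascript
// Minimal sketch of a routing layer with cost-aware selection and fallback.
// Costs and topic specialties below are hypothetical examples.
const providers = [
  { name: "grok",       topics: ["history", "language"], costPerToken: 2 },
  { name: "perplexity", topics: ["science"],             costPerToken: 3 },
  { name: "openai",     topics: [],                      costPerToken: 4 }, // generalist fallback
];

// Pick the cheapest provider whose specialty matches the query topic;
// providers with an empty topic list act as generalists.
function routeQuery(topic) {
  return providers
    .filter(p => p.topics.includes(topic) || p.topics.length === 0)
    .sort((a, b) => a.costPerToken - b.costPerToken)[0].name;
}

// Try the routed provider first, then fall through the remaining providers
// on failure (e.g. API downtime). callProvider is an injected API client.
async function askWithFallback(topic, prompt, callProvider) {
  const ordered = [...new Set([routeQuery(topic), ...providers.map(p => p.name)])];
  for (const name of ordered) {
    try {
      return await callProvider(name, prompt);
    } catch (err) {
      // provider unavailable; try the next one
    }
  }
  throw new Error("All providers failed");
}
```

In this sketch the fallback order is simply "routed provider first, then everyone else"; a production load balancer would also weigh live latency and error-rate metrics.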
Custom prompt engineering & RAG: Retrieval-Augmented Generation ensures responses are accurate, age-appropriate, and pedagogically sound.
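A toy illustration of the RAG pattern: retrieve the most relevant stored passage for a query, then ground the prompt in it. This sketch uses naive keyword overlap for scoring; a real system would use embedding similarity over a vector store, and the sample documents are invented:

```javascript
// Toy RAG sketch: keyword-overlap retrieval over a tiny document store,
// then a prompt assembled from the top match. Illustrative only.
const docs = [
  { id: 1, text: "Photosynthesis converts sunlight into chemical energy." },
  { id: 2, text: "The water cycle moves water through evaporation and rain." },
];

// Score each document by how many query terms it contains; return top k.
function retrieve(query, store, k = 1) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return store
    .map(d => ({
      doc: d,
      score: terms.filter(t => d.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(s => s.doc);
}

// Ground the LLM prompt in retrieved context so answers stay accurate
// and age-appropriate.
function buildPrompt(query, store) {
  const context = retrieve(query, store).map(d => d.text).join("\n");
  return `Answer for a young learner using only this context:\n${context}\nQuestion: ${query}`;
}
```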
Adaptive algorithms: Built with machine learning libraries in Node.js, these analyze user performance in real time to adjust difficulty, applying spaced repetition (modeled on the Leitner system) for optimal memory retention; studies of spaced repetition consistently report large gains in long-term recall.
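The Leitner-style scheduling mentioned above reduces to a simple rule: a card moves up one box on a correct answer and drops back to box 1 on a miss, with higher boxes reviewed at longer intervals. A minimal sketch, with illustrative intervals rather than Ayah's actual schedule:

```javascript
// Leitner-style spaced repetition: correct answers promote a card one box,
// misses demote it to box 1. Interval lengths here are illustrative.
const INTERVAL_DAYS = [1, 2, 4, 8, 16]; // review gap for boxes 1..5

function review(card, wasCorrect) {
  const box = wasCorrect ? Math.min(card.box + 1, INTERVAL_DAYS.length) : 1;
  return { ...card, box, nextReviewInDays: INTERVAL_DAYS[box - 1] };
}
```

Because struggling cards reset to box 1, they resurface daily until mastered, while well-known cards fade to infrequent review.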