neural/
Documentation

Early Access / In Development

Anant 1.0 - Technical Architecture

A cognitive memory system modeled after the human brain. Not a bigger context window - a fundamentally different approach to AI memory.

The Memory Engine

Anant's core innovation is a memory engine that processes conversations the way the human brain processes experiences. Every message goes through four stages:

1. Extraction

An LLM-based pipeline pulls structured facts, entities, relationships, emotions, and preferences from natural conversation. Ten diverse few-shot examples ensure accuracy across English, Hindi, and Hinglish.

2. Verification & Quality Gate

Extracted facts are verified against the original message, then scored by a quality gate with calibrated examples. Facts that don't meet the threshold are rejected. Injection attempts are detected and blocked at this stage.

3. Contradiction Resolution & Temporal Versioning

New facts are checked against existing knowledge. Contradictions trigger temporal versioning: “Worked at Google (March–June)” becomes “Works at Microsoft (June–present)”. History is preserved, not overwritten.

4. Storage & Linking

Facts are stored with confidence scores, belief states (known / believed / inferred / speculated), permanence levels, and embeddings. Similar memories are linked or merged using cosine similarity thresholds.


The Living Knowledge Graph

Every entity mentioned in conversation - people, places, organizations, concepts - becomes a node in the user's knowledge graph. Relationships between entities are stored as typed, weighted edges. The graph supports:

  • Multi-type relationships: Raj can be both your colleague AND your gym partner.
  • Coreference resolution: “mom”, “maa”, “mother”, and “Sunita” resolve to one person.
  • Cross-script merging: “पापा” and “papa” are recognized as the same entity.
  • Transitive inference: If A knows B and B knows C, A might know C.
  • Emotional importance weighting: Entities mentioned with strong emotions rank higher than frequently mentioned but emotionally neutral ones.

Memory Dreams - Inspired by CLS Theory

McClelland, McNaughton, and O'Reilly showed that the brain consolidates memories during sleep through Complementary Learning Systems. Anant implements this as Memory Dreams - a daily reflection cycle that:

  • Consolidates similar memories to prevent graph bloat.
  • Applies confidence decay based on the Ebbinghaus forgetting curve (half-life varies by permanence: 7 days for ephemeral, 90 for medium-term, 365 for long-term).
  • Infers transitive relationships between people (person-to-person, not keyword-based).
  • Detects knowledge gaps (“you mentioned a sister but never said her name”) and generates follow-up questions.
  • Re-ranks entity importance based on emotional weight, not just mention frequency.
  • Generates follow-up questions about unresolved events (“how did the interview go?”).

4-Channel Retrieval with RRF Fusion

When you ask a question, Anant doesn't just search text. Four retrieval channels run in parallel:

  • Semantic search - pgvector cosine similarity on 384-dimensional embeddings.
  • Keyword search - PostgreSQL full-text search with English and Simple (Hindi/Hinglish) tokenizers.
  • Knowledge graph traversal - Recursive CTE walks relationships up to 2 hops deep.
  • Temporal retrieval - Surfaces recent and time-relevant memories.

Results are fused using Reciprocal Rank Fusion, re-ranked by a cross-encoder, and filtered by an LLM context selector. Multi-hop confidence is discounted by hop count (0.85^n) so deep inferences are presented with appropriate uncertainty.


Epistemic Awareness - The AI That Knows What It Doesn't Know

Every memory has a belief state:

  • Known - directly stated by the user (“I work at Razorpay”).
  • Believed - was known but hasn't been mentioned in a while (confidence decayed).
  • Inferred - derived from other facts (“you might know Priya through Raj”).
  • Speculated - very low confidence guess.

When Anant responds, it tags its sources: “You told me you work at Razorpay” vs “Based on what you've shared, it seems like you might know Priya through Raj.” It never presents guesses as facts.


Proactive Intelligence - The For You Page

Every day, Anant runs six analysis passes over everything it knows about you: emotional trends, cross-domain insights, relationship dynamics, goal tracking, blind spots, and actionable next steps. Each pass feeds its output into the next, creating a coherent briefing.

The result is an AI that doesn't just wait for questions - it notices patterns, surfaces upcoming events, and checks in on things that matter.


Native Hindi and Hinglish Support

India has 400 million Hindi speakers who code-switch between Devanagari and Roman script constantly. Anant handles this natively:

  • Keyword retrieval falls back to Simple tokenization for non-English text.
  • Entity resolution merges across scripts (चाचा ↔ chacha ↔ uncle).
  • Coreference works across languages (मम्मी ↔ mummy ↔ mom ↔ mother).
  • The LLM responds in whatever language the user writes in.

Prompt Injection Defense

User input is wrapped in XML boundary tags before entering any LLM prompt. The extraction pipeline has an independent injection detection layer - the quality gate evaluates whether content is genuine personal sharing or a manipulation attempt.

In testing, 7 out of 8 sophisticated injection attacks were fully blocked, including identity override, system prompt extraction, memory injection, cross-user data theft, deletion attempts, jailbreak roleplay, and gradual identity erosion.

End of Documentation