Complete Project Description: Dr.Doc
Project Overview
Dr.Doc is a hybrid intelligence platform that fuses neural RAG (Retrieval-Augmented Generation) with symbolic MeTTa reasoning, creating a developer-centric AI assistant that delivers both deep contextual understanding and precise technical accuracy.
Mission Statement
"To transform developer productivity by merging the power of neural and symbolic AI, offering accurate, contextual, and verifiable support for API documentation and development workflows."
Key Features & Innovations
Hybrid Intelligence (Core USP)
- Neural RAG: BGE embeddings + PostgreSQL PgVector for semantic search and understanding
- Symbolic MeTTa: Structured reasoning for precise pattern matching and logical inference
- Unified Pipeline: Neural and symbolic layers work together for robust, well-grounded answers
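The sketch below illustrates one way the unified pipeline could fuse the two evidence streams into a single grounded prompt context. The function and field names are illustrative assumptions, not Dr.Doc's actual API.

```python
# Sketch: fusing neural (RAG) chunks and symbolic (MeTTa) facts into one
# prompt context. All names below are illustrative assumptions.

def fuse_context(rag_chunks: list[str], metta_facts: list[str]) -> str:
    """Combine retrieved documentation chunks with symbolic facts so the
    LLM sees both fuzzy semantic context and exact, verifiable statements."""
    doc_part = "\n\n".join(f"[doc {i + 1}] {c}" for i, c in enumerate(rag_chunks))
    fact_part = "\n".join(f"[fact] {f}" for f in metta_facts)
    return (
        "Documentation context:\n" + doc_part +
        "\n\nVerified facts (symbolic knowledge base):\n" + fact_part
    )
```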
ASI:One Integration
- Fetch.ai ASI:One Mini: Advanced LLM powering response generation
- Context-Aware Prompting: Optimized prompts enriched by hybrid intelligence (see the call sketch after this list)
- Real-Time Processing: Sub-second response times
Developer-First Experience
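A minimal sketch of the response-generation call, assuming ASI:One Mini is reachable through an OpenAI-compatible chat-completions endpoint. The URL, model id, and `ASI_ONE_API_KEY` variable are assumptions, not confirmed project configuration.

```python
# Sketch of calling ASI:One Mini with a context-enriched prompt.
# Endpoint URL, model id, and env-var name are assumptions.
import os
import requests

def generate_answer(question: str, context: str) -> str:
    resp = requests.post(
        "https://api.asi1.ai/v1/chat/completions",   # assumed endpoint
        headers={"Authorization": f"Bearer {os.environ['ASI_ONE_API_KEY']}"},
        json={
            "model": "asi1-mini",                    # assumed model id
            "messages": [
                {"role": "system",
                 "content": "Answer using only the provided context and cite sources."},
                {"role": "user",
                 "content": f"{context}\n\nQuestion: {question}"},
            ],
            "temperature": 0.2,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```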
- Modern Interface: Built with Next.js and Tailwind CSS, offering a sleek real-time chat UI
- Rich Formatting: Markdown support, syntax highlighting, and interactive citations
- Session Control: Persistent conversation history and backend health monitoring
Cost-Optimized Architecture
- Free Embeddings: Local BGE model eliminates external embedding API costs (see the sketch after this list)
- Self-Contained Stack: Minimal external dependencies beyond ASI:One
- Scalable Core: PostgreSQL + PgVector built for production-grade performance
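Because the BGE model runs locally via sentence-transformers, embedding generation incurs no per-request API cost. A minimal sketch follows; the exact BGE variant used by Dr.Doc is an assumption here.

```python
# Sketch: generating embeddings locally with a BGE model through
# sentence-transformers, so no paid embedding API is involved.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # 768-dim output (assumed variant)

def embed(texts: list[str]) -> list[list[float]]:
    # normalize_embeddings=True makes dot product equal cosine similarity
    return model.encode(texts, normalize_embeddings=True).tolist()
```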
Project Architecture Overview
High-Level System Design
Dr.Doc follows a layered architecture in which hybrid intelligence combines neural RAG with symbolic MeTTa reasoning.
Core Architecture Principles
1. Hybrid Intelligence Design
- Neural Layer: Semantic understanding and contextual reasoning
- Symbolic Layer: Precise pattern recognition and logical reasoning
- Unified Interface: Combines both approaches for superior results
2. Microservices Architecture
- Frontend Service: Next.js user interface and chat management
- Backend Service: Python uAgents with HTTP API endpoints
- Data Service: PostgreSQL with PgVector for knowledge storage
- Intelligence Service: BGE embeddings + MeTTa reasoning
System Components
Frontend Layer
- Technology: Next.js with React and TypeScript
- Purpose: User interface, real-time chat, session management
- Communication: HTTP API calls to backend
Backend Layer
- Technology: Python with uAgents framework
- Purpose: Agent orchestration, request processing, response generation
- Features: HTTP API endpoints, uAgent communication, hybrid intelligence coordination
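A minimal sketch of the backend agent layer using the uAgents framework. The message schema, seed, port, and handler logic are assumptions for illustration, not Dr.Doc's actual models; the real handler would invoke the hybrid retrieval and ASI:One components.

```python
# Sketch of a uAgents backend agent handling question messages.
from uagents import Agent, Context, Model

class Question(Model):
    text: str

class Answer(Model):
    text: str

agent = Agent(
    name="drdoc_backend",                      # assumed agent name
    seed="drdoc demo seed",                    # placeholder seed
    port=8001,
    endpoint=["http://localhost:8001/submit"],
)

@agent.on_message(model=Question)
async def handle_question(ctx: Context, sender: str, msg: Question):
    # In the real pipeline this would run hybrid retrieval + ASI:One;
    # here we just echo to keep the sketch self-contained.
    await ctx.send(sender, Answer(text=f"Received: {msg.text}"))

if __name__ == "__main__":
    agent.run()
```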
Intelligence Layer
- Neural Component: BGE embeddings with vector similarity search
- Symbolic Component: MeTTa knowledge base with pattern matching
- Integration: Unified query processing combining both approaches
Data Layer
- Vector Database: PostgreSQL with PgVector extension
- Document Storage: Structured text and metadata storage
- Knowledge Base: MeTTa atoms and pattern definitions
Data Flow Architecture
Ingestion Pipeline
- Document Processing: Markdown files converted to structured text
- Fact Extraction: MeTTa patterns extracted from documentation
- Embedding Generation: BGE model creates vector representations
- Storage: Data stored in PostgreSQL with vector indexing
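The ingestion steps above could be wired together roughly as follows: chunk the markdown files, embed them with the local BGE model, and insert rows into PostgreSQL/PgVector. The table and column names, chunk size, and connection string are assumptions for illustration.

```python
# Sketch of the ingestion flow: chunk markdown docs, embed with BGE,
# store rows in PostgreSQL/PgVector. Names and sizes are assumptions.
import glob
import psycopg2
from psycopg2.extras import Json
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

def chunk(text: str, size: int = 800) -> list[str]:
    # Naive fixed-size chunking; the real pipeline may split more carefully.
    return [text[i:i + size] for i in range(0, len(text), size)]

conn = psycopg2.connect("dbname=drdoc user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur:
    for path in glob.glob("docs/**/*.md", recursive=True):
        with open(path, encoding="utf-8") as f:
            chunks = chunk(f.read())
        vectors = model.encode(chunks, normalize_embeddings=True).tolist()
        for text, vec in zip(chunks, vectors):
            cur.execute(
                "INSERT INTO documents (content, metadata, embedding) "
                "VALUES (%s, %s, %s::vector)",
                (text, Json({"source": path}), str(vec)),
            )
```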
Query Processing Pipeline
- User Input: Natural language questions from frontend
- Dual Processing: Both neural and symbolic systems activated
- Context Assembly: Retrieved documents and patterns combined
- Response Generation: ASI:One processes enhanced context
- Output Formatting: Structured response with citations
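An orchestration sketch of the five steps above. The helper names (`vector_search`, `metta_match`, `fuse_context`, `generate_answer`) are placeholders standing in for the components sketched elsewhere in this document, passed in as parameters to keep the sketch self-contained.

```python
# Orchestration sketch of the query-processing pipeline described above.
def answer(question: str,
           vector_search, metta_match, fuse_context, generate_answer) -> dict:
    # 1-2. Dual processing: neural retrieval and symbolic matching
    chunks = vector_search(question)        # e.g. [(content, source), ...]
    facts = metta_match(question)           # e.g. ["fact string", ...]
    # 3. Context assembly
    context = fuse_context([c for c, _ in chunks], facts)
    # 4. Response generation via ASI:One
    text = generate_answer(question, context)
    # 5. Output formatting with citations for the frontend
    return {"answer": text,
            "citations": sorted({src for _, src in chunks})}
```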
Intelligence Architecture
Neural Intelligence (RAG)
- Embedding Model: BGE for semantic understanding
- Vector Search: Cosine similarity matching in 768-dimensional space (query sketch after this list)
- Strengths: Contextual understanding, fuzzy matching, semantic search
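The neural retrieval step can be expressed as a cosine-distance query using pgvector's `<=>` operator. Table and column names are assumptions; the query vector comes from the BGE embedding of the user's question.

```python
# Sketch of the RAG retrieval step: cosine-distance search with pgvector.
import psycopg2

def vector_search(conn, query_vec: list[float], k: int = 5):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT content, metadata, 1 - (embedding <=> %s::vector) AS score "
            "FROM documents ORDER BY embedding <=> %s::vector LIMIT %s",
            (str(query_vec), str(query_vec), k),
        )
        return cur.fetchall()
```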
Symbolic Intelligence (MeTTa)
- Knowledge Base: Structured facts and patterns as MeTTa atoms
- Pattern Matching: Logical reasoning and rule-based inference (see the MeTTa sketch after this list)
- Strengths: Exact matching, logical consistency, verifiable facts
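A small example of symbolic lookup with the Hyperon MeTTa Python bindings. The atoms loaded here are made-up API-documentation facts, not Dr.Doc's actual knowledge base; they only show how pattern matching over MeTTa atoms yields exact, verifiable answers.

```python
# Sketch of pattern matching over a MeTTa knowledge base via Hyperon.
from hyperon import MeTTa

metta = MeTTa()
# Top-level (non-!) expressions are added to the space as facts.
metta.run('''
    (endpoint get-user (method GET) (path "/users/{id}"))
    (endpoint create-user (method POST) (path "/users"))
''')

# Pattern match: which endpoints use the GET method?
result = metta.run('!(match &self (endpoint $name (method GET) $path) $name)')
print(result)  # e.g. [[get-user]]
```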
Hybrid Integration
- Parallel Processing: Both systems query simultaneously (see the sketch after this list)
- Context Fusion: Neural context combined with symbolic patterns
- Enhanced Prompting: LLM receives both types of intelligence
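One way to run the two retrievals concurrently is a small thread pool, as sketched below; the two worker callables are placeholders for the RAG and MeTTa components.

```python
# Sketch: running neural and symbolic retrieval in parallel.
from concurrent.futures import ThreadPoolExecutor

def hybrid_retrieve(question: str, vector_search, metta_match):
    with ThreadPoolExecutor(max_workers=2) as pool:
        rag_future = pool.submit(vector_search, question)
        metta_future = pool.submit(metta_match, question)
        return rag_future.result(), metta_future.result()
```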
Agent Architecture
ASI:One Agent
- Role: Primary agent for user interaction and response generation
- Integration: Direct access to hybrid intelligence systems
- Communication: HTTP API and uAgent messaging protocols
System Integration
- RAG System: BGE embeddings + PostgreSQL vector search
- MeTTa System: Hyperon MeTTa engine for symbolic reasoning
- Unified Processing: Combined neural and symbolic intelligence
Infrastructure Architecture
Database Design
- Primary Database: PostgreSQL with PgVector extension
- Schema: Documents table with content, metadata, and vector embeddings
- Scalability: Horizontal scaling with connection pooling
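A possible DDL for the documents table described above; the column names, the 768-dimension size (BGE-base), and the index choice are assumptions about the exact schema.

```python
# Sketch of the documents schema on PostgreSQL + PgVector.
import psycopg2

conn = psycopg2.connect("dbname=drdoc user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id        BIGSERIAL PRIMARY KEY,
            content   TEXT NOT NULL,
            metadata  JSONB DEFAULT '{}'::jsonb,
            embedding VECTOR(768)
        )
    """)
    # Approximate-nearest-neighbour index for cosine distance
    # (HNSW requires pgvector >= 0.5; ivfflat is an alternative).
    cur.execute("""
        CREATE INDEX IF NOT EXISTS documents_embedding_idx
        ON documents USING hnsw (embedding vector_cosine_ops)
    """)
```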
Deployment Architecture
- Containerization: Docker containers for consistent deployment
- Service Discovery: Environment-based configuration
- Monitoring: Health checks and status monitoring
Scalability Design
Horizontal Scaling
- Stateless Services: Backend services can scale independently
- Load Distribution: Multiple agent instances for high availability
- Performance Optimization: Lazy loading, batch processing, connection pooling
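Connection pooling in the data layer could look like the sketch below, using psycopg2's built-in pool; the DSN and pool sizes are placeholders.

```python
# Sketch of connection pooling for the data layer.
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(minconn=1, maxconn=10,
                              dsn="dbname=drdoc user=postgres")  # placeholder DSN

def run_query(sql: str, params=()):
    conn = pool.getconn()
    try:
        with conn, conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        pool.putconn(conn)
```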
Integration Architecture
External Integrations
- ASI:One API: Fetch.ai's LLM service integration
- BGE Model: Local embedding model for cost efficiency
- Hyperon MeTTa: Symbolic reasoning engine integration
Internal Communication
- HTTP APIs: RESTful communication between services (client-side sketch after this list)
- Agent Messaging: uAgent protocol for agent-to-agent communication
- Error Handling: Comprehensive error handling and reporting
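From the client side, the frontend-to-backend contract might resemble the call below. The /chat path, port, and payload shape are hypothetical and only illustrate the RESTful service boundary.

```python
# Sketch of a client-side call across the RESTful service boundary.
# The endpoint path and payload fields are hypothetical.
import requests

resp = requests.post(
    "http://localhost:8000/chat",                      # hypothetical endpoint
    json={"question": "How do I authenticate against the API?",
          "session_id": "demo-session"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data.get("answer"), data.get("citations"))
```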
User Experience Architecture
Real-Time Interaction
- WebSocket Support: Real-time chat interface
- Session Management: Persistent user sessions
- Status Monitoring: Live backend status and connection monitoring
Response Architecture
- Rich Formatting: Markdown rendering with syntax highlighting
- Citation System: Clickable links to source documentation
- Accessibility: Screen reader support and keyboard navigation
Together, these layers deliver hybrid intelligence on a scalable, maintainable platform that unites neural and symbolic AI.