# Interactive Demo UI

**Time to Run:** 5 minutes | **What You'll See:** Live RAG agent in action
## 🎯 Why Run the Demo?

**The Problem:** Reading docs is helpful, but you learn best by trying it yourself. You want to:
- See RecoAgent in action with real queries
- Understand what hybrid retrieval actually looks like
- Experience the user interface
- Try evaluation metrics live
- See response times and quality
**The Demo Solves:** Interactive, hands-on experience with a working RAG system in your browser.

## ⚡ TL;DR - Run in 2 Minutes

```bash
cd apps/demo_ui
pip install -r requirements.txt
export OPENAI_API_KEY="your-key"  # Optional - works in mock mode too
python app.py
# Open browser: http://localhost:8080
```

**What You'll See:** Full Q&A interface with 3 knowledge bases, live evaluation, and conversation history!
## 🖥️ What the Demo Looks Like

```text
┌─────────────────────────────────────────────────────────┐
│ RecoAgent Interactive Demo                              │
├─────────────────────────────────────────────────────────┤
│                                                         │
│ Knowledge Base: [IT Support ▼]          [Run Evaluation]│
│                                                         │
│ ┌───────────────────────────────────────────────────┐   │
│ │ Your Question:                                    │   │
│ │ How do I reset my password?                       │   │
│ └───────────────────────────────────────────────────┘   │
│ [Ask] ← Click                                           │
│                                                         │
│ ┌── Response (850ms, $0.012) ───────────────────────┐   │
│ │ To reset your password:                           │   │
│ │ 1. Go to the login page                           │   │
│ │ 2. Click "Forgot Password"                        │   │
│ │ 3. Enter your email                               │   │
│ │ 4. Check email for reset link                     │   │
│ │                                                   │   │
│ │ Sources: IT_Support_Guide.pdf (p. 12)             │   │
│ │ Confidence: 0.92                                  │   │
│ │ Retrieved: 5 docs, Used: 2 docs                   │   │
│ └───────────────────────────────────────────────────┘   │
│                                                         │
│ [View Conversation History]                    [Clear]  │
└─────────────────────────────────────────────────────────┘
```
## 🎮 Interactive Features

### 1. Live Q&A - Type and Get Instant Answers

Try these questions:

- "How do I reset my password?" (simple retrieval)
- "I have VPN and email issues, help!" (multi-step reasoning)
- "What is hybrid search?" (conceptual explanation)

You'll see:

- Response time (e.g., 850ms)
- Cost per query (e.g., $0.012)
- Source documents cited
- Confidence score
- Retrieved vs. used document count
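The per-answer metadata above maps naturally onto a small JSON payload. Here is a minimal sketch of formatting it for display; the field names are assumptions for illustration, not the demo's actual response schema:

```python
# Format the metadata the demo UI displays next to each answer.
# The payload shape below is a hypothetical example, not the demo's real schema.
def format_response_meta(payload: dict) -> str:
    parts = [
        f"{payload['latency_ms']}ms",
        f"${payload['cost_usd']:.3f}",
        f"confidence {payload['confidence']:.2f}",
        f"{payload['retrieved']} retrieved / {payload['used']} used",
    ]
    return " | ".join(parts)

example = {
    "latency_ms": 850,
    "cost_usd": 0.012,
    "confidence": 0.92,
    "retrieved": 5,
    "used": 2,
}
print(format_response_meta(example))
# -> 850ms | $0.012 | confidence 0.92 | 5 retrieved / 2 used
```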
### 2. Three Knowledge Bases - Switch Between Domains

| Knowledge Base | Documents | Example Questions | Use Case |
|---|---|---|---|
| IT Support | 15 IT docs | Password reset, VPN issues | Internal helpdesk |
| Product Docs | 12 RecoAgent docs | Feature questions, integration | Product documentation |
| Technical FAQs | 20 technical docs | RAG concepts, deployment | Developer docs |
### 3. Live Evaluation - See RAGAS Metrics

Click "Run Evaluation" to test on 10 queries:

```text
Evaluation Results:
─────────────────────────────
Context Precision: 0.82  ✅
Context Recall:    0.75  ✅
Faithfulness:      0.88  ✅
Answer Relevancy:  0.85  ✅
─────────────────────────────
Avg Response Time: 920ms
Total Cost:        $0.09
Success Rate:      90%
```
### 4. Conversation History - Track Your Session
See all your questions and answers in a timeline view.
## 🏗️ Demo Architecture

The demo application demonstrates RecoAgent's full capabilities:

- **RAG-based Question Answering** - Ask questions about your knowledge base
- **Hybrid Retrieval** - Combines BM25 and vector search for better results
- **Multi-step Reasoning** - Handles complex questions that require multiple steps
- **Custom Tools Integration** - Extend agents with new capabilities
- **Real-time Evaluation** - Measure and improve system performance
- **Safety Guardrails** - Input/output filtering and PII detection
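To make hybrid retrieval concrete, here is a minimal sketch of α-weighted score fusion between BM25 and vector similarity. This illustrates the general technique, not RecoAgent's internal implementation; scores are min-max normalized before blending:

```python
# Blend keyword (BM25) and semantic (vector) scores with a weight alpha.
# alpha=1.0 -> pure vector search; alpha=0.0 -> pure BM25.
def normalize(scores: dict) -> dict:
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(bm25: dict, vector: dict, alpha: float = 0.5) -> list:
    b, v = normalize(bm25), normalize(vector)
    fused = {doc: alpha * v.get(doc, 0.0) + (1 - alpha) * b.get(doc, 0.0)
             for doc in set(b) | set(v)}
    return sorted(fused, key=fused.get, reverse=True)

# Toy scores: doc_a is keyword-strong, doc_b and doc_c are semantically close
bm25_scores = {"doc_a": 12.0, "doc_b": 7.5, "doc_c": 1.0}
vector_scores = {"doc_a": 0.30, "doc_b": 0.85, "doc_c": 0.90}
print(hybrid_rank(bm25_scores, vector_scores, alpha=0.2))  # keyword-heavy
print(hybrid_rank(bm25_scores, vector_scores, alpha=0.8))  # semantic-heavy
```

With α = 0.2 the keyword-strong `doc_a` ranks first; with α = 0.8 the semantically closer `doc_b` and `doc_c` move up - exactly the ranking shift the demo lets you observe.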
## 🎬 Demo Scenarios - Try These Queries

### Scenario 1: Simple Factual Question

**Try:** "How do I reset my password?"

What happens:

- Hybrid retrieval finds the password-reset documentation
- The agent extracts the step-by-step procedure
- Returns the answer with a source citation

**Expected:** ~800ms response, high confidence (0.9+)

### Scenario 2: Multi-Step Problem

**Try:** "I can't access shared drives and VPN keeps disconnecting"

What happens:

- The agent recognizes two separate issues
- Retrieves docs for both problems
- Prioritizes troubleshooting steps
- Provides a comprehensive answer

**Expected:** ~1.5s response, shows reasoning steps

### Scenario 3: Conceptual Explanation

**Try:** "What is hybrid search and why is it better?"

What happens:

- Retrieves conceptual documentation
- Synthesizes an explanation from multiple sources
- Includes examples and comparisons

**Expected:** ~1.2s response, multiple sources cited
## What You'll Learn

By running the demo, you'll understand:

| Concept | What You'll See | Why It Matters |
|---|---|---|
| Hybrid Retrieval | The same query returns different doc rankings with different α values | You need both keyword and semantic search |
| Response Quality | Confidence scores vary by query type | Some queries are harder than others |
| Source Attribution | Every answer cites specific documents | Verifiable, trustworthy answers |
| Performance | Real latency and cost metrics | Plan your budget |
| Evaluation | RAGAS scores show system quality | Measure improvements |
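The per-query cost figures shown by the demo are simple token arithmetic. A back-of-the-envelope sketch; the token counts and per-1K-token prices below are placeholder assumptions, so substitute your model's actual rates:

```python
# Estimate cost per query from token counts and per-1K-token prices.
# The prices here are illustrative placeholders, not current provider rates.
def query_cost(prompt_tokens: int, completion_tokens: int,
               price_in_per_1k: float, price_out_per_1k: float) -> float:
    return (prompt_tokens / 1000 * price_in_per_1k
            + completion_tokens / 1000 * price_out_per_1k)

# e.g. 3,000 prompt tokens (question + retrieved context) and 300 output tokens
cost = query_cost(3000, 300, price_in_per_1k=0.003, price_out_per_1k=0.01)
print(f"${cost:.3f} per query")  # -> $0.012 per query
```

Retrieved context dominates the prompt side, which is why retrieval depth (how many docs you stuff into the prompt) is the main cost lever.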
## Running the Demo

### Quick Start

1. Navigate to the demo directory:

   ```bash
   cd apps/demo_ui
   ```

2. Run the setup script:

   ```bash
   ./run_demo.sh
   ```

3. Open your browser and navigate to `http://localhost:8080`.
### Alternative Setup

If you prefer manual setup:

```bash
# Navigate to demo directory
cd apps/demo_ui

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create environment file
echo "OPENAI_API_KEY=your_api_key_here" > .env
echo "DEMO_MODE=true" >> .env

# Run the demo
python app.py
```
### Demo Setup

The demo setup script will:

- ✅ Check prerequisites (Python, Node.js, Docker)
- ✅ Create the demo directory structure
- ✅ Generate demo configuration
- ✅ Create sample datasets
- ✅ Set up the web application
- ✅ Install required dependencies
### Configuration

The demo uses these configuration options (`demo_config.json`):

```json
{
  "name": "RecoAgent Demo",
  "version": "1.0.0",
  "features": [
    "rag",
    "hybrid_search",
    "multi_step",
    "evaluation"
  ],
  "datasets": [
    "it_support",
    "product_docs",
    "technical_faqs"
  ],
  "endpoints": {
    "api": "http://localhost:8000",
    "ui": "http://localhost:3000",
    "demo": "http://localhost:8080"
  }
}
```
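Loading this config needs nothing beyond the standard library. A minimal sketch; the validation rules are assumptions based on the fields shown above, and the temp-file round trip just stands in for reading `demo_config.json` from disk:

```python
import json
import os
import tempfile

# Sample config mirroring the fields documented above
SAMPLE = {
    "name": "RecoAgent Demo",
    "version": "1.0.0",
    "features": ["rag", "hybrid_search", "multi_step", "evaluation"],
    "datasets": ["it_support", "product_docs", "technical_faqs"],
    "endpoints": {"api": "http://localhost:8000",
                  "ui": "http://localhost:3000",
                  "demo": "http://localhost:8080"},
}

REQUIRED_KEYS = {"name", "version", "features", "datasets", "endpoints"}

def load_demo_config(path: str) -> dict:
    """Parse the config file and fail fast on missing top-level keys."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing keys: {sorted(missing)}")
    return config

# Round-trip the sample through a temp file to show the loader in use
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(SAMPLE, f)
config = load_demo_config(f.name)
os.unlink(f.name)
print(config["endpoints"]["demo"])  # -> http://localhost:8080
```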
## Demo Components

The demo consists of several components:
### 🖥️ Web Interface

- **Flask Application** - Simple web server
- **HTML Templates** - Clean, responsive UI
- **JavaScript** - Interactive question handling
- **Real-time Updates** - Live response streaming
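The web-interface piece can be approximated in a few lines of Flask. This is a hypothetical sketch of what the demo's question endpoint might look like; the route name and response fields are assumptions, and the real app calls the RAG agent rather than returning a stub:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ask", methods=["POST"])
def ask():
    # In the real demo this would call the RAG agent; here we return a stub
    # answer so the request/response shape is visible end to end.
    question = (request.get_json(force=True) or {}).get("question", "")
    return jsonify({
        "question": question,
        "answer": f"(stub) You asked: {question}",
        "sources": [],
        "confidence": 0.0,
    })

if __name__ == "__main__":
    app.run(port=8080)
```

The `__main__` guard means the module can be imported (e.g., by tests using `app.test_client()`) without starting the server.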
### 🤖 RecoAgent Backend

- **RAG Agent** - Core question answering
- **Hybrid Retriever** - BM25 + vector search
- **Multi-step Reasoning** - Complex question handling
- **Evaluation Engine** - Performance measurement

### Monitoring & Observability

- **LangSmith Integration** - Request tracing
- **Performance Metrics** - Latency and throughput
- **Error Tracking** - Failed requests and issues
- **Usage Analytics** - Popular questions and patterns
## Demo Scenarios in Detail
### Scenario 1: Simple Q&A

**Goal:** Answer straightforward questions about the knowledge base

**Example Questions:**

- "How do I reset my password?"
- "What is RecoAgent?"
- "How does hybrid search work?"

**Expected Behavior:**

- Quick response (< 2 seconds)
- High confidence score (> 0.8)
- Relevant source citations
- Clear, actionable answers
### Scenario 2: Multi-step Reasoning

**Goal:** Handle complex questions requiring multiple steps

**Example Questions:**

- "I'm having trouble with both VPN and email access. What should I do first?"
- "How do I evaluate my RAG system and what metrics should I focus on?"
- "What are the security implications of using RecoAgent in production?"

**Expected Behavior:**

- Multi-step reasoning process
- Tool usage for complex operations
- Intermediate reasoning steps
- Comprehensive final answer
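A first step in multi-step handling is splitting a compound question into sub-questions. The sketch below is naive and purely illustrative; a real agent lets the LLM do the decomposition rather than relying on surface string patterns:

```python
import re

def split_compound_question(question: str) -> list:
    # Split on common coordinating phrases. A real agent would prompt the
    # LLM to decompose the question; this only handles the easy cases.
    parts = re.split(r"\band\b|\balso\b|[;,]", question)
    return [p.strip(" ?.") + "?" for p in parts if p.strip(" ?.")]

subs = split_compound_question(
    "I'm having trouble with both VPN and email access"
)
print(subs)
```

Each sub-question can then be retrieved and answered independently before the agent composes a single prioritized response.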
### Scenario 3: Evaluation & Improvement

**Goal:** Measure system performance and identify areas for improvement

**Evaluation Metrics:**

- **Context Precision** - How relevant are the retrieved documents?
- **Context Recall** - Are all relevant documents retrieved?
- **Faithfulness** - How accurate are the generated answers?
- **Answer Similarity** - How similar are answers to the ground truth?

**Expected Results:**

- Context Precision: > 0.7
- Context Recall: > 0.6
- Faithfulness: > 0.8
- Answer Similarity: > 0.7
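Checking an evaluation run against these thresholds is one comparison per metric. A minimal sketch using the expected values above (the metric names are lowercased keys chosen for illustration):

```python
# Compare a RAGAS-style score dict against minimum acceptable thresholds.
THRESHOLDS = {
    "context_precision": 0.7,
    "context_recall": 0.6,
    "faithfulness": 0.8,
    "answer_similarity": 0.7,
}

def check_scores(scores: dict, thresholds: dict = THRESHOLDS) -> dict:
    # Returns {metric: passed?} so failing metrics are easy to surface.
    return {m: scores.get(m, 0.0) >= t for m, t in thresholds.items()}

# A hypothetical run where answer similarity falls short
run = {"context_precision": 0.82, "context_recall": 0.75,
       "faithfulness": 0.88, "answer_similarity": 0.65}
results = check_scores(run)
failed = [m for m, ok in results.items() if not ok]
if failed:
    print("Below threshold:", failed)
else:
    print("All metrics pass")
```

Wiring a check like this into CI turns the demo's evaluation step into a regression gate for your own deployment.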
## Customization

### Adding New Datasets

```python
# Add your own documents to the agent's knowledge base
custom_dataset = [
    "Your custom document 1",
    "Your custom document 2",
    "Your custom document 3",
]
agent.add_documents(custom_dataset)
```
### Custom Tools

```python
# Add custom tools to the agent
from recoagent.tools import Tool

def custom_search(query: str) -> str:
    # Your custom search logic goes here
    return f"Custom search results for: {query}"

agent.add_tool(Tool(name="custom_search", function=custom_search))
```
### Custom Evaluation

```python
# Define a custom evaluation metric; as an example, a simple token-overlap
# score between the response and the ground truth
def custom_metric(response: str, ground_truth: str) -> float:
    response_tokens = set(response.lower().split())
    truth_tokens = set(ground_truth.lower().split())
    if not truth_tokens:
        return 0.0
    return len(response_tokens & truth_tokens) / len(truth_tokens)

evaluator.add_metric("custom_metric", custom_metric)
```
## Troubleshooting

### Common Issues

**Demo won't start:**

- Check that all prerequisites are installed
- Verify API keys are set correctly
- Ensure ports 8080, 8000, and 3000 are available

**Questions return errors:**

- Verify your LLM API key has sufficient quota
- Check that documents are loaded correctly
- Review the browser console for detailed errors

**Evaluation fails:**

- Ensure test questions are provided
- Check that the evaluation dataset is compatible
- Verify RAGAS dependencies are installed
### Getting Help

If you encounter issues:

- **Check the logs** - Review console output for error messages
- **Verify configuration** - Ensure all settings are correct
- **Test components** - Try individual features separately
- **Community support** - Contact support@recohut.com for assistance
## Next Steps

After exploring the demo:

- **Tutorials** - Learn the fundamentals
- **How-To Guides** - Set up your own instance
- **Examples** - See working code
- **Reference** - Detailed API documentation

## Contributing

Want to improve the demo? We welcome contributions!

- **Report Issues** - Found a bug or have a suggestion?
- **Submit PRs** - Have a fix or improvement?
- **Add Scenarios** - Want to showcase a specific use case?
- **Improve UI** - Better user experience ideas?
## What You'll Accomplish

After 30 minutes with the demo:

- ✅ **See RAG in action** - Understand how retrieval + generation works
- ✅ **Try hybrid search** - Experience the quality difference
- ✅ **Run evaluations** - See actual RAGAS metrics
- ✅ **Understand costs** - See real $ per query
- ✅ **Gauge performance** - Know what latency to expect
- ✅ **Plan your build** - Know if RecoAgent fits your needs
## Quick Start Commands

```bash
# Option 1: Automated setup (recommended)
cd apps/demo_ui && ./run_demo.sh

# Option 2: Manual setup
cd apps/demo_ui
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
export OPENAI_API_KEY="sk-your-key"  # Optional - works without!
python app.py

# Open: http://localhost:8080
```

**No API key?** The demo works in mock mode - you can still explore the UI and understand the workflow!
## 💡 Why This Demo Matters

| Before Demo | After Demo | Value |
|---|---|---|
| "Will RAG work for my use case?" | "Yes! I just tried it with my questions" | Confidence to proceed |
| "How fast will it be?" | "850ms for simple, 1.5s for complex" | Set realistic SLAs |
| "What will it cost?" | "$0.01-0.05 per query" | Budget planning |
| "Is the quality good enough?" | "0.82 precision - better than expected!" | Quality validation |
| "Can it handle complex queries?" | "Yes, saw multi-step reasoning work" | Feature validation |

**Bottom Line:** 30 minutes with the demo can save you weeks of uncertain development!