
Platform Components

This solution leverages our battle-tested platform components:

Core Technologies

| Platform | Code Packages | Purpose | Learn More |
| --- | --- | --- | --- |
| Conversational Search Platform | packages/conversational_search/ | NLU engine, API mapping, memory management | Platform Details → |
| Caching Platform | packages/caching/ | NLU result caching, query pattern matching | Platform Details → |
| LLM Providers | packages/llm/ | Cost-based routing, fallback strategies | Platform Details → |
| Analytics Platform | packages/analytics/ | Query understanding tracking, performance monitoring | Platform Details → |

Performance & Optimization

| Platform | Code Packages | Purpose | Learn More |
| --- | --- | --- | --- |
| Token Optimization | packages/rag/token_optimization.py | Context compression, cost reduction | Platform Details → |
| Observability | packages/observability/ | Cost tracking, performance monitoring | Platform Details → |
| Rate Limiting | packages/rate_limiting/ | Cost throttling, queue management | Platform Details → |
| Security Platform | packages/security/ | Input validation, security filtering | Platform Details → |

Complete traceability: every platform maps to a specific code package with full documentation.

Detailed Components

🧠 Context-Aware NLU Engine

Location: packages/conversational_search/nlu_engine.py

Purpose: Enhanced natural language understanding with context awareness

Key Features:

  • Entity Extraction: Extracts price, color, size, brand, and custom entities
  • Intent Classification: Recognizes search, filter, compare, and help intents
  • Context Resolution: Resolves pronouns like "it", "that", "this" using conversation history
  • Progressive Enhancement: Uses existing Rasa components with smart fallbacks

Example Usage:

from recoagent.packages.conversational_search import ContextAwareNLUEngine

nlu = ContextAwareNLUEngine()
result = nlu.extract_filters("red dresses under $50", conversation_history)
# Returns: {"intent": "search", "filters": {"color": "red", "price": {"max": 50}}}

🗄️ Smart Memory Manager

Location: packages/conversational_search/memory.py

Purpose: Redis-based session storage with conversation context

Key Features:

  • Session Persistence: Stores conversation state across requests
  • Context Management: Maintains conversation history and entity context
  • TTL Management: Automatic cleanup of expired sessions
  • Fallback Storage: In-memory storage when Redis is unavailable

Example Usage:

from recoagent.packages.conversational_search import SimpleMemoryManager

memory = SimpleMemoryManager("redis://localhost:6379")
memory.save_state("session_123", {"last_query": "red dresses"})
context = memory.get_conversation_context("session_123")

🔄 API Mapping Engine

Location: packages/conversational_search/api_mapping.py

Purpose: Converts natural language to API requests and responses to natural language

Key Features:

  • Request Mapping: Converts extracted entities to API parameters
  • Response Processing: Transforms API responses to conversational language
  • Endpoint Selection: Chooses appropriate API endpoints based on intent
  • Error Handling: Graceful handling of API failures

Example Usage:

from recoagent.packages.conversational_search import SimpleAPIMappingEngine

mapping = SimpleAPIMappingEngine(config)
api_request = mapping.map_nl_to_request("red dresses", {"color": "red"})
nl_response = mapping.map_response_to_nl(api_response, "red dresses")

🚀 Resilient API Client

Location: packages/conversational_search/resilience.py

Purpose: Production-ready API client with resilience patterns

Key Features:

  • Circuit Breaker: Prevents cascade failures when APIs are down
  • Retry Logic: Exponential backoff for failed requests
  • Timeout Handling: Configurable timeouts for different operations
  • Error Classification: Categorizes errors for appropriate handling

Example Usage:

from recoagent.packages.conversational_search import ResilientAPIClient

client = ResilientAPIClient("http://api.example.com", timeout=30.0)
# get() is a coroutine, so await it from within an async function
response = await client.get("/search", params={"color": "red"})
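To make the resilience patterns above concrete, here is a minimal, self-contained sketch of a circuit breaker combined with exponential-backoff retry. The class and function names (CircuitBreaker, call_with_retry) and the thresholds are illustrative assumptions, not the package's actual API.

```python
import time

class CircuitBreaker:
    """Opens after a run of failures, blocking calls until a cooldown passes."""

    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            # Half-open: permit one trial call after the cooldown.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

def call_with_retry(fn, breaker, max_attempts=3, base_delay=0.5):
    """Retry with exponential backoff, consulting the breaker before each call."""
    for attempt in range(max_attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open")
        try:
            result = fn()
            breaker.record(success=True)
            return result
        except Exception:
            breaker.record(success=False)
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Once the breaker opens, calls fail fast for the cooldown period instead of piling retries onto an API that is already down, which is what prevents cascade failures.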

💾 Smart Cache Manager

Location: packages/conversational_search/smart_cache.py

Purpose: Intelligent caching for queries and responses

Key Features:

  • Query Caching: Caches frequent queries and their responses
  • Similarity Matching: Finds similar cached queries for better hit rates
  • TTL Management: Automatic expiration of stale cache entries
  • LRU Eviction: Removes least recently used entries when cache is full

Example Usage:

from recoagent.packages.conversational_search import SmartCacheManager

cache = SmartCacheManager(max_size=1000, default_ttl=3600)
cache.cache_query_response("red dresses", {"color": "red"}, response_data)
cached_response = cache.get_cached_response("red dresses")
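The package's similarity-matching implementation is not shown here, but the idea behind it can be sketched with a hypothetical helper that compares a new query against cached keys and accepts a near match. The function name, threshold, and use of difflib are assumptions for illustration only.

```python
from difflib import SequenceMatcher

def find_similar_cached(query, cache, threshold=0.8):
    """Return the cached entry whose key best matches `query`, if close enough."""
    normalized = query.lower().strip()
    best_key, best_score = None, 0.0
    for key in cache:
        score = SequenceMatcher(None, normalized, key.lower().strip()).ratio()
        if score > best_score:
            best_key, best_score = key, score
    if best_score >= threshold:
        return cache[best_key]
    return None  # no sufficiently similar query: treat as a cache miss

cache = {"red dresses under $50": {"results": 12}}
hit = find_similar_cached("Red dresses under 50", cache)
miss = find_similar_cached("blue shoes", cache)
```

Matching near-duplicate phrasings ("Red dresses under 50" vs. "red dresses under $50") is what lifts hit rates above what exact-key caching alone would achieve.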

Enhanced Components

🎯 Conversation Patterns

Location: packages/conversational_search/patterns.py

Purpose: Proven conversation patterns extracted from working notebooks

Patterns:

  • MultiTurnPattern: Manages conversation turns with limits
  • SlotFillingPattern: Progressive collection of required information
  • ConversationalSearchPattern: Combined pattern for product search

Example Usage:

from recoagent.packages.conversational_search import SlotFillingPattern

pattern = SlotFillingPattern(required_slots=["color", "price"])
graph = pattern.build_graph()
result = graph.invoke({"user_input": "I want red dresses"})

🛡️ Error Handling & Fallbacks

Location: packages/conversational_search/resilience.py

Purpose: Comprehensive error handling and fallback strategies

Components:

  • ErrorHandler: User-friendly error messages
  • FallbackManager: Cached responses for common issues
  • CircuitBreaker: Prevents system overload
  • RetryHandler: Automatic retry with backoff

Example Usage:

from recoagent.packages.conversational_search import ErrorHandler, FallbackManager

error_handler = ErrorHandler()
fallback_manager = FallbackManager()

# Add fallback for common issues
fallback_manager.add_fallback_response("help", {
    "text": "I can help you find products!",
    "suggestions": ["Search for products", "Browse categories"]
})

Integration Components

🔗 Existing RecoAgent Components

DialogueManager: packages/conversational/dialogue_manager.py

  • Purpose: Conversation state management
  • Usage: Tracks conversation context and dialogue states

EntityExtractor: packages/conversational/entity_extraction.py

  • Purpose: Entity extraction using Rasa
  • Usage: Extracts entities from natural language text

IntentRecognizer: packages/conversational/intent_recognition.py

  • Purpose: Intent classification using Rasa
  • Usage: Classifies user intents (search, filter, help, etc.)
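The three components above are typically composed in sequence: classify the intent, extract entities, then record the turn in dialogue state. The sketch below mimics that flow with tiny rule-based stand-ins; the real classes live in the packages listed above and their constructors and method signatures will differ.

```python
# Stand-in pipeline: rule-based mimics of IntentRecognizer, EntityExtractor,
# and DialogueManager, wired in the order the real components run.

class IntentRecognizer:
    def classify(self, text):
        if "compare" in text:
            return "compare"
        if "help" in text:
            return "help"
        return "search"

class EntityExtractor:
    COLORS = {"red", "blue", "green", "black"}

    def extract(self, text):
        # Keep only tokens that look like known color entities.
        return {"color": [w for w in text.lower().split() if w in self.COLORS]}

class DialogueManager:
    def __init__(self):
        self.history = []

    def update(self, text, intent, entities):
        # Append the turn so later context resolution can look back at it.
        self.history.append({"text": text, "intent": intent, "entities": entities})
        return self.history[-1]

recognizer, extractor, dialogue = IntentRecognizer(), EntityExtractor(), DialogueManager()
text = "show me red dresses"
turn = dialogue.update(text, recognizer.classify(text), extractor.extract(text))
```

The dialogue history accumulated here is what the context-aware NLU engine draws on to resolve pronouns like "it" and "that" in later turns.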

📊 Monitoring & Health

Health Monitoring: Built into ConversationalSearchEngine

  • Performance Metrics: Response times, error rates, cache hit rates
  • Health Status: Component status and system health
  • Statistics: Detailed performance and usage statistics

Example Usage:

# Get health status
health = engine.get_health_status()
# Returns: {"status": "healthy", "error_rate": 0.02, "cache_hit_rate": 0.75}

# Get detailed stats
stats = engine.get_engine_stats()
# Returns: performance stats, cache stats, component status

Configuration

API Configuration

# config/api_mapping.yaml
base_url: "http://localhost:8000"
timeout: 30.0
api_endpoints:
  search: "/api/search"
  recommend: "/api/recommend"
request_mapping:
  color:
    api_param: "color"
  price_max:
    api_param: "max_price"
response_mapping:
  results: "products"
  total: "count"

Engine Configuration

engine = ConversationalSearchEngine(
    api_config=config,
    enable_caching=True,     # Enable smart caching
    enable_resilience=True   # Enable resilience features
)

Performance Characteristics

Response Times

  • Cached Queries: <50ms
  • New Queries: <200ms
  • API Failures: <100ms (fallback responses)

Scalability

  • Concurrent Sessions: 1000+ (Redis-based)
  • Cache Hit Rate: 60-80% (typical)
  • Memory Usage: <100MB (with 1000 cached queries)

Reliability

  • Uptime: 99.9% (with circuit breaker)
  • Error Recovery: <1s (automatic retry)
  • Graceful Degradation: Always available (fallback responses)

Next Steps

  1. Implementation Guide → - Set up your conversational search
  2. Industry Applications → - See real-world use cases
  3. Case Studies → - Learn from successful implementations