# LLM Configuration

LLM configuration system for managing language model settings, parameters, and optimization.
## Overview
The LLM configuration system provides comprehensive management of language model settings, including model selection, parameter tuning, and optimization strategies.
## Core Features
- **Model Selection**: Choose from multiple LLM providers
- **Parameter Tuning**: Configure temperature, max tokens, and other parameters
- **Optimization**: Automatic parameter optimization
- **Cost Management**: Track and optimize costs
- **Performance Monitoring**: Monitor model performance
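These settings can be pictured as a single configuration record. The sketch below is hypothetical — the real `LLMConfig` class in `recoagent` may carry different fields — but it illustrates the kind of object the features above operate on:

```python
from dataclasses import dataclass, field

@dataclass
class LLMConfig:
    """Hypothetical sketch of an LLM configuration record."""
    provider: str                 # e.g. "openai", "anthropic"
    model: str                    # e.g. "gpt-4"
    temperature: float = 0.7      # sampling randomness, commonly 0.0-2.0
    max_tokens: int = 1000        # upper bound on generated tokens
    extra: dict = field(default_factory=dict)  # provider-specific options

config = LLMConfig(provider="openai", model="gpt-4", temperature=0.5)
```

Keeping the configuration as a plain data object (rather than a live client) makes it cheap to store, compare, and pass between the tuning and monitoring components.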
## Usage Examples

### Basic LLM Configuration

```python
from recoagent.llm.config import LLMConfigManager

# Create configuration manager
config_manager = LLMConfigManager()

# Create LLM configuration
llm_config = config_manager.create_config(
    provider="openai",
    model="gpt-4",
    temperature=0.7,
    max_tokens=1000
)

# Use configuration
llm = config_manager.get_llm(llm_config)
response = llm.invoke("Hello, world!")
```
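The documentation does not say what validation `create_config` performs on its keyword arguments. As an illustration only, a configuration manager typically range-checks parameters before building a client; the ranges below are common provider conventions, not `recoagent`'s documented limits:

```python
def validate_params(temperature: float, max_tokens: int) -> list[str]:
    """Collect human-readable errors for out-of-range LLM parameters.

    Hypothetical sketch; the bounds are typical provider conventions,
    not limits documented by recoagent.
    """
    errors = []
    if not 0.0 <= temperature <= 2.0:
        errors.append(f"temperature {temperature} outside [0.0, 2.0]")
    if max_tokens <= 0:
        errors.append(f"max_tokens {max_tokens} must be positive")
    return errors

validate_params(0.7, 1000)   # no errors for the basic example above
validate_params(3.5, -1)     # two errors
```

Collecting all errors into a list (instead of raising on the first) lets a caller report every bad parameter in one pass.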
### Advanced Configuration

```python
# Create advanced configuration with optimization
advanced_config = config_manager.create_advanced_config(
    provider="anthropic",
    model="claude-3",
    parameters={
        "temperature": 0.5,
        "max_tokens": 2000,
        "top_p": 0.9,
        "frequency_penalty": 0.1
    },
    optimization={
        "auto_tune": True,
        "cost_optimization": True,
        "performance_monitoring": True
    }
)
```
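The `cost_optimization` flag suggests the manager can weigh price against capability, but the selection logic is not documented. A hypothetical sketch of one such strategy — pick the cheapest model whose context window is large enough — is shown below; the catalog names, costs, and window sizes are made-up placeholders, not real pricing:

```python
def cheapest_capable(models: dict, min_context: int) -> str:
    """Return the lowest-cost model whose context window covers min_context.

    `models` maps name -> (cost_per_1k_tokens, context_window).
    """
    capable = {name: spec for name, spec in models.items()
               if spec[1] >= min_context}
    if not capable:
        raise ValueError("no model satisfies the context requirement")
    return min(capable, key=lambda name: capable[name][0])

# Illustrative catalog; all numbers are placeholders.
catalog = {
    "small":  (0.5, 8_000),
    "medium": (3.0, 32_000),
    "large":  (10.0, 128_000),
}
cheapest_capable(catalog, 16_000)  # "medium"
```

Separating the capability filter from the cost minimization keeps each rule easy to swap out, e.g. for a latency-based objective.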
## API Reference

### LLMConfigManager Methods
#### `create_config(provider: str, model: str, **kwargs) -> LLMConfig`

Creates an LLM configuration.

**Parameters:**

- `provider` (str): LLM provider name
- `model` (str): Model name
- `**kwargs`: Additional configuration parameters

**Returns:** `LLMConfig` object
#### `get_llm(config: LLMConfig) -> BaseLanguageModel`

Returns a configured LLM instance.

**Parameters:**

- `config` (LLMConfig): LLM configuration

**Returns:** Configured LLM instance
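`get_llm` presumably dispatches on the configuration's provider name, though the mechanism is not documented. A common pattern for this kind of factory is a registry mapping provider names to constructors; the sketch below is hypothetical, with an `EchoLLM` stand-in rather than a real provider client:

```python
from typing import Callable

_PROVIDERS: dict[str, Callable[..., object]] = {}

def register_provider(name: str):
    """Decorator that registers a constructor under a provider name."""
    def wrap(ctor):
        _PROVIDERS[name] = ctor
        return ctor
    return wrap

@register_provider("echo")
class EchoLLM:
    """Stand-in model that simply echoes its prompt."""
    def __init__(self, model: str, **kwargs):
        self.model = model
    def invoke(self, prompt: str) -> str:
        return prompt

def get_llm(provider: str, model: str, **kwargs):
    """Look up the provider's constructor and build a client."""
    try:
        ctor = _PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
    return ctor(model=model, **kwargs)

llm = get_llm("echo", model="echo-1")
llm.invoke("Hello")  # "Hello"
```

A registry like this lets new providers be added without touching the factory itself, which fits the multi-provider design listed under Core Features.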
## See Also

- LLM Providers - Multi-LLM provider factory