ml_research_tools.core#
Core components for ML Research Tools.
- class ml_research_tools.core.Config(logging=<factory>, redis=<factory>, llm_presets=<factory>)[source]#
Bases: object
Global application configuration.
- Parameters:
logging (LoggingConfig)
redis (RedisConfig)
llm_presets (LLMPresets)
- logging: LoggingConfig#
- redis: RedisConfig#
- llm_presets: LLMPresets#
- ml_research_tools.core.get_config(args=None)[source]#
Get configuration from file and command line arguments.
- class ml_research_tools.core.LLMConfig(base_url='https://api.openai.com/v1', model='gpt-3.5-turbo', max_tokens=None, temperature=0.01, top_p=1.0, retry_attempts=3, retry_delay=5, api_key=None, tier='standard')[source]#
Bases: object
LLM (Language Model) API configuration.
- class ml_research_tools.core.LLMPresets(default='standard', presets=<factory>)[source]#
Bases: object
Collection of LLM configurations with presets and tiering.
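The preset/tier idea can be sketched with plain dictionaries; the preset names and field values below are illustrative assumptions, not the library's shipped defaults:

```python
# Illustrative preset table; names and values are assumptions, not real defaults.
presets = {
    "standard": {"model": "gpt-3.5-turbo", "temperature": 0.01},
    "premium": {"model": "gpt-4", "temperature": 0.01},
}
default = "standard"

def resolve(tier=None):
    # Resolving a tier falls back to the default preset when none is requested.
    return presets[tier or default]

# The real class would be constructed roughly like (hypothetical values):
# from ml_research_tools.core import LLMConfig, LLMPresets
# tiers = LLMPresets(default="standard", presets={"standard": LLMConfig()})
```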
- ml_research_tools.core.setup_logging(log_level, log_file=None)[source]#
Set up logging configuration.
- class ml_research_tools.core.LLMClient(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Bases: object
Client for interacting with Language Models with preset configurations and caching.
This class provides a unified interface for making LLM API calls with:
- Configuration management (presets, tiers, parameter overrides)
- Automatic retries with exponential backoff
- Result caching
- Simple and chat-based interfaces
- config#
The LLM configuration to use for API calls
Initialize an LLM client with the specified configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., “standard”, “premium”)
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance for caching results
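A minimal construction sketch, assuming the package is installed and an API key is configured (the override values are arbitrary):

```python
# Keyword overrides the constructor accepts, per the signature above.
overrides = {"tier": "premium", "temperature": 0.2, "max_tokens": 1024}

# Requires the package and credentials, so shown commented out:
# from ml_research_tools.core import LLMClient
# client = LLMClient(**overrides)          # tier selects a preset
# client = LLMClient(preset="standard")    # or name a preset directly
```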
- __init__(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Initialize an LLM client with the specified configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., “standard”, “premium”)
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance for caching results
- call(messages, *, model=None, temperature=None, top_p=None, max_tokens=None, prefix='llm_chat', use_cache=True)[source]#
Call an LLM API with a complete chat history.
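A sketch of the expected message format; the OpenAI-style role/content dicts are an assumption based on the chat interface and underlying OpenAI client:

```python
# Assumed OpenAI-style chat history: a list of role/content dicts.
messages = [
    {"role": "system", "content": "You are a concise technical editor."},
    {"role": "user", "content": "Tighten this paragraph: ..."},
]

# Requires a configured client, so shown commented out:
# reply = client.call(messages, temperature=0.2, use_cache=True)
```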
- get_openai_client()[source]#
Get the raw OpenAI client.
- Return type: OpenAI
- Returns: The underlying OpenAI client instance
- property model#
Get the model name from the configuration.
- simple_call(text, system_prompt, *, model=None, temperature=None, top_p=None, max_tokens=None, prefix='llm', use_cache=True)[source]#
Call an LLM with a simple system prompt + user text pattern.
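Per the description above, simple_call wraps a system prompt and user text into a chat; the two-message equivalence below is an assumption drawn from the docstrings, not confirmed by the source:

```python
system_prompt = "Rewrite the user's text for clarity."
text = "teh quick brown fox"

# Assumed expansion: simple_call(text, system_prompt) behaves like call() with
# this two-message history (OpenAI-style roles).
equivalent_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": text},
]

# Requires a configured client, so shown commented out:
# result = client.simple_call(text, system_prompt, use_cache=True)
```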
- ml_research_tools.core.create_llm_client(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Create an LLMClient instance with the specified configuration.
This is a factory function that creates an LLMClient with the appropriate configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., “standard”, “premium”)
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance
- Return type: LLMClient
- Returns: An initialized LLMClient instance
- ml_research_tools.core.generate_completion_params(*, llm_client, **additional_params)[source]#
Generate parameters for completion API calls based on configuration.
This function resolves LLM configuration based on presets/tiers and returns a dictionary of parameters suitable for passing to the OpenAI completion API calls.
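The resolution described above can be sketched as a dict merge; the key names mirror LLMConfig fields, and the precedence rule (explicit per-call parameters win over configured defaults) is an assumption:

```python
# Defaults resolved from the preset/tier configuration.
configured = {"model": "gpt-3.5-turbo", "temperature": 0.01, "top_p": 1.0}
# Explicit per-call parameters.
additional = {"temperature": 0.5}

# Assumed precedence: additional_params override configured values.
params = {**configured, **additional}
```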
- class ml_research_tools.core.ServiceProvider(config)[source]#
Bases: object
A service provider that manages dependencies and services.
This class implements the service locator pattern, allowing services to be registered and retrieved. It supports lazy initialization of services and singleton instances.
Initialize the service provider with a configuration.
- Parameters:
config (Config) – The application configuration
- __init__(config)[source]#
Initialize the service provider with a configuration.
- Parameters:
config (Config) – The application configuration
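A minimal stand-in for the service-locator pattern described above, with lazy initialization and singleton instances; the real ServiceProvider's method names may differ (register/get here are assumptions):

```python
class TinyProvider:
    """Toy service locator: lazy factories, one instance per service name."""

    def __init__(self, config):
        self.config = config
        self._factories = {}
        self._instances = {}

    def register(self, name, factory):
        # Factories run lazily, on first lookup.
        self._factories[name] = factory

    def get(self, name):
        # Singleton semantics: each service is built once, then reused.
        if name not in self._instances:
            self._instances[name] = self._factories[name](self)
        return self._instances[name]

provider = TinyProvider(config={"app": "demo"})
provider.register("greeter", lambda p: f"hello from {p.config['app']}")
```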
- ml_research_tools.core.register_common_services(service_provider, default_llm_preset=None, default_llm_tier=None)[source]#
Register common services with the service provider.
- Parameters:
service_provider (ServiceProvider) – The service provider to register services with
- Return type:
- ml_research_tools.core.create_redis_cache(config)[source]#
Create a Redis cache instance from configuration.
- Parameters:
config (Config) – Application configuration
- Return type: Optional[RedisCache]
- Returns:
Redis cache instance or None if Redis is disabled
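The disabled-returns-None behavior can be sketched as follows; the config shape and the stand-in return value are assumptions for illustration only:

```python
def sketch_create_redis_cache(config):
    """Behavior sketch from the return description above (not the real code)."""
    redis_cfg = config.get("redis", {})
    if not redis_cfg.get("enabled", False):
        return None  # Redis disabled: callers get no cache and skip caching
    # Stand-in for building a RedisCache from the configured connection details.
    return {"host": redis_cfg.get("host", "localhost")}
```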
- ml_research_tools.core.create_default_llm_client(config, redis_cache=None)[source]#
Create a default LLM client from configuration.
- Parameters:
  - config (Config) – Application configuration
  - redis_cache (Optional[RedisCache]) – Optional Redis cache for caching results
- Return type: LLMClient
- Returns:
LLM client instance
- ml_research_tools.core.setup_services(config, default_llm_preset=None, default_llm_tier=None)[source]#
Set up a service provider with common services.
- Parameters:
config (Config) – Application configuration
- Return type: ServiceProvider
- Returns:
Configured service provider
Submodules#
BaseTool
LoggingConfig
RedisConfig
LLMConfig
LLMPresets
Config
load_config_file()
add_config_args()
get_config()
Message
get_llm_config()
LLMClient
create_llm_client()
generate_completion_params()
get_console()
setup_logging()
get_logger()
register_common_services()
create_redis_cache()
create_default_llm_client()
setup_services()
ServiceProvider