Language Model (LLM) Utilities for ML Research Tools.
This module provides a set of utilities for interacting with Large Language Models:
LLMClient Class: Complete client for LLM interactions with:
- Configuration management (presets, tiers)
- Automatic retries with exponential backoff
- Result caching
- Simple and chat-based interfaces
Factory Function: Easy client creation through create_llm_client
Example

# Create client with default preset
client = create_llm_client()
response = client.simple_call(
    text="Summarize the following paper: [paper text]",
    system_prompt="You are a helpful academic assistant.",
)

# Use a specific preset
client = create_llm_client(preset="premium")
response = client.simple_call(
    text="Explain this complex concept...",
    system_prompt="You are a helpful academic assistant.",
)

# Chat interface
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you today?"},
    {"role": "user", "content": "Can you explain quantum computing?"},
]
client = create_llm_client(tier="premium")
response = client.call(messages=messages)

# For raw OpenAI client access
openai_client = create_llm_client().get_openai_client()
# Generate parameters for OpenAI API calls
params = generate_completion_params(
    llm_client=client,
    messages=messages,
    stream=True,
)
- class ml_research_tools.core.llm_tools.Message[source]#
Bases:
TypedDict
Type definition for a chat message.
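A Message pairs a role with its content, matching the entries in the chat example above; a minimal sketch (the annotation assumes Message is importable from this module, and no fields beyond role and content are assumed):

from ml_research_tools.core.llm_tools import Message

message: Message = {
    "role": "user",  # typically "system", "user", or "assistant"
    "content": "Can you explain quantum computing?",
}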
- ml_research_tools.core.llm_tools.get_llm_config(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None)[source]#
Get an LLM configuration by resolving preset/tier and applying overrides.
This factory function selects the appropriate LLM configuration based on:
1. Preset name or tier
2. Individual parameter overrides
3. Default configuration if nothing else is specified
- Parameters: the keyword arguments shown in the signature above; preset/tier select a base configuration and the remaining keywords override individual fields.
- Return type:
LLMConfig
- Returns:
LLMConfig object with all parameters resolved
- Raises:
ValueError – If no valid configuration can be determined
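For illustration, a hedged sketch of resolving a configuration from a tier and overriding individual fields (the tier name "standard" is taken from the examples above; the final line assumes LLMConfig exposes the resolved model name as an attribute):

from ml_research_tools.core.llm_tools import get_llm_config

llm_config = get_llm_config(
    tier="standard",
    temperature=0.2,   # override the preset's sampling temperature
    max_tokens=1024,   # override the preset's token limit
)
print(llm_config.model)  # assumed attribute on LLMConfig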
- class ml_research_tools.core.llm_tools.LLMClient(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Bases:
object
Client for interacting with Language Models with preset configurations and caching.
This class provides a unified interface for making LLM API calls with:
- Configuration management (presets, tiers, parameter overrides)
- Automatic retries with exponential backoff
- Result caching
- Simple and chat-based interfaces
- config#
The LLM configuration to use for API calls
Initialize an LLM client with the specified configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., "standard", "premium")
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance for caching results
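Constructing the client directly takes the same keywords as create_llm_client; a minimal sketch (the tier name and override values are illustrative):

from ml_research_tools.core.llm_tools import LLMClient

client = LLMClient(
    tier="standard",    # pick a configured tier instead of a preset
    temperature=0.0,    # per-client override
    retry_attempts=3,   # retries use exponential backoff
)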
- __init__(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Initialize an LLM client with the specified configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., "standard", "premium")
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance for caching results
- property model#
Get the model name from the configuration.
- get_openai_client()[source]#
Get the raw OpenAI client.
- Return type:
OpenAI
- Returns:
The underlying OpenAI client instance
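The raw client is useful for features the wrapper does not expose, such as streaming; a hedged sketch using the standard OpenAI chat completions interface (the streaming loop is illustrative, not part of this module):

openai_client = client.get_openai_client()
stream = openai_client.chat.completions.create(
    model=client.model,  # model name resolved from the client's configuration
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")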
- simple_call(text, system_prompt, *, model=None, temperature=None, top_p=None, max_tokens=None, prefix='llm', use_cache=True)[source]#
Call an LLM with a simple system prompt + user text pattern.
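A short sketch of the system prompt + user text pattern with per-call overrides and cache control (the prefix value is illustrative and is assumed to namespace cached results):

response = client.simple_call(
    text="Rewrite this abstract in plain language: [abstract text]",
    system_prompt="You are a helpful academic assistant.",
    temperature=0.3,        # per-call override of the configured value
    prefix="llm_abstract",  # illustrative cache-key prefix
    use_cache=True,         # reuse a cached result when available
)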
- call(messages, *, model=None, temperature=None, top_p=None, max_tokens=None, prefix='llm_chat', use_cache=True)[source]#
Call an LLM API with a complete chat history.
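A sketch of the chat interface with an explicit message history; each entry follows the Message shape defined above:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "List three applications of quantum computing."},
]
response = client.call(
    messages=messages,
    max_tokens=512,    # per-call override
    use_cache=False,   # bypass caching for this call
)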
- ml_research_tools.core.llm_tools.create_llm_client(*, preset=None, tier=None, api_key=None, base_url=None, model=None, temperature=None, top_p=None, max_tokens=None, retry_attempts=None, retry_delay=None, config=None, redis_cache=None)[source]#
Create an LLMClient instance with the specified configuration.
This is a factory function that creates an LLMClient with the appropriate configuration.
- Parameters:
  - preset (Optional[str]) – Name of the preset configuration to use
  - tier (Optional[str]) – Tier of model to use (e.g., "standard", "premium")
  - config (Union[Config, LLMConfig, LLMPresets, None]) – Configuration object (Config, LLMConfig, or LLMPresets)
  - redis_cache (Optional[RedisCache]) – Redis cache instance
- Return type:
LLMClient
- Returns:
An initialized LLMClient instance
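A hedged sketch that combines a tier with retry overrides and an optional cache (the RedisCache construction is left out because its import path and constructor are not shown in this module):

from ml_research_tools.core.llm_tools import create_llm_client

client = create_llm_client(
    tier="premium",
    retry_attempts=5,   # retried with exponential backoff
    retry_delay=2,
    # redis_cache=cache,  # pass an existing RedisCache instance to enable caching
)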
- ml_research_tools.core.llm_tools.generate_completion_params(*, llm_client, **additional_params)[source]#
Generate parameters for completion API calls based on configuration.
This function resolves LLM configuration based on presets/tiers and returns a dictionary of parameters suitable for passing to OpenAI completion API calls.
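As a sketch, the returned dictionary can be forwarded to the raw OpenAI client; the exact keys it contains are an assumption (model and sampling parameters resolved from the client's configuration), with messages and stream passed through as additional parameters:

from ml_research_tools.core.llm_tools import generate_completion_params

params = generate_completion_params(
    llm_client=client,
    messages=messages,  # forwarded as an additional parameter
    stream=True,
)
completion = client.get_openai_client().chat.completions.create(**params)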