API Reference
This section documents the darca-llm API.
—
AIClient
- class darca_llm.AIClient(backend: str = 'openai')[source]
Bases: object

A unified client for interacting with LLMs using the darca pluggable backend system. Defaults to the OpenAI backend.

This class acts as a simple wrapper, delegating all method calls to the selected backend. Currently, only openai is supported.
- Parameters:
backend – The backend to use (default: “openai”)
- Raises:
LLMException – If the backend is unsupported
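The docs above describe AIClient as a thin wrapper that delegates every call to the selected backend. A minimal, self-contained sketch of that delegation pattern follows; AIClientSketch and _EchoBackend are hypothetical stand-ins for illustration, not darca-llm's actual implementation.

```python
class _EchoBackend:
    # Hypothetical stand-in backend, used only for this sketch.
    def get_raw_response(self, system, user, llm="gpt-4", temperature=1.0):
        return f"[{llm}] {user}"


class AIClientSketch:
    # Sketch of the wrapper pattern the docs describe: select a backend
    # by name, then forward all attribute access to it.
    _BACKENDS = {"openai": _EchoBackend}

    def __init__(self, backend: str = "openai"):
        if backend not in self._BACKENDS:
            # darca-llm raises LLMException here instead
            raise ValueError(f"Unsupported backend: {backend}")
        self._backend = self._BACKENDS[backend]()

    def __getattr__(self, name):
        # Delegate any method call to the selected backend instance.
        return getattr(self._backend, name)


client = AIClientSketch()
print(client.get_raw_response("ctx", "hello"))  # [gpt-4] hello
```

The same pattern means new backends can be registered without changing the wrapper's public interface.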
—
OpenAIClient
- class darca_llm.OpenAIClient[source]
Bases: BaseLLMClient

LLM backend that uses OpenAI’s GPT models via their official API.

This class implements the BaseLLMClient interface, using the OpenAI Python client library to make requests to GPT models (e.g., gpt-4, gpt-3.5-turbo). Handles direct interaction with OpenAI’s API.

- get_raw_response(system: str, user: str, llm: str = 'gpt-4', temperature: float = 1.0) → str[source]
Send a system and user prompt to OpenAI and return the chat response.
- Parameters:
system (str) – The system message providing context or instructions to the LLM.
user (str) – The user message, typically containing the main query.
llm (str) – The identifier of the OpenAI model to use (e.g., gpt-4).
temperature (float) – Sampling temperature for the request, controlling response randomness.
- Returns:
The text content of the LLM response.
- Return type:
str
- Raises:
LLMResponseError – If the API request fails or the response cannot be parsed.
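The system/user/llm/temperature parameters above correspond to the standard fields of an OpenAI chat-completions request. The sketch below shows how they typically map onto such a payload; build_chat_request is a hypothetical helper (it only builds the dict and makes no API call), not part of darca-llm.

```python
def build_chat_request(
    system: str, user: str, llm: str = "gpt-4", temperature: float = 1.0
) -> dict:
    # Hypothetical helper: map the documented parameters onto the
    # message structure used by OpenAI's chat-completions endpoint.
    return {
        "model": llm,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }


payload = build_chat_request("You are concise.", "Summarise this file.")
print(payload["model"])  # gpt-4
```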
—
BaseLLMClient
- class darca_llm.BaseLLMClient[source]
Bases: ABC

Abstract base class for LLM backends. Provides shared logic for file content processing.

Concrete implementations must implement the get_raw_response() method, which handles sending prompts to the respective LLM.

- Methods:
get_raw_response(system, user, ...)
get_file_content_response(system, user, ...)
- get_file_content_response(system: str, user: str, llm: str = 'gpt-4', temperature: float = 1.0) → str[source]
Process a prompt to return the content of a single file.
This method:
- Sends the prompt to the LLM via get_raw_response().
- Verifies that the returned response contains exactly one code block.
- Strips any Markdown or code block formatting from the response.
- Parameters:
system (str) – The system message for the LLM.
user (str) – The user query or request for the LLM, typically referencing a file content request.
llm (str) – The identifier for the LLM model to use (e.g., gpt-4).
temperature (float) – The sampling temperature for the LLM, controlling creativity in the response.
- Returns:
Cleaned-up text containing the single file content.
- Return type:
str
- Raises:
LLMContentFormatError – If the response has multiple code blocks, or if it cannot be properly stripped of Markdown/code formatting.
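The post-processing described above (exactly one code block, fences stripped) can be sketched with a short regular expression. This is a hypothetical reconstruction of the documented behaviour, not darca-llm's actual code; it raises ValueError where the library raises LLMContentFormatError.

```python
import re

def strip_single_code_block(response: str) -> str:
    # Sketch of the documented check: the response must contain exactly
    # one fenced code block, whose contents are returned without fences.
    blocks = re.findall(r"```[^\n]*\n(.*?)```", response, flags=re.DOTALL)
    if len(blocks) != 1:
        # darca-llm raises LLMContentFormatError here instead
        raise ValueError("expected exactly one code block in the response")
    return blocks[0].rstrip("\n")


raw = "Here is the file:\n```python\nprint('hi')\n```\n"
print(strip_single_code_block(raw))  # print('hi')
```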
- abstractmethod get_raw_response(system: str, user: str, llm: str = 'gpt-4', temperature: float = 1.0) → str[source]
Send a raw prompt (consisting of a system message and a user message) to the LLM and return the string response.
- Parameters:
system (str) – The system-level instructions or context for the LLM.
user (str) – The user-level query or request for the LLM.
llm (str) – The identifier for the LLM model to use (e.g., gpt-4).
temperature (float) – The sampling temperature for the LLM, controlling creativity in the response.
- Returns:
The raw response text returned by the LLM.
- Return type:
str
- Raises:
LLMResponseError – If the LLM request fails or returns an invalid response.
LLMAPIKeyMissing – If the required API key is not set in the environment.
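Since get_raw_response() is the only abstract method, writing a new backend amounts to subclassing and implementing it. The sketch below uses a stand-in ABC that mirrors the documented interface (BaseLLMClientSketch and CannedClient are illustrative names, not darca-llm classes).

```python
from abc import ABC, abstractmethod

class BaseLLMClientSketch(ABC):
    # Stand-in mirroring the documented interface.
    @abstractmethod
    def get_raw_response(self, system: str, user: str,
                         llm: str = "gpt-4", temperature: float = 1.0) -> str:
        ...


class CannedClient(BaseLLMClientSketch):
    # A concrete backend only needs to implement get_raw_response;
    # shared logic (e.g. file-content processing) is inherited.
    def get_raw_response(self, system, user, llm="gpt-4", temperature=1.0):
        return f"{llm}: {user}"


print(CannedClient().get_raw_response("ctx", "ping"))  # gpt-4: ping
```

Instantiating the abstract base directly fails with a TypeError, which is how the ABC machinery enforces the contract.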
—
Exceptions
- class darca_llm.LLMException(message, error_code=None, metadata=None, cause=None)[source]
Bases: DarcaException

Base class for all darca-llm exceptions.
- class darca_llm.LLMAPIKeyMissing(message, error_code=None, metadata=None, cause=None)[source]
Bases: LLMException

Raised when the API key is missing for the selected LLM provider.

This exception indicates that the environment variable for the API key (e.g., OPENAI_API_KEY) is not set, preventing the LLM from being used.
- class darca_llm.LLMContentFormatError(message, error_code=None, metadata=None, cause=None)[source]
Bases: LLMException

Raised when the input content contains multiple code blocks, or when the response cannot be properly stripped of Markdown/code block formatting.

This exception enforces that the LLM response includes exactly one code block and that the format matches the expected structure.
- class darca_llm.LLMResponseError(message, error_code=None, metadata=None, cause=None)[source]
Bases: LLMException

Raised when an LLM API request fails or returns malformed data.

This exception can be raised due to API connectivity issues, invalid responses, or unexpected errors from the LLM provider.
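Because every darca-llm exception derives from LLMException, a single except-clause on the base class catches all of them. The sketch below demonstrates this with stand-in classes that mirror the documented hierarchy (the real classes live in darca_llm).

```python
# Stand-in hierarchy mirroring the documented exception classes.
class LLMException(Exception): ...
class LLMAPIKeyMissing(LLMException): ...
class LLMResponseError(LLMException): ...


def classify(exc: Exception) -> str:
    # One handler on the base class is enough for any darca-llm error;
    # unrelated exceptions fall through to the generic clause.
    try:
        raise exc
    except LLMException:
        return "llm error"
    except Exception:
        return "other"


print(classify(LLMResponseError("boom")))  # llm error
```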