Welcome to darca-llm’s documentation!

Project Overview

darca-llm

Modular, backend-agnostic interface for interacting with large language models (LLMs).

This library is part of the darca-* ecosystem and provides a plug-and-play, extensible interface to communicate with LLM providers like OpenAI. It is designed with testability, structure, and future integration in mind.


Features

  • ✅ Unified AIClient interface for all LLMs

  • 🔌 OpenAI integration out of the box (GPT-4, GPT-3.5)

  • 🧱 Extensible abstract interface (BaseLLMClient) for new providers (see the sketch after this list)

  • 🧪 Full pytest support with 100% coverage

  • 📦 Rich exception handling with structured DarcaException

  • 📋 Markdown-aware content formatting using _strip_markdown_prefix

  • 🧠 Logging support via darca-log-facility
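To add a new provider, subclass BaseLLMClient. The exact abstract surface is not shown on this page, so the following is a minimal sketch assuming the abstract method mirrors AIClient.get_raw_response(system=..., user=...):

from darca_llm import BaseLLMClient

class EchoClient(BaseLLMClient):
    """Toy backend that echoes its input (illustrative only)."""

    def get_raw_response(self, system: str, user: str) -> str:
        # A real backend would call its provider's API here and
        # raise LLMResponseError on failure.
        return f"[{system}] {user}"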

Quickstart

  1. Install dependencies:

     make install

  2. Run all quality checks (format, test, docs):

     make check

  3. Run tests only:

     make test

  4. Use the client:

from darca_llm import AIClient

ai = AIClient()
response = ai.get_raw_response(
    system="You are a helpful assistant.",
    user="What is a Python decorator?"
)
print(response)
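The OpenAI backend requires an API key; if none is found, LLMAPIKeyMissing is raised. Assuming the backend follows the standard OpenAI SDK convention of reading OPENAI_API_KEY from the environment (an assumption, not confirmed by this page), you can set it before constructing the client:

import os

# Assumption: the OpenAI backend honours the standard OPENAI_API_KEY
# environment variable used by the OpenAI SDK.
os.environ["OPENAI_API_KEY"] = "sk-..."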

Using get_file_content_response

The get_file_content_response() method prompts the LLM for file content and returns a single, cleanly formatted code block.

Example:

from darca_llm import AIClient

client = AIClient()

user_prompt = "Provide the content of a simple Python file."

result = client.get_file_content_response(
    system="Explain the code.",
    user=user_prompt
)
print(result)

This method ensures that the response contains exactly one code block and strips its markdown fencing using _strip_markdown_prefix().
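To illustrate the effect (this sketch is not the library's implementation), stripping reduces a fenced response to its bare code:

# Illustrative sketch only, not the actual _strip_markdown_prefix().
raw = "```python\nprint('hello')\n```"

lines = raw.strip().splitlines()
if lines and lines[0].startswith("```"):
    lines = lines[1:]   # drop the opening fence and language tag
if lines and lines[-1].startswith("```"):
    lines = lines[:-1]  # drop the closing fence
print("\n".join(lines))  # -> print('hello')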

Error Handling

All exceptions are subclasses of DarcaException and include:

  • LLMException: Base for all LLM-specific errors

  • LLMAPIKeyMissing: Raised when the API key is missing for the selected backend

  • LLMContentFormatError: Raised when multiple code blocks are detected in the response, or when the response cannot be properly stripped of markdown/code-block formatting

  • LLMResponseError: Raised when the LLM provider returns an error or response parsing fails

All exceptions include:

  • error_code

  • message

  • Optional metadata

  • Optional cause

  • Full stack trace logging
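A short usage sketch for catching these errors (assuming the exception classes are importable from the darca_llm package; the import path is an assumption):

from darca_llm import AIClient, LLMContentFormatError, LLMResponseError

ai = AIClient()
try:
    code = ai.get_file_content_response(
        system="Explain the code.",
        user="Provide the content of a simple Python file.",
    )
except LLMContentFormatError as exc:
    # Structured DarcaException fields listed above.
    print(exc.error_code, exc.message)
except LLMResponseError as exc:
    print("Provider error:", exc)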

Documentation

Build and view the docs:

make docs

Open the HTML documentation at:

docs/build/html/index.html

For detailed usage, refer to the usage.rst documentation.

Community & Contribution

Contributing

We welcome contributions to darca-llm!

Whether you’re improving functionality, adding support for new backends, writing docs, or fixing bugs — we’d love your help.

Issue templates and feature request forms are available in the repository under the Issues tab.

How to Contribute

  1. Fork the repository

  2. Create a new feature branch:

     git checkout -b my-feature

  3. Install dependencies:

     make install

  4. Run full checks locally (format, test, docs):

     make check

  5. Push your branch and create a pull request

Quick Checks

If you want to run things quickly between commits:

  • Tests only:

    make test
    
  • Format only:

    make format
    
  • Pre-commit hooks only:

    make precommit
    

Thank you for contributing! 💙