DSPy is a framework for programming language model pipelines. Braintrust instruments DSPy runs so you can inspect module spans, adapter formatting and parsing steps, underlying LLM calls, and tool usage in a single trace.

Setup

Install Braintrust and DSPy:
# uv
uv add braintrust dspy
# pip
pip install braintrust dspy
Set your API keys before you run your app:
.env
BRAINTRUST_API_KEY=<your-braintrust-api-key>
OPENAI_API_KEY=<your-openai-api-key>
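If you prefer to fail fast when a key is missing, a small helper can validate the environment before you initialize the logger. This is a minimal sketch; `require_env` is a hypothetical helper for illustration, not part of the Braintrust SDK:

```python
import os


def require_env(*names: str) -> dict[str, str]:
    """Return the requested environment variables, raising if any are unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}


# Example: validate keys up front, then pass them to init_logger.
# keys = require_env("BRAINTRUST_API_KEY", "OPENAI_API_KEY")
```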

Instrument DSPy

Braintrust’s DSPy integration requires dspy>=2.6.0.

Automatic instrumentation

braintrust.auto_instrument() patches dspy.configure() so Braintrust’s DSPy callback is added automatically when you configure DSPy.
trace-dspy-auto.py
import os

import braintrust

braintrust.auto_instrument()
braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="dspy-example",  # Replace with your project name
)

import dspy

lm = dspy.LM("openai/gpt-5-mini")
dspy.configure(lm=lm)

cot = dspy.ChainOfThought("question -> answer")
result = cot(question="What is the capital of France?")
print(result.answer)
If you only want to patch DSPy instead of enabling all supported Python integrations, use patch_dspy().
trace-dspy-patch-only.py
import os

from braintrust import init_logger
from braintrust.integrations.dspy import patch_dspy

init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="dspy-example",  # Replace with your project name
)
patch_dspy()

import dspy

lm = dspy.LM("openai/gpt-5-mini")
dspy.configure(lm=lm)

predict = dspy.Predict("question -> answer")
result = predict(question="What is 2 + 2?")
print(result.answer)

Manual callback setup

Use BraintrustDSpyCallback() when you want to attach the callback explicitly. If you also want detailed LiteLLM token and cost spans, patch LiteLLM before importing DSPy.
trace-dspy-manual.py
import os

from braintrust import init_logger
from braintrust.integrations.dspy import BraintrustDSpyCallback
from braintrust.integrations.litellm import patch_litellm

patch_litellm()

import dspy

init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="dspy-example",  # Replace with your project name
)

# Disable DSPy's disk cache if you want every LiteLLM call to be traced.
dspy.configure_cache(enable_disk_cache=False, enable_memory_cache=True)

lm = dspy.LM("openai/gpt-5-mini")
dspy.configure(lm=lm, callbacks=[BraintrustDSpyCallback()])

cot = dspy.ChainOfThought("question -> answer")
result = cot(question="What is the capital of France?")
print(result.answer)
Examples

In Braintrust, a DSPy execution typically appears as a parent DSPy module span with child spans for adapter work and model calls. Example trace shape:
trace-dspy-auto
└── dspy.module.ChainOfThought
    ├── dspy.adapter.format
    ├── dspy.lm
    └── dspy.adapter.parse
Braintrust captures:
  • DSPy module spans such as Predict, ChainOfThought, and other module executions
  • Adapter formatting and parsing spans
  • LLM call inputs, outputs, and latency on dspy.lm spans
  • Tool execution when your DSPy program invokes tools
  • Extra LiteLLM token and cost spans when you patch LiteLLM
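For the tool-execution bullet above, any plain Python function passed as a tool to an agent module such as `dspy.ReAct` will show up as tool spans in the trace. A minimal sketch, with a toy lookup function standing in for a real tool:

```python
def lookup_population(city: str) -> str:
    """Return a rough population figure for a city (toy lookup for illustration)."""
    data = {"Paris": "about 2.1 million", "Tokyo": "about 14 million"}
    return data.get(city, "unknown")


# Passing the function to a DSPy agent module produces tool-execution
# spans alongside the module, adapter, and LLM spans shown above:
#   agent = dspy.ReAct("question -> answer", tools=[lookup_population])
#   result = agent(question="How many people live in Paris?")
```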
