Mistral AI provides language models, embeddings, and agent APIs. Braintrust instruments Mistral calls so you can inspect prompts, responses, streaming behavior, embeddings, fill-in-the-middle completions, and agent operations. Before you start, add your Mistral API key to your organization's AI providers or to a project's AI providers.

Setup

You can either trace the native @mistralai/mistralai SDK or use the Braintrust gateway through the OpenAI SDK.
# pnpm
pnpm add braintrust @mistralai/mistralai  # native SDK
pnpm add braintrust openai                # Braintrust gateway
# npm
npm install braintrust @mistralai/mistralai  # native SDK
npm install braintrust openai                # Braintrust gateway
Set your environment variables:
.env
BRAINTRUST_API_KEY=<your-braintrust-api-key>
MISTRAL_API_KEY=<your-mistral-api-key>

# For organizations on the EU data plane, use https://api-eu.braintrust.dev
# For self-hosted deployments, use your data plane URL
# BRAINTRUST_API_URL=<your-braintrust-api-url>
If you only use the Braintrust gateway, your application code only needs BRAINTRUST_API_KEY.
API keys are stored as one-way cryptographic hashes, never in plaintext.

Instrument Mistral

Use wrapMistral() or Braintrust's import hook to trace the native @mistralai/mistralai SDK, or use the Braintrust gateway with the OpenAI SDK if you already have an OpenAI-compatible client. Native SDK tracing requires @mistralai/mistralai >= 1.0.0.

Automatic instrumentation

Run your app with Braintrust’s import hook to patch the native Mistral SDK automatically.
trace-mistral-auto.ts
import { initLogger } from "braintrust";
import { Mistral } from "@mistralai/mistralai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });
const response = await client.chat.complete({
  model: "mistral-small-latest",
  messages: [{ role: "user", content: "Explain tracing in one sentence." }],
});

console.log(response.choices?.[0]?.message?.content);
Run with the import hook:
node --import braintrust/hook.mjs trace-mistral-auto.ts
If you’re using a bundler, see Trace LLM calls for plugin and loader setup.

Manual wrapper

Use wrapMistral() when you want to instrument only selected Mistral clients instead of patching the SDK globally.
trace-mistral-wrap.ts
import { initLogger, wrapMistral } from "braintrust";
import { Mistral } from "@mistralai/mistralai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = wrapMistral(
  new Mistral({ apiKey: process.env.MISTRAL_API_KEY }),
);
const response = await client.chat.complete({
  model: "mistral-small-latest",
  messages: [{ role: "user", content: "Explain tracing in one sentence." }],
});

console.log(response.choices?.[0]?.message?.content);

Braintrust gateway

Use the Braintrust gateway with the OpenAI SDK.
trace-mistral-gateway.ts
import { initLogger } from "braintrust";
import OpenAI from "openai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev/v1",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const response = await client.responses.create({
  model: "mistral-small-latest",
  input: "Explain tracing in one sentence.",
});

console.log(response.output_text);

Examples

In Braintrust, a Mistral trace typically includes the root LLM span plus any child work created by streaming or agent execution. Braintrust captures:
  • Prompt input and model output
  • Model name and request metadata
  • Streaming timing, including time to first token when available
  • Embeddings and fill-in-the-middle requests for the native TypeScript and Python SDKs
  • Agent operations for the native Python SDK
  • Parent-child relationships when Mistral calls happen inside an existing Braintrust span

Evaluate with Mistral

You can evaluate Mistral-powered tasks with Braintrust the same way you evaluate other model providers. The example below uses the Braintrust gateway so the same pattern works in both TypeScript and Python.
mistral-eval.ts
import { Eval } from "braintrust";
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev/v1",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

Eval("Mistral evaluation", {
  data: () => [
    { input: "What is 2 + 2?", expected: "4" },
    { input: "What is the capital of France?", expected: "Paris" },
  ],
  task: async (input) => {
    const response = await client.responses.create({
      model: "mistral-small-latest",
      input,
    });
    return response.output_text;
  },
  scores: [
    ({ output, expected }) => ({
      name: "accuracy",
      score: output === expected ? 1 : 0,
    }),
  ],
});
For more evaluation patterns, see Create experiments.
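Exact string comparison is brittle for free-form model output. A small sketch of a normalizing exact-match scorer you could drop into the scores array instead (the `normalizedExactMatch` name is ours, not a Braintrust built-in):

```typescript
// Normalizes text before comparison so "Paris." and "paris" both count as matches.
export function normalizedExactMatch(output: string, expected: string): number {
  const clean = (s: string) => s.trim().toLowerCase().replace(/[.!?]+$/, "");
  return clean(output) === clean(expected) ? 1 : 0;
}

// Hypothetical usage inside an Eval's scores array:
// scores: [({ output, expected }) => ({
//   name: "normalized_accuracy",
//   score: normalizedExactMatch(output, expected ?? ""),
// })],
```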

Resources