35 changes: 25 additions & 10 deletions src/pages/docs/observe/features/quickstart.mdx
---
title: "Set up observability"
description: "Instrument your application and send traces to an Observe project so you can monitor LLM calls, latency, and cost in one place."
---

## About

This is how you connect your application to Future AGI so LLM calls are captured in the Observe dashboard. Register a project, instrument your app, and every request appears automatically with its inputs, outputs, cost, latency, and token usage.

---

## When to use

- **First-time setup**: Get traces flowing into the Observe dashboard so you can start monitoring production LLM calls.
- **Production monitoring**: See latency, cost, and token usage for every LLM call in one place instead of scraping logs.
- **Debugging**: Tie a user report or failure to a specific trace and span so you can reproduce and fix issues.
- **Baseline for other Observe features**: Sessions, evals, user tracking, and alerts all require traces to be set up first.

---

## How to

<Steps>
<Step title="Install the packages">
Install the core instrumentation package and the framework instrumentor for your LLM provider.

<CodeGroup titles={["Python", "JS/TS"]}>
```bash Python
pip install fi-instrumentation-otel traceAI-openai
```
```bash JS/TS
npm install @traceai/fi-core @traceai/openai
```
</CodeGroup>
</Step>

<Step title="Configure your environment">
Set environment variables so the SDK can connect to Future AGI. Get your API keys from the [dashboard](https://app.futureagi.com/dashboard/keys).

<CodeGroup titles={["Python", "JS/TS"]}>
```python Python
import os
os.environ["FI_API_KEY"] = "YOUR_API_KEY"
os.environ["FI_SECRET_KEY"] = "YOUR_SECRET_KEY"
For supported frameworks and more options, see the [Auto Instrumentation](/docs/
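If you prefer not to set credentials in code, the same two variables from the configuration step can be exported in the shell before launching your app. A minimal sketch; the placeholder values stand in for keys from the dashboard:

```shell
# Set Future AGI credentials for the current shell session.
# Replace the placeholders with the keys from your dashboard.
export FI_API_KEY="YOUR_API_KEY"
export FI_SECRET_KEY="YOUR_SECRET_KEY"
```

Exported variables apply only to the current session; add them to your shell profile or deployment environment to persist them.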

---

## Next Steps

<CardGroup cols={2}>
<Card title="Evals" icon="chart-line" href="/docs/observe/features/evals">
<Card title="Alerts & Monitors" icon="zap" href="/docs/observe/features/alerts">
Get notified when metrics cross a threshold.
</Card>

</CardGroup>