Metrics & Analytics
Real-time monitoring of latency, token usage, cost, and success rate.
| Metric | Value |
|---|---|
| Avg Latency | 1434ms |
| Tokens / Request | 1,838 |
| Success Rate | 95.5% |
| Cost / Request | $0.0100 |
Model Breakdown
| Model | Calls | Avg Latency | Tokens | Cost | Success |
|---|---|---|---|---|---|
| gpt-4o | 1,247 | 1420ms | 2.5M | $14.70 | 98.2% |
| gpt-4o-mini | 3,891 | 380ms | 5.1M | $3.07 | 99.1% |
| claude-3.5-sonnet | 892 | 1180ms | 1.8M | $8.01 | 97.8% |
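Cost efficiency per model can be derived from the breakdown above by dividing total cost by total tokens. A minimal sketch (table values are hard-coded here; `costPerMTok` is an illustrative helper, not part of the library):

```typescript
// Rows from the Model Breakdown table above.
const modelStats = [
  { model: 'gpt-4o',            tokens: 2_500_000, cost: 14.70 },
  { model: 'gpt-4o-mini',       tokens: 5_100_000, cost: 3.07 },
  { model: 'claude-3.5-sonnet', tokens: 1_800_000, cost: 8.01 },
]

// Blended cost per million tokens for a given model row.
function costPerMTok(row: { tokens: number; cost: number }): number {
  return (row.cost / row.tokens) * 1_000_000
}

for (const row of modelStats) {
  console.log(`${row.model}: $${costPerMTok(row).toFixed(2)} / 1M tokens`)
}
// gpt-4o blends to $5.88 / 1M tokens, gpt-4o-mini to $0.60, claude-3.5-sonnet to $4.45
```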
Tool Performance
| Tool | Calls | Avg Latency | Success | Errors |
|---|---|---|---|---|
| web-search | 2,340 | 1850ms | 94.2% | 136 |
| calculator | 1,580 | 12ms | 99.9% | 2 |
| code-exec | 890 | 450ms | 96.5% | 31 |
| db-query | 670 | 85ms | 98.1% | 13 |
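The Errors column is consistent with calls × (1 − success rate), rounded to the nearest integer. A quick sanity check (values hard-coded from the table; `expectedErrors` is an illustrative helper):

```typescript
// Rows from the Tool Performance table above.
const toolStats = [
  { tool: 'web-search', calls: 2340, success: 0.942, errors: 136 },
  { tool: 'calculator', calls: 1580, success: 0.999, errors: 2 },
  { tool: 'code-exec',  calls: 890,  success: 0.965, errors: 31 },
  { tool: 'db-query',   calls: 670,  success: 0.981, errors: 13 },
]

// Expected error count: failed calls, rounded to the nearest integer.
function expectedErrors(row: { calls: number; success: number }): number {
  return Math.round(row.calls * (1 - row.success))
}

for (const row of toolStats) {
  console.log(`${row.tool}: expected ${expectedErrors(row)}, reported ${row.errors}`)
}
// Every row matches: 136, 2, 31, 13
```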
Integration Code
```ts
import { createMetrics, PrometheusExporter, DatadogExporter } from 'agent-tools-kit/observability'

const metrics = createMetrics({
  exporters: [
    new PrometheusExporter({ port: 9090 }),
    new DatadogExporter({ apiKey: process.env.DD_API_KEY }),
  ],
  // Built-in metric collectors
  collect: ['latency', 'tokens', 'cost', 'success_rate', 'tool_usage'],
  // Custom dimensions
  dimensions: ['model', 'tool', 'agent_id', 'user_tier'],
  // Alerting rules
  alerts: [
    { metric: 'latency_p99', threshold: 5000, action: 'pagerduty' },
    { metric: 'success_rate', below: 95, action: 'slack', channel: '#agent-alerts' },
    { metric: 'cost_per_hour', threshold: 50, action: 'email' },
  ],
})

agent.use(metrics.middleware())

// Custom metric tracking
metrics.record('custom_score', 0.95, { model: 'gpt-4o', task: 'summarization' })

// Query metrics programmatically
const p99 = await metrics.query('latency', { percentile: 99, window: '1h' })
```
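Conceptually, a percentile query like the one above reduces to a nearest-rank computation over the latency samples recorded in the window. A minimal sketch of that math (`percentile` is an illustrative helper, not the library's implementation):

```typescript
// Nearest-rank percentile: sort samples and pick the value at rank ceil(p/100 * n).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(0, rank - 1)]
}

// Hypothetical latency samples (ms) from a 1h window.
const latencies = [120, 340, 980, 1420, 1850, 2100, 4900, 5200, 610, 760]
console.log(percentile(latencies, 99)) // → 5200 (the max, since ceil(0.99 * 10) = 10)
```

Production metric stores usually approximate this with histograms or sketches rather than sorting raw samples, but the nearest-rank definition is the reference behavior.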