Quickstart: Anthropic + UsageTap

Track AI usage with any LLM provider using simple REST calls. No SDK required.

5 min · Server-side · ~50 LOC

Building with AI Assistants?

Give Cursor, Copilot, or Claude our LLM Reference page and they can build the integration for you. Just paste this into your prompt:

(Reference: https://usagetap.com/llmreference — API flow, endpoints, auth, usage fields)

The reference contains all API details, auth patterns, and field specs your AI assistant needs.

1. Get your API key

Create an API key in Configure. Add it to your environment:

# .env.local
USAGETAP_API_KEY=ut_your_server_key
ANTHROPIC_API_KEY=sk-ant-your-key

Security: Never expose API keys in browser code or commit them to git.
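
If you want the server to fail fast when a key is missing rather than erroring mid-request, a small startup guard works. This is a sketch; the `requireEnv` helper is our own, not part of UsageTap:

```typescript
// Sketch: fail fast at startup if a required key is missing.
// requireEnv is a hypothetical helper, not part of the UsageTap API.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (matches the keys in .env.local above):
// const USAGETAP_API_KEY = requireEnv('USAGETAP_API_KEY');
// const ANTHROPIC_API_KEY = requireEnv('ANTHROPIC_API_KEY');
```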

2. Create your API route

The pattern is simple: begin → call LLM → end. Works with any provider.

// app/api/chat/route.ts - Anthropic with UsageTap
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();
const USAGETAP_API_KEY = process.env.USAGETAP_API_KEY!;
const BASE_URL = process.env.USAGETAP_BASE_URL ?? 'https://api.usagetap.com';

export async function POST(req: Request) {
  const { messages, customerId } = await req.json();

  // 1. Begin call
  const beginRes = await fetch(`${BASE_URL}/call_begin`, {
    method: 'POST',
    headers: {
      'x-api-key': USAGETAP_API_KEY,
      'Accept': 'application/vnd.usagetap.v1+json',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      customerId,
      feature: 'chat.send',
      requested: { standard: true, premium: true },
    }),
  });
  if (!beginRes.ok) {
    // Fail fast if the usage check itself failed (bad key, network, etc.)
    return Response.json({ error: 'Usage check failed' }, { status: 502 });
  }
  const begin = await beginRes.json();
  const { callId, allowed } = begin.data;

  try {
    // 2. Select model based on entitlements
    const model = allowed.premium ? 'claude-sonnet-4-20250514' : 'claude-3-5-haiku-20241022';

    // 3. Call Anthropic
    const response = await anthropic.messages.create({
      model,
      max_tokens: 1024,
      messages,
    });

    // 4. Report usage
    await fetch(`${BASE_URL}/call_end`, {
      method: 'POST',
      headers: {
        'x-api-key': USAGETAP_API_KEY,
        'Accept': 'application/vnd.usagetap.v1+json',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        callId,
        modelUsed: model,
        inputTokens: response.usage.input_tokens,
        responseTokens: response.usage.output_tokens,
      }),
    });

    const text = response.content[0].type === 'text' ? response.content[0].text : '';
    return Response.json({ content: text });
  } catch (error) {
    // Always finalize, even on error
    await fetch(`${BASE_URL}/call_end`, {
      method: 'POST',
      headers: {
        'x-api-key': USAGETAP_API_KEY,
        'Accept': 'application/vnd.usagetap.v1+json',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        callId,
        error: { code: 'ANTHROPIC_ERROR', message: String(error) },
      }),
    });
    throw error;
  }
}

Key points:
  • allowed.premium tells you which tier the customer can use
  • Always call /call_end — even when errors occur
  • Pass actual token counts from the provider response
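
Once you have more than one route, the begin → call → end pattern can be factored into a reusable wrapper. The sketch below is our own helper, not part of any UsageTap SDK — the `withUsageTap` name, its argument shape, and the injectable `fetchFn` (added so the helper can be unit-tested without a network) are assumptions; the endpoints, headers, and payload fields match the route code above:

```typescript
// Hypothetical helper: wraps any provider call in begin/end reporting.
type Usage = { modelUsed: string; inputTokens: number; responseTokens: number };

async function withUsageTap<T>(
  opts: {
    baseUrl: string;
    apiKey: string;
    customerId: string;
    feature: string;
    requested: Record<string, boolean>;
    fetchFn?: typeof fetch; // injectable for testing; defaults to global fetch
  },
  fn: (allowed: Record<string, boolean>) => Promise<{ result: T; usage: Usage }>,
): Promise<T> {
  const f = opts.fetchFn ?? fetch;
  const headers = {
    'x-api-key': opts.apiKey,
    'Accept': 'application/vnd.usagetap.v1+json',
    'Content-Type': 'application/json',
  };

  // 1. Begin call
  const beginRes = await f(`${opts.baseUrl}/call_begin`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      customerId: opts.customerId,
      feature: opts.feature,
      requested: opts.requested,
    }),
  });
  const { data } = await beginRes.json();
  const { callId, allowed } = data;

  try {
    const { result, usage } = await fn(allowed);
    // 2. Report usage on success
    await f(`${opts.baseUrl}/call_end`, {
      method: 'POST',
      headers,
      body: JSON.stringify({ callId, ...usage }),
    });
    return result;
  } catch (error) {
    // 3. Always finalize, even on error — then rethrow
    await f(`${opts.baseUrl}/call_end`, {
      method: 'POST',
      headers,
      body: JSON.stringify({
        callId,
        error: { code: 'PROVIDER_ERROR', message: String(error) },
      }),
    });
    throw error;
  }
}
```

The route handler then shrinks to one `withUsageTap(...)` call whose callback selects the model from `allowed` and returns the Anthropic result plus its token counts.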

3. Test it

curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"customerId":"user_123","messages":[{"role":"user","content":"Hello!"}]}'

Done!

Check your Dashboard to see usage metrics, costs, and customer analytics in real time.