Token

A token is the basic unit of text that AI models process. In English, one token is roughly three-quarters of a word; the word "hamburger", for example, typically splits into 3 tokens. AI model pricing and context windows are both measured in tokens.

How It Works

Tokenization breaks text into pieces the model can process. Common English words are often a single token, while longer or less common words are split into multiple tokens. Code tends to use more tokens per character than prose because of special syntax. Token counts matter for two reasons: pricing (you pay per token) and context limits (models can only process a fixed number of tokens per request). Input tokens (your prompt) and output tokens (the model's response) are often priced differently.
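The arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope estimator using the common rule of thumb of about four characters per token for English prose; the per-million-token prices are hypothetical, not Chapeta's or any provider's actual rates, and a real tokenizer will produce different counts.

```python
# Rough token and cost estimator. The 4-characters-per-token ratio is a
# common rule of thumb for English prose, not an exact tokenizer.
CHARS_PER_TOKEN = 4

# Hypothetical prices in dollars per million tokens (illustrative only).
PRICE_INPUT = 3.00    # input tokens (your prompt)
PRICE_OUTPUT = 15.00  # output tokens (the model's response)

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a piece of English text."""
    return max(1, round(len(text) / CHARS_PER_TOKEN))

def estimate_cost(input_text: str, output_text: str) -> float:
    """Estimate request cost; input and output tokens are priced differently."""
    total = (estimate_tokens(input_text) * PRICE_INPUT
             + estimate_tokens(output_text) * PRICE_OUTPUT)
    return total / 1_000_000

prompt = "Summarize the following article in three sentences."
print(estimate_tokens(prompt))  # 51 characters -> about 13 tokens
```

Note that output tokens dominate the cost in this sketch because they are priced several times higher than input tokens, which mirrors how many providers price their models.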

Token in Chapeta

Chapeta does not add markup to token costs. You pay the provider price per token through your OpenRouter API key. Chapeta shows actual prompt token counts from the API so you can monitor costs accurately. When switching between models with different context windows, you can see exactly how much of the context budget your conversation uses.

See Token in action with Chapeta