Completion Token Usage & Cost
By default LiteLLM returns token usage in all completion requests (See here)
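For example, the usage block on a completion response can be read directly (a minimal sketch; assumes OPENAI_API_KEY is set in the environment):
from litellm import completion

# token usage is returned on every completion response (OpenAI-compatible format)
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response.usage)  # prompt_tokens, completion_tokens, total_tokens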
However, we also expose 8 helper functions + [NEW] an API to calculate token usage across providers:
- `encode`: This encodes the text passed in, using the model-specific tokenizer. Jump to code
- `decode`: This decodes the tokens passed in, using the model-specific tokenizer. Jump to code
- `token_counter`: This returns the number of tokens for a given input - it uses the tokenizer based on the model, and defaults to tiktoken if no model-specific tokenizer is available. Jump to code
- `cost_per_token`: This returns the cost (in USD) for prompt (input) and completion (output) tokens. It uses the live list from `api.litellm.ai`. Jump to code
- `completion_cost`: This returns the overall cost (in USD) for a given LLM API call. It combines `token_counter` and `cost_per_token` to return the cost for that query (counting both cost of input and output). Jump to code
- `get_max_tokens`: This returns a dictionary for a specific model, with its max_tokens, input_cost_per_token and output_cost_per_token. Jump to code
- `model_cost`: This returns a dictionary for all models, with their max_tokens, input_cost_per_token and output_cost_per_token. It uses the `api.litellm.ai` call shown below. Jump to code
- `register_model`: This registers new / overrides existing models (and their pricing details) in the model cost dictionary. Jump to code
- `api.litellm.ai`: Live token + price count across all supported models. Jump to code
📣 This is a community maintained list. Contributions are welcome! ❤️
Example Usage
1. encode
Encoding has model-specific tokenizers for anthropic, cohere, llama2 and openai. If an unsupported model is passed in, it'll default to using tiktoken (openai's tokenizer).
from litellm import encode

sample_text = "Hellö World, this is my input string!"
# openai tokenizer
openai_tokens = encode(model="gpt-3.5-turbo", text=sample_text)
print(openai_tokens)
2. decode
Decoding is supported for anthropic, cohere, llama2 and openai.
from litellm import encode, decode

sample_text = "Hellö World, this is my input string!"
# openai tokenizer
openai_tokens = encode(model="gpt-3.5-turbo", text=sample_text)
openai_text = decode(model="gpt-3.5-turbo", tokens=openai_tokens)
assert openai_text == sample_text  # decoding the tokens recovers the original string
3. token_counter
from litellm import token_counter
messages = [{"user": "role", "content": "Hey, how's it going"}]
print(token_counter(model="gpt-3.5-turbo", messages=messages))
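`token_counter` also accepts raw text via the `text` parameter, which is handy when you aren't working with a chat message list (a minimal sketch):
from litellm import token_counter

# count tokens for a plain string instead of a messages list
print(token_counter(model="gpt-3.5-turbo", text="Hey, how's it going"))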
4. cost_per_token
from litellm import cost_per_token
prompt_tokens =  5
completion_tokens = 10
prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar = cost_per_token(model="gpt-3.5-turbo", prompt_tokens=prompt_tokens, completion_tokens=completion_tokens)
print(prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar)
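Since `completion_cost` (below) is essentially `token_counter` + `cost_per_token`, you can also combine the two yourself, e.g. to estimate the price of a prompt before sending it (a minimal sketch; the prompt text is a placeholder):
from litellm import token_counter, cost_per_token

prompt = "Hey, how's it going"
prompt_tokens = token_counter(model="gpt-3.5-turbo", text=prompt)
# only the prompt side is priced here, so completion_tokens is 0
prompt_cost, _ = cost_per_token(model="gpt-3.5-turbo", prompt_tokens=prompt_tokens, completion_tokens=0)
print(f"estimated prompt cost: ${prompt_cost:.10f}")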
5. completion_cost
- Input: Accepts a `litellm.completion()` response OR prompt + completion strings
- Output: Returns a `float` of cost for the `completion` call
litellm.completion()
from litellm import completion, completion_cost

messages = [{"role": "user", "content": "Hey, how's it going"}]
response = completion(
    model="bedrock/anthropic.claude-v2",
    messages=messages,
    request_timeout=200,
)
# pass your response from completion to completion_cost
cost = completion_cost(completion_response=response)
formatted_string = f"${float(cost):.10f}"
print(formatted_string)
prompt + completion strings
from litellm import completion_cost
cost = completion_cost(model="bedrock/anthropic.claude-v2", prompt="Hey!", completion="How's it going?")
formatted_string = f"${float(cost):.10f}"
print(formatted_string)
6. get_max_tokens
- Input: Accepts a model name - e.g. `gpt-3.5-turbo` (to get a complete list, call `litellm.model_list`)
- Output: Returns a dict object containing the max_tokens, input_cost_per_token, output_cost_per_token
 
from litellm import get_max_tokens 
model = "gpt-3.5-turbo"
print(get_max_tokens(model)) # {'max_tokens': 4000, 'input_cost_per_token': 1.5e-06, 'output_cost_per_token': 2e-06}
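One common use is checking whether a prompt fits in a model's context window before sending it. This sketch combines get_max_tokens and token_counter, assuming the dict shape shown above:
from litellm import get_max_tokens, token_counter

model = "gpt-3.5-turbo"
messages = [{"role": "user", "content": "Hey, how's it going"}]

# compare the prompt size against the model's context window
context_window = get_max_tokens(model)["max_tokens"]
prompt_tokens = token_counter(model=model, messages=messages)
print(f"{prompt_tokens}/{context_window} tokens used by the prompt")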
7. model_cost
- Output: Returns a dict object containing the max_tokens, input_cost_per_token, output_cost_per_token for all models on the community-maintained list
 
from litellm import model_cost 
print(model_cost) # {'gpt-3.5-turbo': {'max_tokens': 4000, 'input_cost_per_token': 1.5e-06, 'output_cost_per_token': 2e-06}, ...}
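Because it is a plain dict, you can filter or sort it locally, e.g. to list the cheapest models by input price (a minimal sketch; assumes entries carry an input_cost_per_token key and skips those that don't):
from litellm import model_cost

# sort models by input price, cheapest first
priced = {m: v for m, v in model_cost.items() if "input_cost_per_token" in v}
for name, info in sorted(priced.items(), key=lambda kv: kv[1]["input_cost_per_token"])[:5]:
    print(name, info["input_cost_per_token"])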
8. register_model
- Output: Returns updated model_cost dictionary + updates litellm.model_cost with model details.
 
import litellm

litellm.register_model({
    "gpt-4": {
        "max_tokens": 8192,
        "input_cost_per_token": 0.00002,
        "output_cost_per_token": 0.00006,
        "litellm_provider": "openai",
        "mode": "chat",
    },
})
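After registering, the updated pricing is picked up by the cost helpers, e.g. cost_per_token (a minimal sketch using the prices registered above):
from litellm import cost_per_token

# uses the gpt-4 pricing registered above
prompt_cost, completion_cost_usd = cost_per_token(model="gpt-4", prompt_tokens=10, completion_tokens=10)
print(prompt_cost, completion_cost_usd)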
9. api.litellm.ai
Example Curl Request
curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
{
    "input_cost_per_token": 1.102e-05,
    "max_tokens": 100000,
    "model": "claude-2",
    "output_cost_per_token": 3.268e-05
}
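The same endpoint can be queried from Python, e.g. with requests (a minimal sketch; only the get_max_tokens route shown above is assumed):
import requests

# query the live price/limit list for a single model
resp = requests.get("https://api.litellm.ai/get_max_tokens", params={"model": "claude-2"})
print(resp.json())  # e.g. {"max_tokens": 100000, "input_cost_per_token": ..., ...}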