Translation Token Usage and History

Efficiently track and manage token usage across translation services, ensuring optimal resource allocation within predefined limits.

Overview

Token history logs consumption and remaining balances for translation services, calculating usage based on character count and service-specific multipliers.

Key Features

Tracks token usage across translation services.
Provides usage trends for efficient monitoring.
Helps allocate resources effectively within predefined limits.

Token Calculation

Tokens are calculated from the character count and a multiplier specific to the translation engine. For each engine, the formula is:

Total Tokens Consumed = Character Count × Multiplier

Engine Multipliers:
1. AWS: Multiplier = 0.75 (e.g., 100 characters consume 75 tokens)
2. ChatGPT: Multiplier = 1.5 (e.g., 100 characters consume 150 tokens)
3. DeepL: Multiplier = 1 (e.g., 100 characters consume 100 tokens)
4. Google Translate: Multiplier = 1 (e.g., 100 characters consume 100 tokens)
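The calculation above can be sketched as a small helper. This is an illustrative example, not the product's actual implementation; the function and dictionary names are hypothetical, while the multiplier values come from the list above.

```python
# Engine multipliers as documented above (names are illustrative).
ENGINE_MULTIPLIERS = {
    "aws": 0.75,
    "chatgpt": 1.5,
    "deepl": 1.0,
    "google_translate": 1.0,
}

def tokens_consumed(text: str, engine: str) -> float:
    """Total Tokens Consumed = Character Count × Multiplier."""
    return len(text) * ENGINE_MULTIPLIERS[engine]

# 100 characters sent through AWS consume 75 tokens.
print(tokens_consumed("a" * 100, "aws"))  # 75.0
```

For example, a 200-character request through ChatGPT would consume 200 × 1.5 = 300 tokens, while the same text through DeepL would consume 200 tokens.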

Limits and Extra Tokens

Organizations have predefined token limits for translations.
If the regular tokens run out, the system automatically draws on extra tokens, if available.
If neither regular nor extra tokens are available, the translation request fails and an error is displayed.
Token overages and extras are logged transparently for accountability.
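The fallback behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual system logic; the function, dictionary keys, and exception name are all hypothetical.

```python
class TokenExhaustedError(Exception):
    """Raised when neither regular nor extra tokens can cover a request."""

def charge_tokens(balance: dict, cost: float) -> dict:
    """Deduct `cost` tokens, falling back to extra tokens when the
    regular limit is exhausted (hypothetical balance structure)."""
    regular, extra = balance["regular"], balance["extra"]
    if regular >= cost:
        # Regular tokens cover the request.
        regular -= cost
    elif regular + extra >= cost:
        # Overflow into extra tokens; regular balance is drained first.
        extra -= cost - regular
        regular = 0
    else:
        # Neither pool can cover the request: the translation fails.
        raise TokenExhaustedError("Insufficient tokens for translation request")
    return {"regular": regular, "extra": extra}

# A 120-token request against 50 regular + 100 extra tokens
# drains the regular pool and takes 70 from the extra pool.
print(charge_tokens({"regular": 50, "extra": 100}, 120))
```

In a real deployment, each deduction would also be written to the token history log so that overages and extra-token usage remain transparently auditable.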

Dashboard Features

Displays tokens consumed each month.
Shows token categories and extra limits.
Provides an overview of remaining token balances.

The picture below shows where to view Token Usage.