"gpt-token-counter-live" is a Visual Studio Code extension that displays the token count of the selected text, or of the entire open document, in the status bar. The token count is determined per model family: GPT via tiktoken, Claude via Anthropic's tokenizer, and Gemini via a local approximation.
This tool gives you a quick token count right in VS Code while you work on prompt files. I personally needed it a lot while working on LLM projects, so I decided to make one for myself. I hope it helps you too!
- Token Count Display: The extension provides a real-time token count of the currently selected text or the entire document if no text is selected. The token count is displayed on the right side of the status bar.
count_display.mp4
- Auto-Update: The token count is automatically updated as you edit or select text, ensuring that the count is always accurate.
- Easy Activation: The extension is activated as soon as VS Code starts up, so you don't have to manually activate it every time you start your editor.
- Model Family Selection: Choose a model family instead of individual versions. Click the token count in the status bar and select one of: GPT (OpenAI), Claude (Anthropic), or Gemini (Google AI).
- GPT: Uses tiktoken's `encoding_for_model('gpt-5')` to follow the latest GPT mapping. Falls back to `o200k_base`, then `cl100k_base`, when needed.
- Claude: Uses Anthropic's tokenizer-based approximation for the Claude family (not an exact, model-official tokenizer for all Claude versions). Counts are close but may differ slightly from server-side accounting.
- Gemini: No public local tokenizer is available. Approximated using `o200k_base`/`cl100k_base` when available, otherwise ~4 characters per token.
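The fallback chain described above can be sketched as follows. This is an illustrative sketch, not the extension's actual code: the `Tokenizer` callback shape and `countTokens` helper are assumptions, standing in for the real tiktoken encoders.

```typescript
// Illustrative sketch of the per-family fallback chain.
// Each candidate tokenizer returns a token count or throws if unavailable.
type Tokenizer = (text: string) => number;

// Try each tokenizer in order; if all fail, fall back to the
// ~4-characters-per-token heuristic used for Gemini.
function countTokens(text: string, candidates: Tokenizer[]): number {
  for (const tokenize of candidates) {
    try {
      return tokenize(text);
    } catch {
      // Tokenizer unavailable (e.g. encoding failed to load); try the next one.
    }
  }
  return Math.ceil(text.length / 4); // last-resort approximation
}
```

With the real `tiktoken` package, the candidate list for GPT would wrap `encoding_for_model('gpt-5')`, then the `o200k_base` and `cl100k_base` encodings, in that order.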
selection.mp4
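The selection behavior shown in the demo boils down to a simple rule: count the selection if there is one, otherwise the whole document. A minimal sketch of that rule (the `EditorState` shape and `textToCount` name are illustrative, not the extension's real code, which reads these values from the VS Code editor API):

```typescript
// Illustrative sketch of choosing what to count.
interface EditorState {
  documentText: string;
  selectedText: string; // empty string when nothing is selected
}

// Prefer the selection; fall back to the full document text.
function textToCount(editor: EditorState): string {
  return editor.selectedText.length > 0 ? editor.selectedText : editor.documentText;
}
```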
- Visual Studio Code: The extension is developed for VS Code and will not work with other editors.
- It is also hosted on the Open VSX Registry.
The extension does not currently add any VS Code settings.
There are currently no known issues. If you encounter a problem, please report it on the issue tracker.
- Switch to model families in the UI: GPT, Claude, Gemini.
- Add Gemini token counting (approximate: `o200k_base`/`cl100k_base`, fallback ~4 chars/token).
- GPT tokenizer now uses `encoding_for_model('gpt-5')` with graceful fallbacks.
- Updated `tiktoken` to 1.0.22.
- Added support for new OpenAI models: o3-mini, o1, o1-mini, gpt-4o-mini
- Updated to tiktoken 1.0.20
- Updated Claude models to only include Claude-3.5, Claude-3.7
- Removed older models: text-davinci-003, davinci, babbage
- Removed Claude-2 and Claude-3
- Moved from the `gpt-tokenizer` package to `tiktoken`.
- Fixed the special tokens issue.
- Modified the code to increase security.
- Added support for GPT-4o tokenizer.
- Removed unused models from the tokenizer list.
- Added Claude-3 as option using approximate token count.
- Added support for Claude tokenizer.
- Initial release of gpt-token-counter-live.
- Provides a token count in the status bar for the selected text or the entire document.
- Automatically updates the token count as text is edited or selected.
- Allows the user to select the model to use for token counting.