Aicontextator is a simple, practical CLI that bundles a project's files into a single LLM-ready context string. It respects `.gitignore` and a `.contextignore` file, estimates tokens with `tiktoken`, and can split large outputs into multiple parts.
Stop manually finding, copying, and pasting code into your AI prompts. aicontextator automates the entire process, letting you build a comprehensive context from your project files with a single command, right from your terminal.
- Smart file filtering: respects `.gitignore` automatically.
- Custom ignore rules: use a `.contextignore` file for project-specific exclusions.
- Structured JSON output: generate a detailed JSON file with project structure, file contents, token counts, and security warnings for programmatic use with `--format json`.
- Interactive mode: select or deselect files interactively using arrow keys.
- Built-in secret scanning: proactively scans the content of included files with the detect-secrets engine to identify and warn about potential secrets (such as API keys) before they are added to the context (see the sketch after this list).
- Secure defaults: excludes `.env` and `.env.*` files by default to reduce secret leakage.
- Token-aware: estimates tokens using `tiktoken`.
- Automatic splitting: splits output into multiple parts when a per-part token limit is reached.
- Clipboard-ready: copy the first part to the clipboard with `--copy`.
- Tree view: preview included files with `--tree-only`, or include the tree inside the context with `--tree`.
- Configurable: add extra exclude patterns, restrict by extensions, and tune output.
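For reference, here is a minimal sketch of how the detect-secrets engine flags potential secrets in a file, using the upstream `detect_secrets` API; the scanned path is only an example, and aicontextator's internal integration may differ:

```python
import json

from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

# Illustrative sketch of the upstream detect-secrets API; aicontextator's
# internal integration may differ. The path below is just an example.
secrets = SecretsCollection()
with default_settings():
    secrets.scan_file("src/settings.py")

# Findings are keyed by filename and include the detector type and line number.
print(json.dumps(secrets.json(), indent=2))
```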
Install and use easily with uv:

```bash
uv tool install git+https://github.com/ILDaviz/aicontextator
aicontextator /path/to/project
```
Manual (pip editable):

```bash
git clone https://github.com/ILDaviz/aicontextator
cd aicontextator
pip install -e .
aicontextator /path/to/project
```
Basic usage:

```bash
aicontextator                     # bundle the current directory into context.txt
aicontextator --copy              # also copy the first part to the clipboard
aicontextator -o custom.txt       # write to a custom output filename
aicontextator --tree              # include a tree view inside the context
aicontextator --tree-only         # print only the tree and exit
aicontextator --prompt-no-header  # omit the descriptive header
```
Interactive mode, triggered by the `--interactive` flag, lets you select files through a keyboard-navigable interface:

```bash
aicontextator --interactive
```
To generate structured JSON output for programmatic use:

```bash
aicontextator --format json -o context_data.json --count-tokens
```
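A hedged sketch of consuming that JSON from Python; the key names used below (`files`, `path`, `token_count`, `warnings`) are illustrative assumptions rather than a documented schema, so inspect the generated file for the actual structure:

```python
import json

# Illustrative only: the key names are assumptions, not a documented schema.
# Inspect context_data.json to see the actual structure.
with open("context_data.json", encoding="utf-8") as f:
    data = json.load(f)

for entry in data.get("files", []):
    path = entry.get("path")
    tokens = entry.get("token_count")
    warnings = entry.get("warnings", [])
    print(f"{path}: ~{tokens} tokens, {len(warnings)} security warning(s)")
```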
- `ROOT_DIR`: the project directory to analyze (defaults to the current directory).
- `--output, -o`: output filename. If splitting, parts are numbered (e.g., `context-part-1.txt`). Defaults to `context.txt` or `context.json`.
- `--format`: the output format. Can be `text` (default) or `json`.
- `--exclude, -e`: extra exclusion patterns (gitignore-style). Can be repeated.
- `--ext`: include only specific extensions (repeatable).
- `--copy, -c`: copy to clipboard.
- `--count-tokens`: enable token counting (uses `tiktoken`).
- `--max-tokens`: maximum tokens per output part (if exceeded, the output is split into parts). Note: this does not perform hard splitting inside single files. If a single file exceeds `--max-tokens` by itself, it is still placed whole into a part (and that part may exceed the limit).
- `--warn-tokens`: print a warning when a part exceeds this token threshold.
- `--interactive, -i`: interactive mode; select or deselect files using arrow keys, and quit immediately by pressing ESC.
- `--tree`: include the tree view in the generated context.
- `--tree-only`: print only the tree and exit.
- `--prompt-no-header`: do not prepend the descriptive header.
- Token counting uses `tiktoken`, the library recommended for token estimates for OpenAI models. Pass `--count-tokens` to get a token report.
- `--max-tokens` sets a per-part limit. The tool aggregates files into a part until adding the next file would exceed the limit, then starts a new part; it does not (by default) split file contents into multiple chunks (see the sketch after this list).
- For strategies and best practices on splitting text for LLMs, libraries like LangChain offer helpful patterns (e.g., document chunking and token-based splitters).
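A minimal sketch of that aggregation strategy, assuming tiktoken's `cl100k_base` encoding; this illustrates the behaviour described above and is not aicontextator's actual implementation:

```python
import tiktoken

def split_into_parts(files: dict[str, str], max_tokens: int) -> list[list[str]]:
    """Greedily group files into parts of at most max_tokens tokens each.

    Illustrative sketch only. A single oversized file still gets placed
    whole into a part, which may then exceed the limit.
    """
    enc = tiktoken.get_encoding("cl100k_base")
    parts: list[list[str]] = []
    current: list[str] = []
    current_tokens = 0

    for path, content in files.items():
        n = len(enc.encode(content))
        # Close the current part if adding this file would exceed the limit.
        if current and current_tokens + n > max_tokens:
            parts.append(current)
            current, current_tokens = [], 0
        current.append(path)
        current_tokens += n

    if current:
        parts.append(current)
    return parts
```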
Token-related examples:

```bash
aicontextator --count-tokens

# Creates context-part-1.txt, context-part-2.txt, ...
aicontextator --count-tokens --max-tokens 100000 -o context.txt

aicontextator --count-tokens --warn-tokens 80000
```

Filtering examples:

```bash
aicontextator --exclude "tests/"
aicontextator --ext .py --ext .md
```
- The tool respects `.gitignore` and supports a `.contextignore` file for extra ignore patterns (example below).
- Default exclude patterns include `.env` and `.env.*` to avoid leaking environment secrets. Always double-check your `.gitignore` / `.contextignore` to be safe.
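A `.contextignore` uses gitignore-style patterns; the entries below are purely illustrative examples of things you might exclude, not defaults shipped with the tool:

```gitignore
# Example .contextignore (illustrative only)
node_modules/
dist/
build/
*.lock
*.min.js
```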
The generated context begins with a descriptive header (omit it with `--prompt-no-header`), followed by the tree view (with `--tree`) and the delimited file contents:

```text
The following text is a collection of source code files from a software project.
Each file is delimited by a header line starting with "--- FILE: [filepath]".
Use only this content as the source of truth when answering questions.

Project structure:
myapp/
├── README.md
├── src
│   └── main.py
└── tests
    └── test_main.py

<<<
--- FILE: README.md ---
# MyApp
Simple example project.

--- FILE: src/main.py ---
print("hello world")

--- FILE: tests/test_main.py ---
def test_hello():
    assert True
>>>
```
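Because each file is introduced by a `--- FILE: <path> ---` line, the generated context can be split back into per-file chunks with a few lines of Python; this sketch assumes the delimiter format shown above and the default `context.txt` output name:

```python
import re

# Split a generated context back into {path: content}, based on the
# "--- FILE: <path> ---" delimiter shown above. Illustrative sketch only.
FILE_HEADER = re.compile(r"^--- FILE: (.+?) ---$", re.MULTILINE)

def parse_context(text: str) -> dict[str, str]:
    matches = list(FILE_HEADER.finditer(text))
    files: dict[str, str] = {}
    for i, match in enumerate(matches):
        start = match.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        chunk = text[start:end].strip()
        # Drop the closing ">>>" marker after the last file, if present.
        if chunk.endswith(">>>"):
            chunk = chunk[: -len(">>>")].rstrip()
        files[match.group(1)] = chunk
    return files

with open("context.txt", encoding="utf-8") as f:
    for path, content in parse_context(f.read()).items():
        print(f"{path}: {len(content.splitlines())} lines")
```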
To set up a development environment:

- Clone:

  ```bash
  git clone https://github.com/ILDaviz/aicontextator
  cd aicontextator
  ```

- Create a virtualenv and install the dev dependencies:

  ```bash
  uv venv
  source .venv/bin/activate
  uv pip install -e '.[dev]'
  ```

- Run the tests, linter, and formatter:

  ```bash
  uv run test
  uv run lint
  uv run format
  ```
- Planned: automatic summarization of long files to reduce token usage.
This project is licensed under the MIT license.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.