TAG is a framework for implementing decentralized hierarchical multi-agent reinforcement learning systems. It enables the creation of hierarchical agents with arbitrary depth through a novel LevelEnv abstraction, which presents each level of the hierarchy as an environment to the level above it.
You can find the paper here: https://arxiv.org/abs/2502.15425
- Build hierarchical multi-agent systems with arbitrary depth
- Standardized information flow between levels via the LevelEnv abstraction
- Support for heterogeneous agents across different levels
- Flexible communication protocol between hierarchy levels
- Compatible with standard RL environments (OpenAI Gym, PettingZoo)
- Built-in implementations of common baseline agents (PPO, MAPPO)
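To make the LevelEnv idea concrete, here is a minimal, self-contained Python sketch of the pattern: actions flow down the hierarchy and messages flow up, so each level sees the level below as an environment. `ToyBaseEnv`, `ToyLevelEnv`, and `RandomAgent` are illustrative stand-ins, not TAG's actual classes; see the tutorials below for the real API.

```python
# Conceptual sketch of the LevelEnv pattern. ToyBaseEnv, ToyLevelEnv,
# and RandomAgent are illustrative stand-ins, NOT TAG's actual classes.
import random


class ToyBaseEnv:
    """Stand-in for a standard RL environment at the bottom of the stack."""

    def step(self, action):
        reward = 1.0 if action == "good" else 0.0
        message = f"obs(action={action})"  # observation reported upward
        return message, reward


class ToyLevelEnv:
    """One hierarchy level, presented as an environment to the level above."""

    def __init__(self, below, agent):
        self.below = below  # the LevelEnv (or base env) directly underneath
        self.agent = agent  # the agent living at this level

    def step(self, action_from_above):
        # Top-down: the upper level's action conditions this level's agent.
        action = self.agent.act(action_from_above)
        # Bottom-up: execute below and forward the resulting message upward.
        message, reward = self.below.step(action)
        return message, reward


class RandomAgent:
    """Placeholder policy; a real agent (e.g. PPO) would learn here."""

    def act(self, goal):
        return random.choice(["good", "bad"])


# Stacking levels yields arbitrary depth: each level only ever sees the
# one directly below it, presented as an environment.
level0 = ToyLevelEnv(ToyBaseEnv(), RandomAgent())
level1 = ToyLevelEnv(level0, RandomAgent())
print(level1.step("top-level goal"))
```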
TAG offers several unique advantages over existing hierarchical reinforcement learning frameworks:
- Arbitrary Hierarchy Depth: Unlike traditional approaches limited to two-level manager/worker architectures, TAG supports hierarchies of arbitrary depth through the novel LevelEnv abstraction.
- Fully Decentralized Design: TAG maintains agent autonomy at every level, allowing each agent to learn independently without requiring centralized training. This enables better scalability compared to centralized approaches like CTDE (Centralized Training, Decentralized Execution).
- Standardized Information Flow: The LevelEnv abstraction creates a uniform interface between hierarchy levels, facilitating clear bidirectional communication: bottom-up through messages and top-down through actions.
- Heterogeneous Agent Support: TAG enables seamless integration of different agent types (e.g., PPO, MAPPO, LLM agents) across various levels, allowing you to match agent capabilities to the requirements of each level.
- Loose Coupling Between Levels: Each level in the hierarchy operates with relative independence, reducing complexity and allowing for more efficient learning across different temporal scales.
- Biological Inspiration: Based on principles from TAME (Technological Approach to Mind Everywhere) that reflect how biological systems organize hierarchically, TAG creates more natural coordination mechanisms compared to traditional approaches.
These features make TAG particularly well-suited for complex multi-agent tasks requiring coordination across different scales and abstraction levels.
Install the package with:

```bash
pip install tame-agent
```
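To check that the installation succeeded, a quick import should work (this assumes the distribution exposes a `tame` package, matching the `src/tame/` layout referenced below):

```python
# Sanity check; assumes the installed package imports as `tame`,
# matching the src/tame/ layout referenced elsewhere in this README.
import tame
print("TAG installed at:", tame.__file__)
```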
For development installation:
```bash
# Clone the repository
git clone https://github.com/yourusername/tame
cd tame

# Create and activate a conda environment (recommended)
conda create --name tag python=3.10
conda activate tag

# Install in editable mode with development requirements
pip install -e .[dev]

# Install pre-commit hooks (for development)
pre-commit install -t pre-commit -t pre-push
```
We provide a series of tutorials in the tutorials/ directory to help you get started with TAG:
- Intro to the concepts of TAG: An introduction to the core ideas behind the framework
  - Understanding hierarchical RL
  - The LevelEnv abstraction
  - Information flow between levels
  - Communication mechanisms
- Quick start: How to quickly configure and launch experiments with the provided agents
  - Launching an experiment
  - How the configuration of an experiment works
- Build a hierarchy: A step-by-step guide to building your first hierarchical agent
  - Creating a simple 2-level hierarchy
  - Configuring agents at each level
  - Basic training and evaluation
- Make your agents: How to create your own agents for the hierarchy (a hedged skeleton follows this list)
  - Custom BaseAgents
  - Custom LevelAgents
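As a rough illustration of what a custom agent involves, here is a minimal Python skeleton. The class and method names below are assumptions for illustration only, not TAG's actual interfaces; the real BaseAgent and LevelAgent APIs are covered in the "Make your agents" tutorial and in src/tame/agents/.

```python
# Hypothetical skeleton of a custom agent. The class and method names
# are illustrative assumptions, NOT TAG's actual interfaces; see the
# "Make your agents" tutorial and src/tame/agents/ for the real APIs.

class MyCustomAgent:
    """A trivial agent: always picks action 0 and learns nothing."""

    def act(self, observation):
        # Map the observation (plus any message from below) to an action.
        return 0

    def update(self, transition):
        # Consume a transition (obs, action, reward, next_obs) to learn.
        pass
```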
For specific implementations and environment-specific examples, check out the codebase under src/tame/agents/ and src/exp_configs/. These include:
- MPE Simple Spread environment (mpe_simple_spread/)
- VMAS Balance environment (vmas_balance/)
- Various agent implementations (PPO, MAPPO, etc.)
We use pdoc for API documentation. You can find the full API documentation here.
We provide several agent implementations in the src/tame/agents/ directory. You can run experiments using the scripts/run_experiment.py script:
```bash
python scripts/run_experiment.py \
    --config-path src/exp_configs/mpe_simple_spread \
    --config-name cfg_3ppo_221 \
    --n-agents 4 \
    --seed 42 \
    --cuda 0
```
We welcome contributions! Please open a Pull Request or an Issue.
If you use TAG in your research, please cite:
```bibtex
@article{tag2025,
  title={TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning},
  author={Giuseppe Paolo and Abdelhakim Benechehab and Hamza Cherkaoui and Albert Thomas and Balázs Kégl},
  journal={Under review at the International Conference on Machine Learning (ICML)},
  year={2025}
}
```
This project is licensed under the MIT License - see the LICENSE file for details.