
AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time (EMNLP 2025 Main)

Junyu Zhang, Runpei Dong, Han Wang, Xuying Ning, Haoran Geng, Peihao Li, Xialin He, Yutong Bai, Jitendra Malik, Saurabh Gupta, Huan Zhang

University of Illinois Urbana-Champaign and UC Berkeley

*: Equal contribution

🏠 About

We present AlphaOne (𝛼1), a universal framework for modulating reasoning progress in large reasoning models (LRMs) at test time. 𝛼1 first introduces the 𝛼 moment, which marks the end of a thinking phase scaled by a universal parameter 𝛼. Within this scaled pre-𝛼-moment phase, it dynamically schedules slow-thinking transitions by modeling the insertion of reasoning transition tokens as a Bernoulli stochastic process. After the 𝛼 moment, 𝛼1 deterministically terminates slow thinking with the end-of-thinking token, thereby fostering fast reasoning and efficient answer generation.

This approach unifies and generalizes existing monotonic scaling methods by enabling flexible and dense slow-to-fast reasoning modulation, while offering critical insights into the joint optimization of reasoning capabilities and computational efficiency.
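
For intuition, here is a minimal Python sketch of this schedule. It is a conceptual illustration, not the repository's implementation: the average thinking budget n_avg, the maximum insertion probability p_max, and the choice of "wait" as the slow-thinking transition token are illustrative assumptions, and the linear annealing mirrors the "linear" naming of the released evaluation scripts.

import random

# Conceptual sketch of alpha1-style test-time modulation (illustrative only,
# not the repository's implementation). Assumptions: n_avg is an average
# thinking-token budget, "wait" is the slow-thinking transition token, and the
# insertion probability decays linearly to zero at the alpha moment.
def alpha_one_action(step, alpha=1.4, n_avg=4000, p_max=0.4):
    """Decide what to emit at a given decoding step inside the thinking phase."""
    alpha_moment = alpha * n_avg  # scaled length of the thinking phase
    if step < alpha_moment:
        # Pre-alpha moment: insert a transition token as a Bernoulli trial
        # whose probability is annealed linearly toward zero.
        p = p_max * (1.0 - step / alpha_moment)
        return "wait" if random.random() < p else "continue"
    # Post-alpha moment: deterministically terminate slow thinking.
    return "</think>"

A larger 𝛼 widens the stochastic slow-thinking window before the deterministic switch to fast answer generation; the log name in the Quick Start example below suggests 𝛼 = 1.4 for that run.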

🛠️ Installation

Follow these steps to set up the environment for AlphaOne. Our installation is adapted from Qwen2.5-Math.

  1. Create and activate the conda environment.
conda create -n alphaone python=3.10
conda activate alphaone
  2. Install the required dependencies.
cd eval/latex2sympy
pip install -e .
cd ..
pip install -r requirements.txt 
pip install vllm==0.5.1 --no-build-isolation
pip install transformers==4.42.3
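
As an optional sanity check (a sketch, assuming it is run inside the activated alphaone environment), confirm that the pinned versions were picked up:

# Optional check that the pinned dependencies are installed
# (run inside the activated alphaone conda environment).
import transformers
import vllm

print("transformers:", transformers.__version__)  # expected 4.42.3
print("vllm:", vllm.__version__)                  # expected 0.5.1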

Now, launch your 𝛼1 LRMs and let the magic begin! 🚀

🎯 Quick Start

Note: We recommend running the evaluation scripts with output redirection, for example:

nohup bash ./scripts/alphaone/eval_linear_deepseek_7b.sh >> deepseek-7b-alpha-1.4.log &

This makes it easier to monitor progress in real time and to keep track of multiple runs.

AlphaOne Evaluation

We provide scripts for evaluating 𝛼1 LRMs on math and science benchmarks, including AIME24, AMC23, Minerva-Math, MATH500, and OlympiadBench.

To evaluate different models:

  • For DeepSeek-R1-Distill-Qwen-1.5B:
cd AlphaOne/eval
./scripts/alphaone/eval_linear_deepseek_1_5b.sh
  • For DeepSeek-R1-Distill-Qwen-7B:
cd AlphaOne/eval
./scripts/alphaone/eval_linear_deepseek_7b.sh
  • For Qwen QwQ-32B:
cd AlphaOne/eval
./scripts/alphaone/eval_linear_qwq.sh

For LiveCodeBench, which focuses on code generation and execution, use the following script:

./scripts/alphaone/eval_linear_code.sh

Baselines Evaluation

Refer to docs/BASELINE.md for detailed instructions to evaluate the baselines.

Acknowledgment

This repo builds on the excellent evaluation platforms Qwen2.5-Math and SEAL. Model deployment is powered by vLLM.

Contact

If you have any questions related to the code or the paper, feel free to email Junyu Zhang ([email protected]) and Runpei Dong ([email protected]).

Citation

If you find our work useful in your research, please consider citing AlphaOne:

@article{AlphaOne25,
    title={AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time},
    author={Zhang, Junyu and Dong, Runpei and Wang, Han and Ning, Xuying and Geng, Haoran and Li, Peihao and He, Xialin and Bai, Yutong and Malik, Jitendra and Gupta, Saurabh and Zhang, Huan},
    journal={arXiv preprint arXiv:2505.24863},
    year={2025}
}
