* [bc-breaking] Generalize FakeQuantizeConfig beyond intx
**Summary:** The existing `FakeQuantizeConfig` performs only
intx quantization, but we plan to extend QAT to other dtypes
such as fp8 and nvfp4 in the near future. This commit is the
refactoring step needed before that. Specifically:
```
# New abstract class
FakeQuantizeConfigBase
# Rename
FakeQuantizeConfig -> IntxFakeQuantizeConfig
```
In the future, there will be other subclasses of `FakeQuantizeConfigBase`
for float dtypes that users can pass in instead of the existing
intx one.
**BC-breaking notes:** For BC, we keep around the old names to
reference the new ones. However, this commit is still BC-breaking
in the sense that a few APIs now accept the abstract
`FakeQuantizeConfigBase` instead. For the most part, this abstract
class will be hidden from the user.
Before:
```
activation_config = FakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = FakeQuantizeConfig(torch.int4, group_size=32)
```
After:
```
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
```
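As a reference point, keeping BC can be as simple as making the old name an alias of the new class. A minimal sketch of such a shim (illustrative only; the actual shim in this commit may also emit a warning):
```
import torch
from torchao.quantization.qat import IntxFakeQuantizeConfig

# Minimal BC shim sketch (illustrative): the old name references the new
# class, so existing call sites keep working unchanged.
FakeQuantizeConfig = IntxFakeQuantizeConfig

# Pre-existing user code continues to work:
weight_config = FakeQuantizeConfig(torch.int4, group_size=32)
```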
**Test Plan:**
```
python test/quantization/test_qat.py
```
[ghstack-poisoned]
* New multi-step QAT API
**Summary:** This commit adds a new multi-step QAT API with the
main goal of simplifying the existing UX. The new API uses the
same `QATConfig` for both the prepare and convert steps, and
automatically infers the fake quantization configs based on
a PTQ base config provided by the user:
```
from torchao.quantization import (
quantize_,
Int8DynamicActivationInt4WeightConfig
)
from torchao.quantization.qat import QATConfig
# prepare
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)
qat_config = QATConfig(base_config, step="prepare")
quantize_(m, qat_config)
# train (not shown)
# convert
quantize_(m, QATConfig(base_config, step="convert"))
```
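Conceptually, the prepare step maps the PTQ base config to matching fake quantization configs, so the two steps can never disagree. A rough sketch of that inference, using a hypothetical helper (`infer_fake_quantize_configs` is illustrative, not the actual torchao internals):
```
import torch
from torchao.quantization import Int8DynamicActivationInt4WeightConfig
from torchao.quantization.qat import IntxFakeQuantizeConfig

# Hypothetical helper (illustrative only): derive the fake quantization
# configs QAT should use from the PTQ base config the user passed in.
def infer_fake_quantize_configs(base_config):
    if isinstance(base_config, Int8DynamicActivationInt4WeightConfig):
        act = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
        weight = IntxFakeQuantizeConfig(torch.int4, group_size=base_config.group_size)
        return act, weight
    raise NotImplementedError(f"no QAT mapping for {type(base_config).__name__}")
```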
The main improvements include:
- A single config for both prepare and convert steps
- A single `quantize_` call for convert (instead of two)
- No chance for incompatible prepare vs convert configs
- Much less boilerplate code for the most common use case
- Simpler config names
For less common use cases such as experimentation, users can
still specify arbitrary fake quantization configs for
activations and/or weights as before. This is still important
since there may not always be a corresponding PTQ base config.
For example:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import IntxFakeQuantizeConfig, QATConfig
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = QATConfig(
activation_config=activation_config,
weight_config=weight_config,
step="prepare",
)
quantize_(model, qat_config)
# train and convert same as above (not shown)
```
**BC-breaking notes:** This change by itself is technically not
BC-breaking since we keep around the old path, but will become
so when we deprecate and remove the old path in the future.
Before:
```
# prepare
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = IntXQuantizationAwareTrainingConfig(activation_config, weight_config)
quantize_(model, qat_config)
# train (not shown)
# convert
quantize_(model, FromIntXQuantizationAwareTrainingConfig())
quantize_(model, Int8DynamicActivationInt4WeightConfig(group_size=32))
```
After: (see above)
**Test Plan:**
```
python test/quantization/test_qat.py
```
[ghstack-poisoned]
* Update on "New multi-step QAT API"
* Update on "New multi-step QAT API"
* Deprecate old QAT APIs
**Summary:** Deprecates QAT APIs that should no longer be used, and
prints a helpful deprecation warning to help users migrate.
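The warning can be emitted from the old configs' constructors. A minimal sketch of the mechanism (illustrative only, not the exact code in this commit):
```
import warnings

# Illustrative sketch of the deprecation mechanism: the old config's
# constructor emits a warning that points users at the new QATConfig API.
class IntXQuantizationAwareTrainingConfig:
    def __init__(self, activation_config=None, weight_config=None):
        warnings.warn(
            "'IntXQuantizationAwareTrainingConfig' is deprecated and will be "
            "removed in a future release. Please use QATConfig instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        self.activation_config = activation_config
        self.weight_config = weight_config
```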
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_api_deprecation
```
[ghstack-poisoned]
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
Manual testing of the deprecation warning:
```
>>> from torchao.quantization.qat import IntXQuantizationAwareTrainingConfig
>>> IntXQuantizationAwareTrainingConfig()
'IntXQuantizationAwareTrainingConfig' is deprecated and will be removed in a future release. Please use the following API instead:
base_config = Int8DynamicActivationInt4WeightConfig(group_size=32)
quantize_(model, QATConfig(base_config, step="prepare"))
# train (not shown)
quantize_(model, QATConfig(base_config, step="convert"))
Alternatively, if you prefer to pass in fake quantization configs:
activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False)
weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32)
qat_config = QATConfig(
activation_config=activation_config,
weight_config=weight_config,
step="prepare",
)
quantize_(model, qat_config)
Please see #2630 for more details.
IntXQuantizationAwareTrainingConfig(activation_config=None, weight_config=None)
```
[ghstack-poisoned]
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Update base for Update on "Deprecate old QAT APIs"
* Add NVFP4 QAT
**Summary:** This commit adds a QAT flow for NVFP4, following the
numerics in `NVFP4Tensor` closely but without the dtype casting,
swizzling, and packing/unpacking. Users can call this flow as follows:
```
from torchao.quantization import quantize_
from torchao.quantization.qat import NVFP4FakeQuantizeConfig, QATConfig
qat_config = QATConfig(
activation_config=NVFP4FakeQuantizeConfig(),
weight_config=NVFP4FakeQuantizeConfig(),
step="prepare",
)
quantize_(model, qat_config)
```
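For intuition, the fake-quantization numerics round values to the FP4 (E2M1) grid using one float8 e4m3 scale per 16-element block, while the tensor stays in high precision. A simplified sketch under those assumptions (an approximation, not the `NVFP4Tensor` code; a real QAT flow would also need a straight-through estimator for gradients, omitted here):
```
import torch

# Simplified NVFP4-style fake quantization (illustrative approximation).
# Assumes numel is divisible by block_size; omits NVFP4's second-level
# global scale and the straight-through estimator needed for training.
_E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def nvfp4_fake_quantize(x: torch.Tensor, block_size: int = 16) -> torch.Tensor:
    blocks = x.reshape(-1, block_size)
    # One scale per block so the block amax maps to the max E2M1 value (6.0),
    # with the scale itself rounded through float8 e4m3.
    scale = blocks.abs().amax(dim=-1, keepdim=True) / 6.0
    scale = scale.to(torch.float8_e4m3fn).to(x.dtype).clamp(min=1e-12)
    scaled = blocks / scale
    # Snap each scaled value to the nearest representable E2M1 magnitude.
    grid = _E2M1_GRID.to(x.dtype).to(x.device)
    idx = (scaled.abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
    fq = grid[idx] * scaled.sign()
    return (fq * scale).reshape(x.shape)
```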
**Test Plan:**
```
python test/quantization/test_qat.py -k test_qat_nvfp4
```
[ghstack-poisoned]
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
* Update base for Update on "Add NVFP4 QAT"
Initial benchmarks on fine-tuning Qwen3-1.7B on oasst1 for 3 epochs:
```
# Without QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.7927|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7323|± | N/A|
| | |none |None |word_perplexity|↓ |18.8815|± | N/A|
# With QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.7921|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7316|± | N/A|
| | |none |None |word_perplexity|↓ |18.8409|± | N/A|
```
[ghstack-poisoned]
* Update base for Update on "Add NVFP4 QAT"
Initial benchmarks on fine-tuning Qwen3-1.7B on alpaca for 3 epochs:
```
# Without QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.8322|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7804|± | N/A|
| | |none |None |word_perplexity|↓ |21.8611|± | N/A|
# With QAT
| Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr|
|--------|------:|------|------|---------------|---|------:|---|------|
|wikitext| 2|none |None |bits_per_byte |↓ | 0.8271|± | N/A|
| | |none |None |byte_perplexity|↓ | 1.7741|± | N/A|
| | |none |None |word_perplexity|↓ |21.4467|± | N/A|
```
[ghstack-poisoned]
* Update base for Update on "Add NVFP4 QAT"