Commit 8b36ccb

Update habana version (#2279)

Signed-off-by: Sun, Xuehao <[email protected]>
1 parent 9874665 commit 8b36ccb

8 files changed: +11 -12 lines changed
.azure-pipelines/scripts/install_nc.sh

Lines changed: 2 additions & 2 deletions
```diff
@@ -10,8 +10,8 @@ if [[ $1 = *"3x_pt"* ]]; then
     python setup.py pt bdist_wheel
 else
     echo -e "\n Install torch CPU ... "
-    pip install torch==2.7.0 torchvision --index-url https://download.pytorch.org/whl/cpu
-    python -m pip install intel-extension-for-pytorch==2.7.0 oneccl_bind_pt==2.7.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
+    pip install torch==2.7.1 torchvision --index-url https://download.pytorch.org/whl/cpu
+    python -m pip install intel-extension-for-pytorch==2.7.0 oneccl_bind_pt --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
     python -m pip install --no-cache-dir -r requirements.txt
     python setup.py bdist_wheel
 fi
```
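
In the CPU branch, torch moves to 2.7.1 while `oneccl_bind_pt` loses its explicit pin, presumably so pip resolves a build compatible with `intel-extension-for-pytorch==2.7.0`. A hypothetical post-install sanity check (not part of this commit) might look like:

```bash
# Hypothetical sanity check, not part of install_nc.sh: confirm the CPU wheels
# installed above resolved to the expected versions and import together.
pip show torch intel-extension-for-pytorch oneccl_bind_pt | grep -E '^(Name|Version):'
python -c "import torch, intel_extension_for_pytorch; print(torch.__version__)"
```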

.azure-pipelines/scripts/ut/3x/run_3x_pt.sh

Lines changed: 2 additions & 2 deletions
```diff
@@ -12,8 +12,8 @@ echo "##[section]import check pass"
 # install requirements
 echo "##[group]set up UT env..."
 export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
-pip install -r /neural-compressor/test/3x/torch/requirements.txt
-pip install torch==2.7.0 torchvision==0.22.0 # For auto-round
+sed -i '/^deepspeed/d' /neural-compressor/test/3x/torch/requirements.txt
+pip install -r /neural-compressor/test/3x/torch/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
 pip install pytest-cov
 pip install pytest-html
 echo "##[endgroup]"
```
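
The updated script strips the `deepspeed` entry from the test requirements before installing them, presumably because that requirement is pinned to the HabanaAI fork, and it adds the PyTorch CPU wheel index as an extra index. A quick illustration of the `sed` filter, run on a scratch copy rather than the pipeline's file:

```bash
# Illustration only: '/^deepspeed/d' deletes every line that begins with
# "deepspeed" before the bulk "pip install -r" runs. Shown on a scratch copy.
cp /neural-compressor/test/3x/torch/requirements.txt /tmp/req.txt
sed -i '/^deepspeed/d' /tmp/req.txt
grep '^deepspeed' /tmp/req.txt || echo "deepspeed entry removed"
```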

.azure-pipelines/scripts/ut/3x/run_3x_pt_fp8.sh

Lines changed: 0 additions & 1 deletion
```diff
@@ -12,7 +12,6 @@ echo "##[section]import check pass"
 # install requirements
 echo "##[group]set up UT env..."
 export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
-export PT_HPU_LAZY_MODE=1
 sed -i '/^intel_extension_for_pytorch/d' /neural-compressor/test/3x/torch/requirements.txt
 sed -i '/^auto_round/d' /neural-compressor/test/3x/torch/requirements.txt
 cat /neural-compressor/test/3x/torch/requirements.txt
```

.azure-pipelines/template/docker-template.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -74,7 +74,7 @@ steps:
 
   - ${{ if eq(parameters.imageSource, 'pull') }}:
       - script: |
-          docker pull vault.habana.ai/gaudi-docker/1.21.0/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
+          docker pull vault.habana.ai/gaudi-docker/1.22.0/ubuntu24.04/habanalabs/pytorch-installer-2.7.1:latest
        displayName: "Pull habana docker image"
 
   - script: |
@@ -95,7 +95,7 @@ steps:
       else
        docker run -dit --disable-content-trust --privileged --name=${{ parameters.containerName }} --shm-size="2g" \
          --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host \
-         -v ${BUILD_SOURCESDIRECTORY}:/neural-compressor vault.habana.ai/gaudi-docker/1.21.0/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
+         -v ${BUILD_SOURCESDIRECTORY}:/neural-compressor vault.habana.ai/gaudi-docker/1.22.0/ubuntu24.04/habanalabs/pytorch-installer-2.7.1:latest
        docker exec ${{ parameters.containerName }} bash -c "ln -sf \$(which python3) /usr/bin/python"
       fi
       echo "Show the container list after docker run ... "
```
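
Both the pull path and the `docker run` path now reference the Gaudi 1.22.0 image on Ubuntu 24.04 with the PyTorch 2.7.1 installer. A hypothetical manual smoke test of the new image, run outside the pipeline on a Gaudi host, might look like the following (the `hl-smi` call assumes the device utility ships in the image, as it does in typical Gaudi images):

```bash
# Hypothetical smoke test, not part of the template: pull the updated image and
# confirm it starts under the habana runtime and can list the Gaudi devices.
docker pull vault.habana.ai/gaudi-docker/1.22.0/ubuntu24.04/habanalabs/pytorch-installer-2.7.1:latest
docker run --rm --runtime=habana -e HABANA_VISIBLE_DEVICES=all \
  vault.habana.ai/gaudi-docker/1.22.0/ubuntu24.04/habanalabs/pytorch-installer-2.7.1:latest \
  hl-smi
```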

.azure-pipelines/ut-3x-pt-fp8.yml

Lines changed: 2 additions & 1 deletion
```diff
@@ -39,6 +39,7 @@ stages:
     jobs:
       - job:
           displayName: Torch 3x Habana FP8
+          timeoutInMinutes: 120
           steps:
             - template: template/ut-template.yml
               parameters:
@@ -54,7 +55,7 @@
     jobs:
       - job:
           displayName: Torch 3x Habana FP8 baseline
-          continueOnError: true
+          timeoutInMinutes: 120
           steps:
             - template: template/ut-template.yml
               parameters:
```

.azure-pipelines/ut-3x-pt.yml

Lines changed: 0 additions & 1 deletion
```diff
@@ -52,7 +52,6 @@ stages:
     jobs:
       - job:
           displayName: Unit Test 3x Torch baseline
-          continueOnError: true
           steps:
             - template: template/ut-template.yml
               parameters:
```

README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -5,7 +5,7 @@ Intel® Neural Compressor
 <h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)</h3>
 
 [![python](https://img.shields.io/badge/python-3.8%2B-blue)](https://github.com/intel/neural-compressor)
-[![version](https://img.shields.io/badge/release-3.4.1-green)](https://github.com/intel/neural-compressor/releases)
+[![version](https://img.shields.io/badge/release-3.5-green)](https://github.com/intel/neural-compressor/releases)
 [![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/intel/neural-compressor/blob/master/LICENSE)
 [![coverage](https://img.shields.io/badge/coverage-85%25-green)](https://github.com/intel/neural-compressor)
 [![Downloads](https://static.pepy.tech/personalized-badge/neural-compressor?period=total&units=international_system&left_color=grey&right_color=green&left_text=downloads)](https://pepy.tech/project/neural-compressor)
@@ -56,7 +56,7 @@ To try on Intel Gaudi2, docker image with Gaudi Software Stack is recommended, p
 
 Run a container with an interactive shell, [more info](https://docs.habana.ai/en/latest/Installation_Guide/Additional_Installation/Docker_Installation.html#docker-installation)
 ```
-docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.21.0/ubuntu24.04/habanalabs/pytorch-installer-2.6.0:latest
+docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.22.0/ubuntu24.04/habanalabs/pytorch-installer-2.7.1:latest
 ```
 
 > Note: Since Habana software >= 1.21.0, `PT_HPU_LAZY_MODE=0` is the default setting. However, most low-precision functions (such as `convert_from_uint4`) do not support this setting. Therefore, we recommend setting `PT_HPU_LAZY_MODE=1` to maintain compatibility.
````
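
In practice the README's note amounts to exporting the variable inside the container before launching a low-precision workload; a minimal sketch, with a placeholder script name:

```bash
# Keep lazy mode enabled so low-precision ops such as convert_from_uint4 work,
# per the note above (Habana software >= 1.21.0 defaults to PT_HPU_LAZY_MODE=0).
export PT_HPU_LAZY_MODE=1
python your_fp8_script.py  # placeholder for the actual workload
```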

test/3x/torch/requirements.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 auto_round
 datasets
-deepspeed @ git+https://github.com/HabanaAI/DeepSpeed.git@1.21.0
+deepspeed @ git+https://github.com/HabanaAI/DeepSpeed.git@1.22.0
 expecttest
 intel_extension_for_pytorch
 numpy
```
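
The pin follows the HabanaAI DeepSpeed fork from its 1.21.0 tag to 1.22.0, in line with the rest of the Gaudi software update in this commit. Installed on its own, the same requirement uses pip's direct-reference syntax:

```bash
# Standalone equivalent of the updated requirements.txt entry; normally this is
# pulled in via "pip install -r test/3x/torch/requirements.txt".
pip install "deepspeed @ git+https://github.com/HabanaAI/DeepSpeed.git@1.22.0"
```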
