Model Zoo for Intel® Architecture v2.6.0
TensorFlow Framework
- Support for TensorFlow v2.7.0
New TensorFlow models
- N/A
Other features and bug fixes for TensorFlow models
- Updates to only use docker `--privileged` when required and check `--cpuset`
  - Except for the BERT Large and Wide and Deep models
- Updated the ImageNet download link
- Fix `platform_util.py` for systems with only one socket or a subset of cores within a socket
- Replace the `USE_DAAL4PY_SKLEARN` env var with `patch_sklearn` (see the sketch after this list)
- Add error handling for when a frozen graph isn't passed for BERT Large FP32 inference*
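For the `patch_sklearn` change above, here is a minimal sketch of what moving from the `USE_DAAL4PY_SKLEARN` environment variable to an explicit `patch_sklearn()` call can look like. It assumes the Intel Extension for Scikit-learn package (`scikit-learn-intelex`) is installed; the `KMeans` workload is only illustrative and not part of the Model Zoo scripts:

```python
# Hypothetical sketch: enable Intel-optimized scikit-learn explicitly
# instead of exporting USE_DAAL4PY_SKLEARN=1 before launching the script.
from sklearnex import patch_sklearn

patch_sklearn()  # must run before importing the estimators to be accelerated

from sklearn.cluster import KMeans  # now resolves to the patched implementation
import numpy as np

X = np.random.rand(1000, 8).astype(np.float32)
kmeans = KMeans(n_clusters=4, random_state=0).fit(X)
print(kmeans.inertia_)
```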
PyTorch Framework
- Support for PyTorch v1.10.0 and IPEX v1.10.0
New PyTorch models
- GoogLeNet Inference (FP32, BFloat16**)
- Inception v3 Inference (FP32, BFloat16**)
- MNASNet 0.5 Inference (FP32, BFloat16**)
- MNASNet 1.0 Inference (FP32, BFloat16**)
- ResNet 50 Inference (Int8)
- ResNet 50 Training (FP32, BFloat16**)
- ResNet 101 Inference (FP32, BFloat16**)
- ResNet 152 Inference (FP32, BFloat16**)
- ResNext 32x4d Inference (FP32, BFloat16**)
- ResNext 32x16d Inference (FP32, Int8, BFloat16**)
- VGG-11 Inference (FP32, BFloat16**)
- VGG-11 with batch normalization Inference (FP32, BFloat16**)
- Wide ResNet-50-2 Inference (FP32, BFloat16**)
- Wide ResNet-101-2 Inference (FP32, BFloat16**)
- BERT base Inference (FP32, BFloat16**)
- BERT large Inference (FP32, Int8, BFloat16**)
- BERT large Training (FP32, BFloat16**)
- DistilBERT base Inference (FP32, BFloat16**)
- RNN-T Inference (FP32, BFloat16**)
- RNN-T Training (FP32, BFloat16**)
- RoBERTa base Inference (FP32, BFloat16**)
- Faster R-CNN ResNet50 FPN Inference (FP32)
- Mask R-CNN Inference (FP32, BFloat16**)
- Mask R-CNN Training (FP32, BFloat16**)
- Mask R-CNN ResNet50 FPN Inference (FP32)
- RetinaNet ResNet-50 FPN Inference (FP32)
- SSD-ResNet34 Inference (FP32, Int8, BFloat16**)
- SSD-ResNet34 Training (FP32, BFloat16**)
- DLRM Inference (FP32, Int8, BFloat16**)
- DLRM Training (FP32)
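As an illustration of how these new PyTorch models pair with the IPEX v1.10.0 support noted above, below is a minimal sketch of BFloat16 ResNet 50 inference. This is not the Model Zoo's launch script; the `intel_extension_for_pytorch` and `torchvision` usage shown is an assumed, standard IPEX workflow for this release:

```python
# Minimal sketch (assumed workflow, not the Model Zoo entry point):
# optimize a torchvision ResNet 50 with IPEX and run BFloat16 inference.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(pretrained=True).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply IPEX CPU optimizations

x = torch.rand(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast():  # CPU autocast defaults to bfloat16
    output = model(x)
print(output.shape)  # torch.Size([1, 1000])
```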
Other features and bug fixes for PyTorch models
- DLRM and ResNet 50 documentation updates
Supported Configurations
Intel Model Zoo 2.6.0 is validated on the following environment:
- Ubuntu 20.04 LTS
- Python 3.8, 3.9
- Docker Server v19+
- Docker Client v18+