Release 1.0b
Pre-release

This release of Intel® AI Quantization Tools for TensorFlow* 1.0 Beta is published under the v1.0b tag (https://github.com/IntelAI/tools/tree/v1.0b). Please note that Intel® AI Quantization Tools for TensorFlow* depends on Intel® Optimizations for TensorFlow. This revision contains the following features and fixes:
New functionality:
• Add support for .whl pip and conda installation for Python 3.4/3.5/3.6/3.7, and remove the TensorFlow source build dependency for Intel® Optimizations for TensorFlow 1.14.0, 1.15.0, and 2.0.0.
• Add three entry points to run quantization for specific models under api/examples/: a bash command for Model Zoo models, a bash command for custom models, and direct calls to the Python programming APIs.
• Add a Dockerfile for users to build the Docker container.
• Add a debug mode, and support excluding ops and nodes in the Python programming APIs.
• Add the Bridge interface with Model Zoo for Intel® Architecture.
• Add a Python implementation of summarize_graph to remove the dependency on the TensorFlow source build.
• Add Python implementations of the freeze_min/max, freeze_requantization_ranges, fuse_quantized_conv_and_requantize, rerange_quantized_concat, and insert_logging transforms of transform_graph to remove the dependency on the TensorFlow source build.
• Add per-channel support.
• Add support for using the Intel® AI Quantization Tools for TensorFlow* Python programming APIs with the following models:
  • ResNet50
  • ResNet50 v1.5
  • SSD-MobileNet
  • SSD-ResNet34
  • ResNet101
  • MobileNet
  • Inception V3
  • Faster-RCNN
  • RFCN
• Add new procedures to the README.
• Support for TensorFlow 1.14.0.
• Support for TensorFlow 1.15.0.
• Support for TensorFlow 2.0.
• Support for Model Zoo for Intel® Architecture 1.5.
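The per-channel support listed above refers to quantizing each output channel of a weight tensor with its own scale, rather than one scale for the whole tensor. The sketch below is purely illustrative and independent of the tool's actual implementation (function names and the toy weight layout are assumptions, not the real API); it shows why a small-magnitude channel keeps precision under per-channel scaling that per-tensor scaling would lose.

```python
def quantize_per_tensor(weights, num_bits=8):
    """Symmetric quantization with one shared scale for all channels."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    max_abs = max(abs(v) for ch in weights for v in ch)
    scale = max_abs / qmax if max_abs else 1.0
    q = [[round(v / scale) for v in ch] for ch in weights]
    return q, scale

def quantize_per_channel(weights, num_bits=8):
    """Symmetric quantization with a separate scale per output channel."""
    qmax = 2 ** (num_bits - 1) - 1
    q, scales = [], []
    for ch in weights:
        max_abs = max(abs(v) for v in ch)
        scale = max_abs / qmax if max_abs else 1.0
        scales.append(scale)
        q.append([round(v / scale) for v in ch])
    return q, scales

# Two channels with very different ranges: per-tensor scaling crushes
# the small channel to near-zero, per-channel scaling preserves it.
w = [[0.01, -0.02, 0.015],    # small-magnitude channel
     [5.0, -4.0, 3.0]]        # large-magnitude channel
qt, s = quantize_per_tensor(w)
qc, ss = quantize_per_channel(w)
```

With the shared scale (5.0/127), the first channel quantizes to values at or near zero, while per-channel scaling spreads it across the full int8 range.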
Bug fixes:
• Fix several bugs in the Python rewrite of transform_graph ops.
• Fix a data type issue in optimize_for_inference.
• Fix a bug affecting MobileNet and ResNet101.
• Clean up hardcoded values for Faster-RCNN and RFCN.
• Fix the pylint errors.
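The transform_graph-style rewrites referenced in these notes (for example, the freeze_min/max transform) can be sketched as replacing dynamic min/max nodes in a graph with constants captured during calibration. The node structure, field names, and function below are made up for illustration and do not reflect the tool's real data model:

```python
# Toy stand-in for a GraphDef: each node is a dict with a name, an op,
# and a list of input names. All names here are illustrative only.
def freeze_min_max(nodes, calibration):
    """Replace dynamic Min/Max nodes with constants from calibration data,
    in the spirit of the freeze_min/max transform described above."""
    frozen = []
    for node in nodes:
        if node["op"] in ("Min", "Max") and node["name"] in calibration:
            frozen.append({
                "name": node["name"],
                "op": "Const",
                "inputs": [],                    # constants take no inputs
                "value": calibration[node["name"]],
            })
        else:
            frozen.append(dict(node))
    return frozen

graph = [
    {"name": "conv1/min", "op": "Min", "inputs": ["conv1"]},
    {"name": "conv1/max", "op": "Max", "inputs": ["conv1"]},
    {"name": "conv1/requant", "op": "Requantize",
     "inputs": ["conv1", "conv1/min", "conv1/max"]},
]
calib = {"conv1/min": -2.5, "conv1/max": 3.1}
frozen = freeze_min_max(graph, calib)
```

Because downstream nodes refer to the frozen nodes by name, the Requantize node's inputs are untouched; only the range computation is replaced by constants, so the graph no longer needs runtime min/max passes.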