Releases: PINTO0309/onnx2tf
1.25.12
`Flatten`
- Improved handling when the `axis` attribute is not defined and the batch size of the first dimension is undefined.

```
wget https://github.com/PINTO0309/onnx2tf/releases/download/0.0.2/resnet18-v1-7.onnx
onnx2tf -i resnet18-v1-7.onnx

ls -lh saved_model/
assets  fingerprint.pb  resnet18-v1-7_float16.tflite  resnet18-v1-7_float32.tflite  saved_model.pb  variables

TF_CPP_MIN_LOG_LEVEL=3 \
saved_model_cli show \
  --dir saved_model \
  --signature_def serving_default \
  --tag_set serve

The given SavedModel SignatureDef contains the following input(s):
  inputs['data'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 224, 224, 3)
      name: serving_default_data:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['output_0'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1000)
      name: PartitionedCall:0
Method name is: tensorflow/serving/predict
```

- ONNX / TFLite model structure screenshots (images omitted).
What's Changed
- Improved handling when the `axis` attribute is not defined and the batch size of the first dimension is undefined by @PINTO0309 in #692
Full Changelog: 1.25.11...1.25.12
1.25.11
`BatchNormalization`
- Improved the conversion stability of `BatchNormalization`.
What's Changed
- Improved the conversion stability of `BatchNormalization` by @PINTO0309 in #690
Full Changelog: 1.25.10...1.25.11
1.25.10
What's Changed
- Addressed the issue of missing conversions when multi-dimensional flattening is performed and the batch size of the first dimension is an undefined dimension. by @PINTO0309 in #689
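For reference, ONNX `Flatten` always produces a 2-D tensor: the dimensions before `axis` collapse into the rows and the remaining dimensions into the columns. A minimal NumPy sketch of these semantics (not onnx2tf's actual implementation) shows why an undefined first dimension forces the converter to emit a `-1` wildcard instead of a concrete row count:

```python
import numpy as np

def onnx_flatten(x: np.ndarray, axis: int = 1) -> np.ndarray:
    """Reference semantics of the ONNX Flatten op: collapse dims
    before `axis` into rows and the remaining dims into columns."""
    rows = int(np.prod(x.shape[:axis], dtype=np.int64))
    cols = int(np.prod(x.shape[axis:], dtype=np.int64))
    return x.reshape(rows, cols)

x = np.zeros((2, 3, 4, 5), dtype=np.float32)
print(onnx_flatten(x, axis=2).shape)  # (6, 20)

# With an undefined (dynamic) batch dimension, the row count is unknown
# at conversion time, so the emitted reshape must use a -1 wildcard
# rather than a precomputed product:
print(x.reshape(-1, int(np.prod(x.shape[1:], dtype=np.int64))).shape)  # (2, 60)
```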
Full Changelog: 1.25.9...1.25.10
1.25.9
`Add`, `Sub`
- https://huggingface.co/onnx-community/metric3d-vit-small/blob/main/onnx/model.onnx
  - metric3d-vit-small.onnx
- Fixed a bug in the optimization process for arithmetic operations. The error of the final output was less than `1e-4`.
- A bug in the optimization process for the `y = (200 - x) - 200` operation caused an incorrect `Sub` merge operation to be performed.
- Model gives inaccurate results post conversion to tflite #685
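To illustrate the class of bug (the exact faulty rewrite inside onnx2tf is not reproduced here; the merged form below is a hypothetical reconstruction): algebraically `(200 - x) - 200` must reduce to `-x`, so a constant merge that treats both `Sub` nodes as if they were `x - const` silently flips the sign of the output:

```python
import numpy as np

x = np.array([1.0, -3.5, 7.25], dtype=np.float32)

# Correct algebra: (200 - x) - 200 simplifies to -x.
correct = (200.0 - x) - 200.0

# A hypothetical faulty constant merge that treats both Sub nodes as
# "x - const" would fold the constants first and compute x - (200 - 200),
# returning x unchanged instead of -x.
faulty = x - (200.0 - 200.0)

print(correct)  # [-1.    3.5  -7.25]
print(faulty)   # [ 1.   -3.5   7.25]
```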
What's Changed
- Fixed a bug in the optimization process for arithmetic operations by @PINTO0309 in #686
Full Changelog: 1.25.8...1.25.9
1.25.8
`Shape`
- Fixed problem of being stuck in an infinite loop.
- An input to an ADD node keeps getting casted to float32 despite being float16 in the onnx file causing issues with the ADD op #681
What's Changed
- Fixed problem of being stuck in an infinite loop by @PINTO0309 in #682
Full Changelog: 1.25.7...1.25.8
1.25.7
`Expand`, `BatchNormalization`, `Gather`
- Automatic accuracy compensation.
- Added the ability to automatically compensate for accuracy degradation due to dimensional transposition errors.

`AveragePool`
- Only very few edge cases are supported.
- The dynamic-tensor `AveragePool` is difficult to replace exactly with TensorFlow's `AveragePooling`.
- I have fixed and released the critical problems except for `AveragePool`, but `AveragePool (with ceil_mode=1)` with a dynamic tensor as input is extremely difficult to fix due to compatibility issues with TensorFlow.
- The problem was that no error occurred in `AveragePool (with ceil_mode=1)` where a conversion error should have occurred; the latest onnx2tf now generates a conversion error in `AveragePool (with ceil_mode=1)`:

```
INFO: 39 / 1464
INFO: onnx_op_type: AveragePool onnx_op_name: wa/xvector/block1/tdnnd1/cam_layer/AveragePool
INFO:  input_name.1: wa/xvector/block1/tdnnd1/nonlinear2/relu/Relu_output_0 shape: [1, 128, 'unk__71'] dtype: float32
INFO:  output_name.1: wa/xvector/block1/tdnnd1/cam_layer/AveragePool_output_0 shape: [1, 128, 'unk__77'] dtype: float32
ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "/home/xxxxx/git/onnx2tf/onnx2tf/ops/AveragePool.py", line 171, in make_node
    output_spatial_shape = [
  File "/home/xxxxx/git/onnx2tf/onnx2tf/ops/AveragePool.py", line 172, in <listcomp>
    func((i + pb + pe - d * (k - 1) - 1) / s + 1)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
ERROR: input_onnx_file_path: ../cam++_vin.onnx
ERROR: onnx_op_name: wa/xvector/block1/tdnnd1/cam_layer/AveragePool
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
```

- Unable to convert a model with 3d input shape of dynamic length into tflite int8 format #673
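The failing list comprehension in the trace evaluates the ONNX output spatial size per dimension. A standalone sketch of that formula (parameter names mirror the trace; the surrounding onnx2tf code is not reproduced) shows both the `ceil_mode` difference and why a dynamic (`None`) dimension cannot be evaluated at conversion time:

```python
import math

def avgpool_out_dim(i, k, s=1, pb=0, pe=0, d=1, ceil_mode=False):
    """ONNX AveragePool output size for one spatial dimension,
    mirroring the expression from the trace above:
        func((i + pb + pe - d * (k - 1) - 1) / s + 1)
    where func is math.ceil when ceil_mode=1, else math.floor."""
    if i is None:
        # A dynamic ('unk__*') dimension: the size cannot be computed
        # statically, which is exactly the TypeError in the trace.
        raise ValueError("dynamic spatial dim: use -b/-ois to make the shape static")
    func = math.ceil if ceil_mode else math.floor
    return func((i + pb + pe - d * (k - 1) - 1) / s + 1)

print(avgpool_out_dim(8, k=3, s=2))                  # 3
print(avgpool_out_dim(8, k=3, s=2, ceil_mode=True))  # 4
```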
What's Changed
- Automatic accuracy compensation for `Expand`, `BatchNormalization`, `Gather` by @PINTO0309 in #675
Full Changelog: 1.25.6...1.25.7
1.25.6
`Concat`
- Bug fix for the dynamic `Resize` optimization pattern.
- https://github.com/yakhyo/face-parsing
- ONNX / TFLite model structure screenshots (images omitted).
What's Changed
- Bug fix for Resize optimization pattern by @PINTO0309 in #672
Full Changelog: 1.25.5...1.25.6
1.25.5
`Transpose`
- Fixed NHWC flag judgment bug in `Transpose` of ViT for 3D tensor.
What's Changed
- Fixed NHWC flag judgment bug in Transpose of ViT for 3D tensor by @PINTO0309 in #671
Full Changelog: 1.25.4...1.25.5
1.25.4
- Addition of automatic INT8 calibration process for RGBA 4-channel images.
- It should be noted that the MS-COCO image set does not include an alpha channel, so this auto-calibration does not allow for decent quantization.
- There is no correlation between the channel being 4 and the input data being an image. Therefore, if the model to be converted is data other than a 4-channel image, automatic calibration should not be used.
- The following means and standard deviations are used as fixed values.

```python
mean = np.asarray([[[[0.485, 0.456, 0.406, 0.000]]]], dtype=np.float32)
std = np.asarray([[[[0.229, 0.224, 0.225, 1.000]]]], dtype=np.float32)
```

- Also, a fixed value of 0.5 is used for the alpha channel of the calibration data.

```python
new_element_array = np.full((*calib_data.shape[:-1], 1), 0.500, dtype=np.float32)
```
- magic_touch.onnx.zip

- [TODO] Add 4 channels of image data to the sample data for quantization #411
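Putting the fixed constants together, a NumPy sketch of the RGBA calibration preprocessing (variable names follow the snippets above; the calibration images and the surrounding quantization pipeline are assumed):

```python
import numpy as np

# Fixed normalization constants from the release notes above. The alpha
# channel uses mean 0.0 and std 1.0, so it passes through unchanged.
mean = np.asarray([[[[0.485, 0.456, 0.406, 0.000]]]], dtype=np.float32)
std = np.asarray([[[[0.229, 0.224, 0.225, 1.000]]]], dtype=np.float32)

# Stand-in for the RGB calibration images, shape (N, H, W, 3).
calib_data = np.random.rand(2, 224, 224, 3).astype(np.float32)

# Pad a constant 0.5 alpha channel to obtain the (N, H, W, 4) input
# the 4-channel model expects.
new_element_array = np.full((*calib_data.shape[:-1], 1), 0.500, dtype=np.float32)
rgba = np.concatenate([calib_data, new_element_array], axis=-1)

# Standard (x - mean) / std normalization, broadcast over all pixels.
normalized = (rgba - mean) / std
print(rgba.shape)  # (2, 224, 224, 4)
```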
What's Changed
- Addition of automatic INT8 calibration process for RGBA 4-channel images by @PINTO0309 in #670
Full Changelog: 1.25.3...1.25.4
1.25.3
- Improvements to `ScatterND`.
- Improved conversion stability for `Transpose` -> `Softmax` -> `Transpose` combinations.
- Improved error message regarding OP name error when `GroupConvolution` is included.
- Change `-osd` option to `True` by default.

```
ERROR: Generation of saved_model failed because the OP name does not match the following pattern. ^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$
ERROR: /model.22/cv2.2/cv2.2.2/Conv/kernel
ERROR: Please convert again with the `-osd` or `--output_signaturedefs` option.
```
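A quick way to see why such names trip the check, sketched with Python's `re` using the pattern quoted in the error message (onnx2tf's internal validation code may differ):

```python
import re

# OP-name pattern quoted in the saved_model error message above.
pattern = re.compile(r'^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$')

# Fails: the name starts with '/', which the leading character class
# [A-Za-z0-9.] rejects.
print(bool(pattern.match('/model.22/cv2.2/cv2.2.2/Conv/kernel')))  # False

# Passes: the same name without the leading slash.
print(bool(pattern.match('model.22/cv2.2/cv2.2.2/Conv/kernel')))   # True
```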
What's Changed
- Improvements to `ScatterND`, `Transpose` -> `Softmax` -> `Transpose` combinations by @PINTO0309 in #669
Full Changelog: 1.25.2...1.25.3



