Re: Having trouble with model_convert and tensorflow model with pip installed 2024.3

Here is sample code extracted from my openvino CPU inference thread (I use yolo8 with the iGPU in a separate thread):

```python
#!/usr/bin/python3
''' 28JUL2024wbk -- OpenVINO_SSD_Thread.py
    Run MobilenetSSD_v2 inferences on CPU using OpenVINO 2024.
    For use with AI2.py.
    Setup and inference code largely lifted from the openvino python example
    hello_reshape_ssd.py that was installed by the apt install of openvino 2024.2;
    the apt install is broken so I had to do a pip install to run the code :(
'''
import numpy as np
import cv2
import datetime
import logging as log
import sys
from imutils.video import FPS
import openvino as ov

if True:    # cut and paste from SSD_Thread, avoids removing indents
    ###model_path = 'ssd_mobilenet_v2_coco.xml'    # converted with openvino 2024
    ###model_path = '../ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.xml'    # converted with openvino 2024
    model_path = 'frozen_inference_graph.xml'    # converted with openvino 2024 using frozen_inference_graph from 2021 conversion
    ###model_path = 'mobilenet_ssd_v2/MobilenetSSDv2cocoIR10.xml'    # my IR10 conversion done with openvino 2021.3
    '''
    # simple python code to convert the model, too bad it produces a multi-output model.
    import openvino as ov
    ov_model = ov.convert_model('../ssd_mobilenet_v2_coco_2018_03_29/saved_model')
    ov.save_model(ov_model, 'ssd_mobilenet_v2_coco.xml')
    '''
    device_name = 'CPU'
    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)

    ## basically lifted from the hello_reshape_ssd.py sample code installed by the (broken) apt install of openvino 2024
    # --------------------------- Step 1. Initialize OpenVINO Runtime Core ---------------------------
    log.info('Creating OpenVINO Runtime Core')
    core = ov.Core()
    print('[INFO] Using OpenVINO: ' + ov.__version__)
    devices = core.available_devices
    log.info('Available devices:')
    for device in devices:
        deviceName = core.get_property(device, "FULL_DEVICE_NAME")
        print(f"  {device}: {deviceName}")

    # --------------------------- Step 2. Read a model ---------------------------
    log.info(f'Reading the model: {model_path}')
    # (.xml and .bin files) or (.onnx file)
    model = core.read_model(model_path)
    ##print(len(model.outputs), model.outputs)
    ##print('')
    if len(model.inputs) != 1:
        log.error('Supports only single input topologies.')
        ##return -1
    '''
    if len(model.outputs) != 1:
        log.error('Supports only single output topologies')
        print(len(model.outputs), model.outputs)
        print('')
        ##return -1
    '''

    # --------------------------- Step 3. Set up input ---------------------------
    ## create image to set model size
    ''' This was very confusing, the sample code says:
        'Reshaping the model to the height and width of the input image'
        which makes no sense to me.  If I feed in larger images it sort of works,
        but boxes are wrong and detections are poor.  I know my MobilenetSSD_v2
        model was for images sized 300x300, so I create a dummy image of this
        size and use it to "reshape" the model.
    '''
    imageM = np.zeros((300, 300, 3), np.uint8)
    imageM[:,:] = (127, 127, 127)
    input_tensor = np.expand_dims(imageM, 0)    # Add N dimension
    log.info('Reshaping the model to the height and width of the input image')
    n, h, w, c = input_tensor.shape
    model.reshape({model.input().get_any_name(): ov.PartialShape((n, c, h, w))})
    #print(n, c, w, h)

    # --------------------------- Step 4. Apply preprocessing ---------------------------
    ## I've made zero effort to understand this, but it seems to work!
    ppp = ov.preprocess.PrePostProcessor(model)
    # 1) Set input tensor information:
    #    - input() provides information about a single model input
    #    - precision of tensor is supposed to be 'u8'
    #    - layout of data is 'NHWC'
    ppp.input().tensor() \
        .set_element_type(ov.Type.u8) \
        .set_layout(ov.Layout('NHWC'))    # noqa: N400
    # 2) Here we suppose the model has 'NCHW' layout for input
    ppp.input().model().set_layout(ov.Layout('NCHW'))
    # 3) Set output tensor information:
    #    - precision of tensor is supposed to be 'f32'
    ###ppp.output().tensor().set_element_type(ov.Type.f32)
    # 4) Apply preprocessing, modifying the original 'model'
    model = ppp.build()

    # --------------------------- Step 5. Load the model to the device ---------------------------
    log.info('Loading the model to the plugin')
    compiled_model = core.compile_model(model, device_name)
    ###input_layer_ir = compiled_model.input(0)
    ###output_layer_ir = compiled_model.output("boxes")

    image = cv2.imread('TestDetection.jpg')
    ###N, C, H, W = input_layer_ir.shape
    resized_image = cv2.resize(image, (w, h))
    input_tensor = np.expand_dims(resized_image, 0)    # Add N dimension
    cv2.imshow('SSD input', resized_image)
    cv2.waitKey(0)
    results = compiled_model.infer_new_request({0: input_tensor})
    print(results)
```

Running it as-is gives this error:

[ INFO ] Creating OpenVINO Runtime Core
[INFO] Using OpenVINO: 2024.3.0-16041-1e3b88e4e3f-releases/2024/3
[ INFO ] Available devices:
  CPU: 12th Gen Intel(R) Core(TM) i9-12900K
  GPU: Intel(R) UHD Graphics 770 [0x4680] (iGPU)
[ INFO ] Reading the model: frozen_inference_graph.xml
[ INFO ] Reshaping the model to the height and width of the input image
Traceback (most recent call last):
  File "/home/wally/AI_code/AI2/ssdv2.py", line 83, in <module>
    model.reshape({model.input().get_any_name(): ov.PartialShape((n, c, h, w))})
RuntimeError: Check 'TRShape::broadcast_merge_into(output_shape, input_shapes[1], autob)' failed at src/core/shape_inference/include/eltwise_shape_inference.hpp:26:
While validating node 'opset1::Multiply Postprocessor/Decode/mul_2 (opset1::Multiply Postprocessor/Decode/div[0]:f32[191700], opset1::Subtract Postprocessor/Decode/get_center_coordinates_and_sizes/sub_1[0]:f32[1917]) -> (f32[?])' with friendly_name 'Postprocessor/Decode/mul_2':
Argument shapes are inconsistent.

This is with the 2024-converted model (the active `model_path = 'frozen_inference_graph.xml'` line in the code). If I comment that line out and uncomment the `mobilenet_ssd_v2/MobilenetSSDv2cocoIR10.xml` line instead, it uses the 2021-converted model and works fine.

Putting back the 2024-converted model and commenting out the `model.reshape(...)` call, it gets further but still fails:

[ INFO ] Loading the model to the plugin
Traceback (most recent call last):
  File "/home/wally/AI_code/AI2/ssdv2.py", line 121, in <module>
    results = compiled_model.infer_new_request({0: input_tensor})
  File "/home/wally/VENV/y8ovv/lib/python3.10/site-packages/openvino/runtime/ie_api.py", line 298, in infer_new_request
    return self.create_infer_request().infer(inputs)
  File "/home/wally/VENV/y8ovv/lib/python3.10/site-packages/openvino/runtime/ie_api.py", line 132, in infer
    return OVDict(super().infer(_data_dispatch(
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:223:
Exception from src/plugins/intel_cpu/src/memory_desc/cpu_memory_desc.h:89:
ParameterMismatch: Can not clone with new dims. Descriptor's shape: {0 - ?, 0 - ?, 3, 0 - ?} is incompatible with provided dimensions: {1, 300, 300, 3}.
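Those dimensions in the last error ({1, 300, 300, 3} rejected by a descriptor with 3 in the third slot) look to me like an NHWC tensor being handed to an input the plugin is treating as NCHW. A minimal numpy sketch of the layout swap I'd try before inference (variable names are mine, and whether this is the actual cause is an assumption):

```python
import numpy as np

# Stand-in for the 300x300 BGR frame that cv2.resize() produces
resized_image = np.full((300, 300, 3), 127, np.uint8)

nhwc = np.expand_dims(resized_image, 0)   # shape (1, 300, 300, 3) -- what I feed now
nchw = nhwc.transpose(0, 3, 1, 2)         # shape (1, 3, 300, 300) -- channels-first

print(nhwc.shape, nchw.shape)
```

If the transposed tensor is accepted, that would confirm the layout mismatch rather than a broken conversion.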

You can download my 2021-converted *.xml and *.bin files from this link as a *.tar.bz2 archive, ~29 MB:

https://1drv.ms/u/s!AnWizTQQ52Yzg1cAt2vo3tgzVGHn?e=EQgags

The model I'm trying to convert with 2024.3 is from:

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

Here is the test image that I load in the sample code (edit the `cv2.imread('TestDetection.jpg')` line to load a different image). It is a zoomed-in detection image from the 2021 version of my system that I use to test the Email and/or MMS notifications:


