Each entry below lists the occurrence context (where applicable), the affected function, a description of the problem, and a troubleshooting remedy.
**Function:** `neural_network.init()`
**Description:** Runtime error: “Error: Initialized neural network exceeds available memory”
**Troubleshooting remedy:** This error indicates that the buffer required to initialize the neural network exceeds the available memory. Note that the required buffer size differs from the file size of the neural network, and exact limits cannot be stated, as they depend heavily on the application itself. The required buffer size is affected by the number and size of the network's layers as well as by its input and output layers; to overcome the issue, ensure that the neural network requires a smaller buffer. To determine the required buffer size, use a config file when creating the `model_buffer` and converting it with the corresponding OpenVINO version, and add the following parameters:

```
MYRIAD_DUMP_ALL_PASSES YES
MYRIAD_DUMP_INTERNAL_GRAPH_DIRECTORY dump
```

The required buffer size is marked as “BSS” (Block Starting Symbol) in the dump files. This applies to OpenVINO versions >= 2021.2.
|
**Function:** `neural_network.init()`, `vid_pipeline.init()`
**Description:** Calling both functions in a loop causes the app to freeze
**Troubleshooting remedy:** Repeatedly initializing the video pipeline and the neural network in a loop causes the app to freeze. The video pipeline should be initialized only once, during an "initialization phase". Releasing and re-initializing the video pipeline during module runtime is not supported.
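The intended structure can be sketched as follows. This is a minimal sketch, assuming the `init()`, `read_raw()`, `data()`, and `run()` calls shown elsewhere in this document; the `run_app` wrapper and its parameters are hypothetical and only illustrate keeping initialization out of the loop:

```python
def run_app(vid_pipeline, neural_network, num_frames):
    # Initialization phase: initialize the video pipeline and the
    # neural network exactly once. Re-initializing either of them
    # inside the loop below would freeze the app.
    vid_pipeline.init()
    neural_network.init()

    predictions = []
    # Processing phase: only acquire frames and run inference here.
    for _ in range(num_frames):
        with vid_pipeline.read_raw() as frame:
            predictions.append(neural_network.run(frame.data()))
    return predictions
```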
|
**Function:** `neural_network.run()`
**Description:** Unexpected prediction for a given input
**Troubleshooting remedy:** Every input provided to the neural network is checked before processing: the module verifies that the provided data can be shaped into the expected dimensions of the network's input tensor. The order of the input dimensions, however, is not checked and must be verified by the user. To obtain valid predictions, ensure in particular that the width, height, number of channels, and bytes per pixel of the input data match the neural network.
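The size check described above can be reproduced ahead of time on the user's side. This is a hedged sketch; `check_input_shape` is a hypothetical helper (not part of the module API), and it only verifies the element count, mirroring the module's own check:

```python
def check_input_shape(data, width, height, channels, bytes_per_pixel):
    """Verify that a flat input buffer matches the network's expected
    input tensor size. Note that, like the module itself, this cannot
    detect a wrong *order* of dimensions; that must be verified
    separately by the user."""
    expected = width * height * channels * bytes_per_pixel
    if len(data) != expected:
        raise ValueError(
            "input has %d bytes, expected %d (%dx%dx%d @ %d bpp)"
            % (len(data), expected, width, height, channels, bytes_per_pixel)
        )
```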
|
**Function:** `neural_network.run()`
**Description:** Unexpected number of detected objects for a given input
**Troubleshooting remedy:** The size of the output tensor for object detection is encoded in the provided network (“config_number”) as the maximum number of detections per output tensor. If the number of detected objects (“output_layer_number”) is not as desired, double-check the model conversion process: a configuration file passed to mo_tf.py for model optimization via OpenVINO can, for instance, be used to specify the upper limit of detected objects. The following cases apply:

- If config_number is lower than output_layer_number, less memory than required is allocated for the output buffer, and the least relevant results are dropped.
- If config_number is higher than output_layer_number, more memory than required is allocated for the output buffer, and no results are dropped.
- If the memory for the output buffer cannot be allocated, an exception is thrown.
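The truncation behavior in the first case above can be modeled as follows. This is a hedged sketch with a hypothetical helper; "least relevant" is assumed to mean lowest detection score, and the `(score, label)` tuple shape is an illustration, not the module's actual output format:

```python
def kept_detections(detections, config_number):
    """Model the described truncation: the output buffer holds at most
    `config_number` detections, so when the network produces more, the
    least relevant results (lowest scores) are dropped.
    `detections` is a list of (score, label) tuples in any order."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    return ranked[:config_number]
```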
|
**Occurrence:** model conversion via OpenVINO
**Function:** `mo_tf.py`
**Description:** FPN topologies are not supported at model conversion
**Troubleshooting remedy:** OpenVINO does not support Feature Pyramid Network (FPN) topologies, as they contain hard-coded shapes for some operations. Do not use mobile networks with these layers; use other SSD topologies instead. Consult the official OpenVINO website for details.

**Occurrence:** model conversion via OpenVINO
**Function:** `mo_tf.py`
**Description:** Version mismatch during model conversion
**Troubleshooting remedy:** Using matching versions throughout the conversion process is key to obtaining an executable model. This applies to the TensorFlow version and the version used to generate the intermediate representation (IR). For generating the IR, the version is passed to mo_tf.py via the `--transformations_config` parameter. If the TensorFlow model was trained with TF 2.0 to 2.3, pass the 2.0 config (`.../ssd_support_api_v2.0.json`); if it was trained with TF 2.4 or higher, pass the 2.4 config (`.../ssd_support_api_v2.4.json`). For details on the config file and the conversion process itself, consult the official OpenVINO website. For example:

```
python3 mo_tf.py \
  --saved_model_dir /ssd_mobilenet_v2/saved_model \
  --transformations_config /model_optimizer/extensions/front/tf/ssd_support_api_v2.4.json \
  --tensorflow_object_detection_api_pipeline_config /ssd_mobilenet_v2/pipeline.config \
  --reverse_input_channels \
  --input_shape [1,640,640,3]
```

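The version-to-config rule above can be expressed as a small helper. This is a hedged sketch; `transformations_config_for` is a hypothetical function, and only the two file names come from the rule stated in this entry:

```python
def transformations_config_for(tf_version):
    """Pick the ssd_support_api config file matching the TensorFlow
    version the model was trained with: TF 2.0-2.3 maps to the v2.0
    config, TF 2.4 or higher maps to the v2.4 config."""
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    if major == 2 and 0 <= minor <= 3:
        return "ssd_support_api_v2.0.json"
    if (major, minor) >= (2, 4):
        return "ssd_support_api_v2.4.json"
    raise ValueError("unsupported TensorFlow version: " + tf_version)
```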
**Function:** `neural_network.run()`
**Description:** Predictions differ slightly when using CPU vs. NPU
**Troubleshooting remedy:** There are multiple possible reasons for this behavior. One is that certain neural network operations change slightly when optimized by OpenVINO. To check whether this is the case, the user can temporarily turn off hardware acceleration on the NPU with the `VPU_HW_STAGES_OPTIMIZATION NO` parameter in the configuration file passed to myriad_compile. To bypass the issue, the affected operations can be put on the optimization blacklist via the `VPU_HW_BLACK_LIST` parameter in the same configuration file.
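When checking whether CPU and NPU predictions agree, comparing with a tolerance rather than exact equality is usually appropriate, since small deviations are expected from hardware-optimized operations. A minimal sketch; `predictions_match` is a hypothetical helper, and the tolerance value must be chosen per application:

```python
def predictions_match(cpu_out, npu_out, abs_tol=1e-2):
    """Compare two prediction vectors element-wise with an absolute
    tolerance. Returns False on a length mismatch or if any pair of
    values differs by more than `abs_tol`."""
    if len(cpu_out) != len(npu_out):
        return False
    return all(abs(a - b) <= abs_tol for a, b in zip(cpu_out, npu_out))
```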
|
**Function:** `neural_network.run()`
**Description:** Multiple output tensors not supported
**Troubleshooting remedy:** Ensure that the model is converted via OpenVINO with a single one-dimensional output tensor. Multidimensional output tensors are not supported by the NPU, even if the model conversion via OpenVINO succeeds.
|
**Occurrence:** garbage collection
**Function:** `with`
**Description:** Object used after being dereferenced and garbage collected
**Troubleshooting remedy:** If an object is to be used multiple times, ensure that it always stays within the scope of the corresponding structure. Since MicroPython uses a reference model similar to Python's, implicitly or explicitly deleting references to objects can cause errors during execution. To avoid losing a reference, stay within one defined scope.

Problematic behavior:

```python
refImage = None
# first 'with' scope
with vid_pipeline.read_raw() as frame:
    # assign reference to refImage
    refImage = frame.data()
# frame.data() is garbage collected when
# leaving the first 'with' scope
# ...
while True:
    # ...
    with vid_pipeline.read_raw() as frame:
        # refImage value no longer available, as its
        # reference has already been garbage collected
        if fooBar(refImage, frame.data()):
            ...
```

Desired behavior:

```python
refImage = None
# first 'with' scope
with vid_pipeline.read_raw() as frame1:
    # assign reference to refImage
    refImage = frame1.data()
    # garbage collection does not take place for
    # refImage, as we are still within the first
    # 'with' scope
    # ...
    while True:
        # ...
        with vid_pipeline.read_raw() as frame:
            # refImage value still available; still
            # within the first 'with' scope
            if fooBar(refImage, frame.data()):
                ...
# garbage collect refImage here