Module functions

TM NPU MicroPython

Product: SIMATIC S7-1500 TM NPU 2.0
Language: en-US
Category: Manual

vid_pipeline.init()

Initializes a video pipeline for preprocessing a raw image before it is passed to the neural network. An example call is sketched after the parameter list below.

Note:

The video pipeline can only be initialized once. Reinitializing the video pipeline or initializing more than one instance of vid_pipeline is not supported.

  • camera

    Has to be an object produced via camera.camera().

  • target_resolution

    A tuple (width, height) describing the size required for the pipeline’s output frames. Currently, the product of width and height is limited to a total of 307,200 pixels.

  • target_format

    Can be ‘RGB’.

    Note:

    ‘MONO’ is not supported.

  • target_normalization

    A tuple (scale_mean, scale_norm) with a default value of (0.0, 1.0). For normalization, scale_mean is the value subtracted from each pixel, and scale_norm is the value by which each pixel is scaled after mean subtraction. These values are only used when target_output is ‘FLOAT’.

  • target_output

    Can be ‘FLOAT’. It determines the format of the individual pixel data delivered.

    Note:

    ‘INTEGER’ is not supported.

  • target_layout

    How the output frame data should be arranged in memory. Possible values: ‘PLANAR’.

    Note:

    ‘ROWMAJOR’ and ‘INTERLEAVED’ are not supported.

  • shaves_scaling

    The number of SHAVEs dedicated for scaling a frame.

    Note:

    Currently the only valid value for shaves_scaling is ‘4’. For details regarding SHAVEs, consult Chapter 4 Use of shaves and hardware accelerators.

  • shaves_conversion

    The number of SHAVEs dedicated for format conversion.

    Note:

    Currently the only valid value for shaves_conversion is ‘4’. For details regarding SHAVEs, consult Chapter 4 Use of shaves and hardware accelerators.
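
The following sketch shows how an initialization call using the parameters described above might look. The keyword-argument form, the explicit imports, the argument-less camera.camera() call and the use of plain integers for the SHAVE counts are assumptions for illustration; only the parameter names and value restrictions are taken from the descriptions above.

    # Sketch: initializing the video pipeline with the parameters described above.
    # Keyword-argument form, explicit imports, the argument-less camera.camera()
    # call and integer SHAVE counts are assumptions for illustration.
    import camera
    import vid_pipeline

    cam = camera.camera()  # assumption: arguments depend on the image source used

    vid_pipeline.init(
        camera=cam,
        target_resolution=(640, 480),      # 640 * 480 = 307,200 pixels (current upper limit)
        target_format='RGB',               # 'MONO' is not supported
        target_normalization=(0.0, 1.0),   # (scale_mean, scale_norm), only used with 'FLOAT' output
        target_output='FLOAT',             # 'INTEGER' is not supported
        target_layout='PLANAR',            # 'ROWMAJOR' and 'INTERLEAVED' are not supported
        shaves_scaling=4,                  # currently the only valid value
        shaves_conversion=4                # currently the only valid value
    )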

vid_pipeline.start_streaming(numFrames)

Requests that the video pipeline starts, collects a frame from the connected camera (previously defined via the camera object) and transfers it into the module’s image buffer, from where it can be grabbed for preprocessing. In case the ‘EXTERNAL_IMAGE’ parameter was used and the image to be processed was not sourced from a connected camera but from the filesystem (e.g. the SD card or an FTP server), set_Image() has to be called beforehand in order to provide the image for the video pipeline. In this case, start_streaming() will then forward this image to the image buffer.

Note:

The image buffer can only hold one image at a time. Therefore, the only supported value for numFrames is 1.
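
A minimal sketch of starting the pipeline for a single camera-sourced frame; passing numFrames as a positional argument is an assumption.

    # Sketch: stream one frame from the connected camera into the image buffer.
    # Passing numFrames positionally is an assumption.
    vid_pipeline.start_streaming(1)  # the image buffer holds one image, so numFrames must be 1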

vid_pipeline.set_Image(frame)

Manually sets an image as input for the video pipeline, regardless of the stream resulting from the camera. frame must be a bytes object (not a string object), such as the result of npufs.read() when used with binary mode (“rb”). Furthermore, it cannot be a frame object, but it can be an ExternalBuffer resulting from frame.data().

Note:

set_Image(frame) can only be used in combination with the ‘EXTERNAL_IMAGE’ parameter used in camera.camera() and matching other parameters.

Note:

Please be aware of the sequential call structure of this function and its dependencies on prior function calls and initializations. The functions have to be called in the following order: vid_pipeline.init(), vid_pipeline.set_Image(), vid_pipeline.start_streaming().
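
A sketch of this file-based flow in the call order above, assuming the pipeline was initialized with a camera object created for the ‘EXTERNAL_IMAGE’ source. The camera.camera() argument form, the npufs.read() signature and the file name are assumptions for illustration.

    # Sketch: providing an image from the filesystem (e.g. the SD card) instead of
    # the camera. The camera.camera() argument form, the npufs.read() signature and
    # the file name are assumptions for illustration.
    import camera
    import npufs
    import vid_pipeline

    cam = camera.camera('EXTERNAL_IMAGE')  # assumption: exact argument form may differ
    vid_pipeline.init(                     # 1) initialize first
        camera=cam,
        target_resolution=(640, 480),
        target_format='RGB',
        target_normalization=(0.0, 1.0),
        target_output='FLOAT',
        target_layout='PLANAR',
        shaves_scaling=4,
        shaves_conversion=4
    )

    raw = npufs.read('image.raw', 'rb')    # must yield a bytes object (binary mode)
    vid_pipeline.set_Image(raw)            # 2) provide the external image
    vid_pipeline.start_streaming(1)        # 3) forward it to the image buffer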

vid_pipeline.stop_streaming()

Requests that a previously started video pipeline stops.

Note:

This function must be called after the vid_pipeline has processed the one frame that was announced in start_streaming(). This is because stop_streaming() triggers clearing of the frame buffer so that a new image can be received.
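
A short sketch of the required ordering: the announced frame is read from the pipeline before the pipeline is stopped. Reading the processed frame with read_processed() is described below.

    # Sketch: stop the pipeline only after the one announced frame has been
    # processed and read.
    processed = vid_pipeline.read_processed()  # consume the frame announced in start_streaming(1)
    vid_pipeline.stop_streaming()              # clears the frame buffer so a new image can be received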

Returns a frame object as it was before being processed by the video pipeline, e.g. as received from camera.camera() or as set by set_Image(frame). The frame is still processed by the vid_pipeline and can later be read via vid_pipeline.read_processed(). The function can only be called ‘numFrames’ times, as specified in start_streaming().

vid_pipeline.read_processed()

Returns a frame object, produced as the output of a currently streaming video pipeline after processing. The frame object and its members are defined by the parameters used in vid_pipeline.init(). The function can only be called ‘numFrames’ times, as specified in start_streaming().

The returned frame is of the class type frame. When applying frame.data() on this frame, the returned data is of the type ExternalBuffer. It can be used as the input for a neural network (see: neural_network.run()).
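
A hedged sketch of feeding the processed frame into a neural network. The network instance ‘nn’ and the argument form of its run() call are hypothetical placeholders; see Class: NeuralNet and neural_network.run() for the actual interface.

    # Sketch: using the processed frame as neural network input.
    # 'nn' is a hypothetical, previously created network instance and the run()
    # argument form is an assumption; see "Class: NeuralNet" for the actual interface.
    frame = vid_pipeline.read_processed()  # frame layout/format follow the vid_pipeline.init() parameters
    buf = frame.data()                     # ExternalBuffer, usable as network input
    result = nn.run(buf)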

See also
General description
Class: NeuralNet