Main functionalities

AI Asset Manager User Manual

Product: AI Asset Manager
Product Version: 2.2.0
Language: en-US

AI Asset Manager App

This Industrial Edge application securely retrieves AI models from different cloud providers. It extends Siemens Industrial Edge Management, which deploys, manages, and scales AI applications in a distributed edge infrastructure. Siemens Industrial Edge Management distributes AI models to the edge layer while leveraging Industrial Edge capabilities. This support allows customers to move from zero to AI without additional concerns about infrastructure or deployment.

The AI Asset Manager App pulls models from the cloud and delivers them to the AI Inference Server, which runs on the Industrial Edge device.

In addition to deploying AI models to AI Inference Server, AI Asset Manager receives metric data about the running pipeline and its environment, such as pipeline execution and hardware statistics. This metric data can be used to set up alerts that notify users about potential malfunctions.

AI Asset Manager Agent

AI Asset Manager Agent (referred to below as the Agent) is a Docker container embedded in the AI Inference Server.

The Agent handles the communication between AI Asset Manager and the API of the AI Inference Server. It receives action requests from AI Asset Manager and processes them by calling the corresponding endpoints of the AI Inference Server. In addition, it reports action and pipeline statuses back to AI Asset Manager.

The Agent is also responsible for gathering the metric data discussed above and forwarding it to AI Asset Manager.
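
Conceptually, the Agent runs a simple receive/execute/report loop. The following Python sketch illustrates that loop only; the endpoint paths, payload fields, and polling style are assumptions made for illustration and do not reflect the actual interfaces of AI Asset Manager or AI Inference Server.

    import requests

    # Hypothetical addresses; the real endpoints are internal to the products.
    ASSET_MANAGER = "https://asset-manager.example.com"
    INFERENCE_SERVER = "http://localhost:8080"

    def process_next_action():
        # Poll AI Asset Manager for the next pending action request
        # (hypothetical endpoint and payload shape).
        action = requests.get(f"{ASSET_MANAGER}/api/actions/next").json()
        if not action:
            return
        # Call the corresponding AI Inference Server endpoint (hypothetical path).
        result = requests.post(
            f"{INFERENCE_SERVER}/pipelines/{action['pipelineId']}/{action['type']}"
        )
        # Report the action status back to AI Asset Manager (hypothetical endpoint).
        requests.put(
            f"{ASSET_MANAGER}/api/actions/{action['id']}/status",
            json={"status": "done" if result.ok else "failed"},
        )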

The model deployment workflow

This section describes how an AI model is deployed to Industrial Edge:

  1. The AI engineer wants to deploy a trained and validated AI model to the Edge device (training and validation can be performed in cloud environments or with the on-site AI Designer).

  2. Depending on the environment used during training, the automation engineer supports the deployment by using static workspaces or by creating new workspaces in AI Asset Manager.

    Cloud environment

    To access the cloud that was used, the automation engineer sets up a new workspace.

    For AWS, AI Asset Manager can download the model automatically and deliver it to the AI Inference Server.

    On-site using API Integration

    The automation engineer uses the open API endpoint of AI Asset Manager to upload a pipeline to the previously created API Integration workspace (see the upload sketch after this list).

    Manual workspace

    The automation engineer can also manually upload the pipeline package that contains the model.

  3. The automation engineer executes the deployment on the AI Inference Server.

  4. The AI Asset Manager provides dashboards to monitor AI models running on devices across the fleet.
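
To illustrate the API Integration option in step 2, the following minimal Python sketch uploads a pipeline package over HTTP. The URL, endpoint path, and authentication header are placeholders, not the documented open API of AI Asset Manager; refer to the API reference for the actual contract.

    import requests

    # Placeholder endpoint and token; the real open API path and authentication
    # scheme are defined by the AI Asset Manager API reference.
    url = "https://asset-manager.example.com/api/workspaces/my-api-workspace/pipelines"
    headers = {"Authorization": "Bearer <access-token>"}

    with open("pipeline_package.zip", "rb") as package:
        response = requests.post(url, headers=headers, files={"file": package})

    response.raise_for_status()
    print("Upload accepted:", response.status_code)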

Pipeline monitoring workflow

Metric data about the deployed and running pipelines is automatically collected by the AI Asset Manager Agent and forwarded to the AI Asset Manager application.

Users can browse and visualize the received metric data in the Prometheus UI, which is available from the application.
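
Because the data is served by Prometheus, it can also be read programmatically through the standard Prometheus HTTP API (/api/v1/query). In the sketch below, the Prometheus base URL and the metric name are assumptions for illustration; the metrics actually available are those forwarded by the Agent.

    import requests

    # Assumed Prometheus base URL; /api/v1/query is the standard
    # Prometheus HTTP API for instant queries.
    PROMETHEUS = "https://asset-manager.example.com/prometheus"

    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query",
        params={"query": "pipeline_execution_duration_seconds"},  # hypothetical metric name
    )
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        print(series["metric"], series["value"])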

Under Alert Configuration, users can define alert rules for solution-critical metrics, and configure notification channels to receive alert notifications if an alert rule is violated.
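
An alert rule pairs a metric expression with a condition and a duration, in the style of standard Prometheus alerting rules. The sketch below uses that generic notation with an assumed metric name; in AI Asset Manager the equivalent rule and its notification channel are configured through the Alert Configuration screens rather than a rules file.

    groups:
      - name: pipeline-alerts
        rules:
          - alert: PipelineDown
            # Hypothetical metric: fire when a pipeline has reported
            # itself down for five consecutive minutes.
            expr: pipeline_up == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Pipeline {{ $labels.pipeline }} is not running"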