AI Inference Server GPU accelerated – 1 pipeline Hardware Requirements


Product: AI Inference Server
Language: en-US

The AI Inference Server GPU accelerated - 1 pipeline application requires at least an IPC BX-59A device with a GPU.
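Before installation, it can be worth probing whether the target device actually exposes a usable GPU. The sketch below is a minimal check, assuming the GPU is an NVIDIA card visible through the nvidia-smi utility; this document does not name the driver stack, so treat that tool choice as an assumption and adapt it for other vendors.

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if an NVIDIA GPU is visible to the driver stack.

    Assumption: the device exposes its GPU via nvidia-smi; adapt the
    probe if your BX-59A variant uses a different vendor or driver.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and bool(result.stdout.strip())

if __name__ == "__main__":
    print("GPU detected" if gpu_available() else "No usable GPU found")
```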

Demands and values:

App version (latest): 2.1.0
Hard disk memory: 3.3 GB
RAM memory (max.): 14.2 GB
CPU core demand (min.): 1 core
CPU core demand (max.): unlimited
Processor architecture: x86-64
Industrial Edge Management version (downward compatibility): ≥ Edge Management App version 1.18.10; OS versions: simatic-ipc-ied-os-2.1.0-22, iem-os-1.5.5-2
Docker Compose: ≥ latest
Docker Engine version: ≥ 20.10.5
Kernel version: ≥ 5.10
Software dependencies: Databus v3.0.0; optional: External Databus v2.0.4, SIMATIC S7 Connector v2.2.0-8, SIMATIC S7+ Connector v1.4.0-0, OPC UA Connector v2.2.0-7, Vision Connector v1.0.0, Basler Vision Connector v1.0.0+20240126.8
Hardware prerequisite: 1 network interface
Protocols: -
Required domain access: -
Ports: IPv4 & IPv6 / HTTPS / port 443 / egress and ingress
Publicly accessible: No
Further interfaces: HTTPS – REST API
User interface: Yes
UI mobile-optimized: No
Browser (optimized for): Chrome (preferred)
UI display target size: 24”
UI languages: English (default), German, Chinese
User documentation languages: English (default), German, Chinese
Tested on Industrial Edge device type & version: IPC BX-59A (minimum 32 GB RAM) with GPU
Supported IPC hardware: IPC BX-59A (minimum 32 GB RAM) with GPU
Excluded platforms: none
App labels: none
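The version and capacity rows above lend themselves to an automated pre-flight check. The following is a hedged sketch, not an official tool: the thresholds (Docker Engine 20.10.5, kernel 5.10, 3.3 GB disk, 14.2 GB peak RAM) come from the table, while the probing commands and the pass/fail reporting are illustrative assumptions for a Linux edge device.

```python
import os
import re
import shutil
import subprocess

# Minimums taken from the requirements table above.
MIN_DOCKER = (20, 10, 5)   # Docker Engine version
MIN_KERNEL = (5, 10)       # Linux kernel version
MIN_DISK_GB = 3.3          # hard disk memory for the app
PEAK_RAM_GB = 14.2         # maximum RAM the app may claim

def version_tuple(text: str) -> tuple:
    """Parse '20.10.5' or '5.10.0-rc1' into a comparable int tuple."""
    parts = []
    for token in text.strip().split("."):
        match = re.match(r"\d+", token)
        if not match:
            break
        parts.append(int(match.group()))
    return tuple(parts)

def docker_engine_version() -> tuple:
    # Raises if the Docker CLI is missing or the daemon is unreachable.
    out = subprocess.run(
        ["docker", "version", "--format", "{{.Server.Version}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return version_tuple(out)

def total_ram_gb() -> float:
    # Linux-specific: /proc/meminfo lists MemTotal in kB.
    with open("/proc/meminfo") as fh:
        for line in fh:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024 ** 2
    raise RuntimeError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    checks = [
        ("Docker Engine >= 20.10.5", docker_engine_version() >= MIN_DOCKER),
        ("Kernel >= 5.10", version_tuple(os.uname().release) >= MIN_KERNEL),
        (f"Free disk >= {MIN_DISK_GB} GB",
         shutil.disk_usage("/").free / 1024 ** 3 >= MIN_DISK_GB),
        (f"Total RAM >= {PEAK_RAM_GB} GB (app peak)",
         total_ram_gb() >= PEAK_RAM_GB),
    ]
    for label, ok in checks:
        print(f"{'OK  ' if ok else 'FAIL'} {label}")
```

Run it directly on the target device; any FAIL line flags which requirement to revisit before deploying the app.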
Disclaimer
The GPU-enabled version of AI Inference Server described in this document has not been onboarded to the China marketplace, due to export restrictions that prevent the distribution or sale of this product within China.
Notice
Only one AI Inference Server application instance (from the above list) can be installed on an edge device. That is, you can install MLFB 6AV2170-0LA10-0AA0 (AI Inference Server), MLFB 6AV2170-0LA10-1AA0 (AI Inference Server – 3 pipelines), or MLFB 6AV2170-0LA11-0AA0 (AI Inference Server GPU & Vision – 1 pipeline) on a single edge device, but not more than one of them. You cannot, for example, install MLFB 6AV2170-0LA11-0AA0 (AI Inference Server GPU & Vision – 1 pipeline) and MLFB 6AV2170-0LA10-1AA0 (AI Inference Server – 3 pipelines) on the same device.
CAUTION
AI Inference Server should not be used in mission-critical scenarios with high risks, i.e. the development, construction, maintenance, or operation of systems whose failure could lead to a life-threatening situation or to catastrophic damage ("critical application"). Examples of critical applications: use in avionics, navigation, autonomous vehicle applications, AI solutions for automotive products, the military, medicine, life support, or other life-critical applications.