AI Containers™

Choose your AI Container(s)™ via the AnyConnect Console

Deploying AI Containers™ securely, at scale

In edge inference, the classification and/or decision of a neural network typically happens at the edge (on the camera). While inferring on a single device is relatively straightforward, deploying different neural networks from various frameworks, securely, over the air, to millions of different edge devices in the field is a lot more complicated.

AnyConnect provides a solution to this problem through an AI Store™, enabling users to deploy AI Containers™ securely to the edge, at scale, with access control. The AnyConnect Console monitors all devices on the platform, and provides the ability to centrally view, control and manage deployed containers.

  1. Select AI Container(s)™ from our AI Store™.
  2. Optionally, upload your own AI Container(s)™ to our AI Store™.
  3. Deploy the selected AI Containers™ to your cameras with AnyConnect’s OTA Programming feature (see the sketch below).
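
To make the workflow concrete, here is a minimal sketch of what step 3 could look like against an API-based console. The endpoint, payload fields, and authentication shown are illustrative assumptions, not AnyConnect’s documented API.

```python
# Hypothetical sketch of requesting an OTA deployment of one AI Container
# to a set of cameras. Endpoint and field names are assumptions.
import requests

CONSOLE_URL = "https://console.example.com/api/v1"  # hypothetical endpoint


def deploy_container(container_id: str, camera_ids: list[str], token: str) -> dict:
    """Ask the console to deploy an AI Container to the given cameras, OTA."""
    response = requests.post(
        f"{CONSOLE_URL}/deployments",
        headers={"Authorization": f"Bearer {token}"},
        json={"container_id": container_id, "targets": camera_ids},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()       # e.g., a deployment id to monitor later
```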

AnyConnect updates your AI Containers™ for Cloud Inference

A Repository of AI Containers™

AnyConnect’s Smarter AI™ Camera Platform offers a repository of AI Containers™ in its cloud. The repository holds containers with both free and paid trained neural networks, as well as some software logic.

The repository also supports leading deep learning frameworks, such as TensorFlow, PyTorch, Keras, MXNet, ONNX, and more.

AnyConnect OTA Programming deploys your AI Containers™ for Edge Inference

AI Container™ at the Edge

AnyConnect’s Smarter AI™ Camera Platform deploys AI Containers™ to the edge seamlessly, securely, and over-the-air. The system automatically delivers each container, with the right framework, to the right edge device and its neural network accelerator. This system supports heterogeneous edge device deployments with different types of edge inference accelerators, such as CPUs, GPUs, Intel Movidius, and Google Coral Edge TPU.

The AnyConnect Console allows you to manage and monitor the deployment of AI Containers™ to the edge. The management system built into the platform automatically converts trained neural networks and their associated logic to the frameworks required by your edge device infrastructure.
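
One conversion step of the kind the platform could automate is exporting a trained PyTorch model to ONNX (both frameworks named above) so a different edge runtime can consume it. This sketch uses the standard PyTorch ONNX exporter; the tiny model is a stand-in for a real trained network.

```python
# Export a placeholder PyTorch model to ONNX for an edge runtime.
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),  # stand-in for a real network
    torch.nn.ReLU(),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example camera frame shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
```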

Our platform will also provide statistics on the quality of inferences centrally.

AI Containers™ in depth

AI Containers™ help you solve the plumbing, deployment, and security challenges related to deploying AI applications to the edge. Although chipmakers have created common AI runtimes, such as Intel OpenVINO, the Qualcomm Neural Processing SDK, and the NVIDIA JetPack SDK, to simplify porting AI Models to their chips, many problems remain: deploying an AI Application to the edge on a new product still requires long, complicated, and costly engineering effort. Let’s look at how an AI Container™ is put together.

AnyConnect’s AI Containers™ have four main parts: Ingress, Egress, the Configuration Interface, and the AI Compute, Security & Storage Interface. AI Containers™ enable system administrators to deploy an AI Application to a new product graphically, in minutes. Let’s dive into each of those four parts.

Ingress

All video, audio, and data streams (e.g., sensor data, positioning, etc.) required by an AI Application are explicitly defined in the Ingress part of AI Containers™. The Ingress side specifies the format of each stream in detail, such as resolution, framerate, and color for video, or resolution and number of channels for audio. As an administrator, you’ll know before deploying an AI Container™ Over-the-Air (OTA) whether it is compatible with the camera product: does the product have a suitable AI Accelerator, does it have the required sensors/imagers/microphones, and does it have enough available compute capacity?
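
As an illustration, those compatibility questions could be modeled along these lines; the stream and device fields are assumptions made for the sketch, not AnyConnect’s actual Ingress schema.

```python
# Sketch: declare a required video ingress and check it against a device
# profile before an OTA deployment. All field names are illustrative.
from dataclasses import dataclass


@dataclass
class VideoIngress:
    width: int
    height: int
    framerate: int
    color: str  # e.g., "RGB" or "GRAY"


@dataclass
class DeviceProfile:
    max_width: int
    max_height: int
    max_framerate: int
    accelerators: list[str]  # e.g., ["gpu", "movidius"]
    free_tops: float         # spare compute, in TOPS


def is_compatible(ingress: VideoIngress, required_accel: str,
                  required_tops: float, device: DeviceProfile) -> bool:
    """Right sensors? Right accelerator? Enough spare compute?"""
    return (ingress.width <= device.max_width
            and ingress.height <= device.max_height
            and ingress.framerate <= device.max_framerate
            and required_accel in device.accelerators
            and required_tops <= device.free_tops)
```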

Egress

The format of AI Models’ inferences (what the models predict, recognize, or classify) is standardized and directly usable to create events and notifications. On top of that, the egress interface provides inference quality metrics, which are recorded in AnyConnect’s cloud to give an understanding of each AI Model’s inference quality per camera and over time.
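
A standardized egress record, and the per-camera quality metrics derived from it, might look roughly like this; the field names and metric choices are assumptions for illustration.

```python
# Sketch: a standardized inference record plus simple quality metrics that
# a cloud backend could aggregate per camera, over time. Illustrative only.
import time
from dataclasses import dataclass, field


@dataclass
class Inference:
    label: str         # what the model recognized/classified
    confidence: float  # model score in [0, 1]
    camera_id: str
    timestamp: float = field(default_factory=time.time)


def quality_metrics(inferences: list[Inference]) -> dict:
    """Aggregate metrics suitable for per-camera, over-time trend reports."""
    if not inferences:
        return {"count": 0}
    scores = [i.confidence for i in inferences]
    return {
        "count": len(scores),
        "mean_confidence": sum(scores) / len(scores),
        "low_confidence_ratio": sum(s < 0.5 for s in scores) / len(scores),
    }
```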

Configuration Interface

As AI Models come from different suppliers and AI Containers™ can host multiple models, configuring these models and the associated software logic is complicated. AI Containers™ standardize the configuration of AI Models and the embedded software logic, enabling easy deployments that don’t require an engineering department.
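
For instance, a single standardized configuration covering several models from different suppliers, plus the embedded logic that ties them together, might look like this sketch (model names and keys invented for illustration):

```python
# Sketch of one standardized configuration for a multi-model container.
# Model names, keys, and the logic expression are illustrative assumptions.
container_config = {
    "models": {
        "person_detector": {"threshold": 0.6, "max_detections": 20},
        "plate_reader": {"threshold": 0.8, "regions": ["EU", "US"]},
    },
    "logic": {
        # embedded software logic combining model outputs into one event
        "alert_when": "person_detector AND plate_reader",
        "cooldown_seconds": 30,
    },
}
```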

AI Compute, Security & Storage

AI Containers™ do not replace or add layers between AI Models and their runtimes; they offer unified, standardized, and monitored access to compute, storage (slow storage such as an SD card and fast storage such as an NVMe SSD), and security resources. Security is critical, as many AI Models come with copy protection, either encryption or Digital Rights Management (DRM). By providing access to Crypto Cores, such as Arm TrustZone and Trusted Platform Modules (TPMs), AI Containers™ allow protected AI Models to run on almost any device.
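
The kind of unified, monitored access described above could be expressed as a single interface the container programs against, instead of device-specific APIs. This is an illustrative sketch, not AnyConnect’s actual interface.

```python
# Sketch: one abstract interface through which a container reaches compute,
# storage, and crypto resources. Method names are illustrative assumptions.
from abc import ABC, abstractmethod


class ResourceInterface(ABC):
    @abstractmethod
    def accelerator(self, kind: str):
        """Return a handle to an AI accelerator, e.g. 'gpu' or 'tpu'."""

    @abstractmethod
    def storage_path(self, tier: str) -> str:
        """Map 'slow' to e.g. an SD card mount, 'fast' to an NVMe SSD mount."""

    @abstractmethod
    def decrypt_model(self, blob: bytes) -> bytes:
        """Unwrap a protected model via the device crypto core (TrustZone/TPM)."""
```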

AI Containers™ vs. Docker

Containers, and Docker in particular, have been a significant (r)evolution in IT and in the way we code, test, deploy, and manage applications. It started with the Virtual Machine revolution 20 years ago, which enabled IT administrators to optimize IT infrastructure (compute, storage, and network) as well as increase availability.

Containers, often referred to as OS-level virtualization, remove the guest OS from virtualization, enabling applications packaged in images to run in isolation on a shared kernel. This technology allowed for software-centric workflows (code, test, deployment, and maintenance), known today as DevOps.

Docker, and containers in general, are very useful for deploying applications on IT and cloud systems; however, deploying AI Applications to embedded devices with real-time video, audio, and sensors has its own set of challenges, mostly related to plumbing and configuration:

  • Ingress: which imager/sensor has to be connected to an AI Application / AI Model?
  • Egress: how to standardize the format of inferences and inference quality metrics from the AI Application / AI Model?
  • Data handling: how to resize/transform data streams easily, efficiently, and automatically on an embedded device?
  • Scale: how to deploy Edge AI without requiring expensive and hard-to-find software developers?
  • Configuration: how to standardize the configuration of these AI Applications?
  • Resources: how to provide easy access to AI Acceleration, Storage, Crypto Cores, and Vaults?

Unfortunately, Docker containers do not solve these challenges; worse, they add another layer of software that consumes compute and adds complexity, lengthening development cycles. Finally, a good way to understand AI Containers™ is to compare them to Android/iOS applications: most interactions between the device hardware and the application (e.g., camera, microphone, etc.) are standardized, and AI Containers™ do the same for AI Applications.

We designed AI Containers™ so that System Administrators can deploy AI Applications to the edge with a graphical web tool.

AI Containers™ vs. Kubernetes

Kubernetes is an open-source container orchestration system. In other words, it’s a convenient tool to orchestrate the deployment of containers on host machines, whether in a local data center or in the cloud. Kubernetes has a lot of fancy features that are heavily biased towards enterprise-class IT, such as automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes, which is the foundation of many PaaS and IaaS systems, works with Docker and other container systems.

Deploying AI Applications to millions of embedded devices in the field with intermittent network connections (e.g., 4G/LTE, 5G, Wi-Fi), Over-the-Air (OTA) is a completely different ball game. Furthermore, as compute capability is limited and varies a lot on embedded devices, you have to understand the compute performance level required, as well as the sensor types needed to run a specific AI Application. On top of that, system administrators want to have one single interface to manage their camera networks, like deploying AI Applications as well as Firmware and OS updates.

Console is an API-based web portal that lets you configure and manage large deployments of AI-driven, connected camera networks. Console enables system administrators to push AI Containers™ to edge devices Over-the-Air. On top of that, Console allows system administrators to update OS, firmware, and camera apps “en masse” and Over-the-Air. AnyConnect offers a wealth of AI Applications in its marketplace, the AI Store™; deploying an AI Application through the AI Store™ is almost as easy as installing an app on your phone. Because Console is API-based, it also integrates with existing deployment management systems.
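
As a final illustration, an “en masse” firmware rollout through an API-based console could look like the following sketch. The endpoint and payload are hypothetical, not AnyConnect’s documented API.

```python
# Hypothetical sketch: schedule an OTA firmware rollout for every camera
# carrying a given fleet tag. Endpoint and fields are assumptions.
import requests


def rollout_firmware(fleet_tag: str, version: str, token: str) -> str:
    response = requests.post(
        "https://console.example.com/api/v1/firmware/rollouts",  # hypothetical
        headers={"Authorization": f"Bearer {token}"},
        json={"fleet_tag": fleet_tag, "version": version},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["rollout_id"]  # track rollout progress in Console
```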