Smart spaces are delivering unprecedented value, creating a continuous flow of information between physical and digital worlds.
By incorporating technologies such as the Internet of Things (IoT), cloud computing, machine learning, and AI at the edge, world-class businesses can capture digital data and turn it into actionable insights.
However, the process is complicated because edge environments are distributed outside the confines of an enclosed data center and must be managed across many locations.
Innodisk, a provider of industrial embedded Flash and DRAM solutions, sought to address these problems while building a smart factory for its subsidiary, Aetina. Working closely with NVIDIA, Innodisk embarked on a mission to build a high-performance, end-to-end vision AI solution for industrial settings.
AI projects are typically split into two phases: AI model development and AI model deployment. This post covers the common challenges associated with each phase and how Innodisk used different tools and technologies to address them.
Developing an AI model
The project started with the development of a product inspection solution able to meet strict production quality standards. The Flash and DRAM products that Innodisk produces are small and complex components designed for harsh environments and applications.
Innodisk required a solution capable of processing high-resolution image recognition tasks quickly and faced several common issues when developing an edge AI model. These included insufficient raw data, long data processing times, costly model training, high compute power needs, and verifying that the model was ready for deployment.
Using the NVIDIA TAO Toolkit, available through NVIDIA AI Enterprise, Aetina created production-ready AI models customized to Innodisk's needs in just days, a process that often takes months. They sped up development by fine-tuning NVIDIA pretrained models rather than training a model from scratch.
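As a rough illustration of this fine-tuning step, the Python sketch below shells out to the TAO Toolkit launcher to train a detection model from pretrained weights. The model architecture, spec file, output directory, and encryption key are placeholders rather than Aetina's actual configuration, and the exact subcommands and flags vary across TAO Toolkit releases.

```python
import os
import subprocess

# Illustrative only: the architecture (detectnet_v2), spec file, results
# directory, and key are placeholders, not the values used by Aetina.
# Exact subcommands and flags differ between TAO Toolkit releases.
SPEC_FILE = "/workspace/tao-experiments/specs/train_spec.txt"
RESULTS_DIR = "/workspace/tao-experiments/results"
KEY = os.environ["TAO_MODEL_KEY"]  # key protecting the pretrained model

subprocess.run(
    [
        "tao", "detectnet_v2", "train",
        "-e", SPEC_FILE,    # experiment spec: dataset, pretrained weights, hyperparameters
        "-r", RESULTS_DIR,  # where checkpoints and logs are written
        "-k", KEY,
        "--gpus", "1",
    ],
    check=True,
)
```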
After training, the model was integrated into an application. Aetina containerized the application for easy deployment at the edge. Then they uploaded the customized container to their private registry.
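A minimal sketch of that containerization step is shown below, using the Docker SDK for Python to build the application image and push it to a private registry. The registry URL, image name, and credentials are hypothetical examples, not the registry Aetina used.

```python
import docker

# Illustrative only: registry, image name, and credentials are placeholders.
# Assumes a local Docker daemon and the docker Python SDK (pip install docker).
REGISTRY = "registry.example.com"     # hypothetical private registry
IMAGE = f"{REGISTRY}/inspection-app"  # hypothetical application image
TAG = "1.0.0"

client = docker.from_env()

# Build the application image from a local Dockerfile.
image, build_logs = client.images.build(path=".", tag=f"{IMAGE}:{TAG}")

# Authenticate against the private registry and push the image.
client.login(username="ci-user", password="********", registry=REGISTRY)
for line in client.images.push(IMAGE, tag=TAG, stream=True, decode=True):
    print(line.get("status", ""))
```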
Deploying an AI model
Having finalized the application, the next step was finding a solution for deploying and managing it at scale.
Aetina turned to cloud-native technology to manage its edge deployments. In this case, they used Kubernetes, an open source system for orchestrating containerized applications, and created Helm charts to package and deploy the application.
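The sketch below shows what such a Helm-based deployment might look like, driven from Python through the Helm 3 CLI. The release name, chart path, namespace, and values file are illustrative placeholders rather than Aetina's actual chart.

```python
import subprocess

# Illustrative only: release name, chart path, namespace, and values file
# are placeholders. Assumes Helm 3 and a kubeconfig pointing at the target
# edge cluster.
RELEASE = "inspection-app"
CHART = "./charts/inspection-app"     # hypothetical Helm chart for the containerized app
VALUES = "./values/edge-site-a.yaml"  # hypothetical per-site overrides (image tag, registry, resources)

subprocess.run(
    [
        "helm", "upgrade", "--install", RELEASE, CHART,
        "--namespace", "edge-ai",
        "--create-namespace",
        "-f", VALUES,
    ],
    check=True,
)
```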
With the NVIDIA TAO Toolkit, Aetina quickly moved from model development to deployment. But model deployment presents its own complexities: enterprises often grapple with long deployment times, security issues, and high deployment and monitoring costs.
Aetina looked to NVIDIA Fleet Command to alleviate these issues.
Fleet Command is a managed platform for container orchestration that streamlines the provisioning and deployment of systems and AI applications at the edge. After an application is deployed, the AI life cycle is simplified through over-the-air application updates, remote monitoring and management, and strict protection against data leakage and forgery.
Using Fleet Command, Aetina deployed its AI model quickly and easily.
Completing the end-to-end AI workflow
Using this vision AI solution, Innodisk now performs accurate inspections in less than 1 second, enabling the company to build more products efficiently and cost-effectively.
Before this, the factory relied on a human inspector stationed on the factory line, who took 10 seconds to do the same task. With this solution, factory workers are freed from monotonous tasks and can focus on more important functions.
The process has also led Aetina to build an end-to-end solution that other organizations can use to transform their environments into smart spaces.
To learn more about this solution, check out the on-demand GTC session End-to-End Smart Factory AI Application: From Model Development to Deployment with Aetina.