Enhanced Security and Streamlined Deployment of AI Agents with NVIDIA AI Enterprise

Simplified Management of AI Agent Pipelines

The newly launched NVIDIA NIM Operator simplifies the deployment and management of the NIM microservices used to build AI pipelines on Kubernetes. NIM Operator automates the deployment of AI pipelines and enhances performance with capabilities such as intelligent model pre-caching, which lowers initial inference latency and speeds up autoscaling. You can choose to autoscale based on CPU, GPU, or NIM-specific metrics such as NIM max requests or KV cache usage. It also simplifies upgrades with rolling updates: change the version number of a NIM microservice, and the NIM Operator rolls out the new version across the cluster.
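As a rough sketch of that upgrade flow, the snippet below uses the Kubernetes Python client to patch the version of a NIM microservice custom resource. The API group, version, plural, resource name, and field layout shown here are assumptions for illustration; take the real values from the CRDs that the NIM Operator installs in your cluster.

# Illustrative sketch: trigger a NIM Operator rolling upgrade by patching the
# version of a NIM microservice custom resource. The CRD group/version/plural
# and the spec field layout are assumptions; check your cluster's CRDs.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

api.patch_namespaced_custom_object(
    group="apps.nvidia.com",         # assumed API group for NIM Operator resources
    version="v1alpha1",              # assumed CRD version
    namespace="nim-service",         # namespace where the NIM microservice runs
    plural="nimservices",            # assumed plural name of the custom resource
    name="meta-llama3-8b-instruct",  # example name of the deployed NIM microservice
    body={"spec": {"image": {"tag": "1.1.0"}}},  # new version; the operator rolls out the change
)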

Security and API Stability for AI Models

NVIDIA AI Enterprise includes monthly feature branch releases of AI and data science software, which contain top-of-tree software updates and are ideal for AI developers who want the latest features. Each release is maintained by NVIDIA for one month, until the next version ships, and available security fixes are applied before each release. Although this cadence is great for customers who want to stay on the leading edge with the newest capabilities, there is no guarantee that APIs will not change from month to month. That can make it challenging to build enterprise solutions that must remain secure and reliable over time, as developers may need to adjust applications after an update.

To address this need, NVIDIA AI Enterprise also includes production branches of AI software. Production branches provide API stability and regular security updates and are meant for deploying AI in production, where stability is required. Production branches are released every 6 months and have a 9-month lifecycle. Throughout that lifecycle, NVIDIA continuously monitors critical and high common vulnerabilities and exposures (CVEs) and releases monthly security patches. As a result, the AI frameworks, libraries, models, and tools included in NVIDIA AI Enterprise can receive security fixes without the risk of breaking an API.

Long-Term Support for Highly Regulated Industries

Customers in highly regulated industries often require software to be supported for even longer periods. For these customers, NVIDIA AI Enterprise also includes long-term support branches (LTSB), which are supported with stable APIs for 3 years.

LTSB 1 coincided with the first release of NVIDIA AI Enterprise in 2021 and includes foundational AI components. LTSB 2, as part of this latest release of NVIDIA AI Enterprise, adds Holoscan, which includes Holoscan SDK and Holoscan Deployment Stack. Holoscan is the NVIDIA AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core capabilities to run real-time streaming, imaging, and other applications.
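To give a sense of how applications are structured on the Holoscan SDK, here is a minimal sketch using its Python API, modeled on the SDK's ping example: two operators connected in an application graph, with one streaming values to the other. Module paths and class names should be verified against the Holoscan SDK version you install.

# Minimal sketch of a Holoscan SDK pipeline in Python: a transmit operator
# streams values to a receive operator through an application graph.
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class TxOp(Operator):
    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        op_output.emit("frame", "out")  # stand-in for sensor data

class RxOp(Operator):
    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        print("received:", op_input.receive("in"))

class PingApp(Application):
    def compose(self):
        tx = TxOp(self, CountCondition(self, 5), name="tx")  # emit 5 messages, then stop
        rx = RxOp(self, name="rx")
        self.add_flow(tx, rx)  # connect tx's output port to rx's input port

if __name__ == "__main__":
    PingApp().run()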

More Ways to Deploy NIM Microservices

NVIDIA AI Enterprise is supported both on premises and on public cloud services. You can deploy NIM microservices and other software containers into self-managed Kubernetes running on cloud instances, but many customers prefer to use Kubernetes managed by the cloud provider. Google Cloud has now integrated NVIDIA NIM into Google Kubernetes Engine (GKE) to give enterprise customers a simplified path for deploying optimized models directly from the Google Cloud Marketplace.
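However a NIM microservice is deployed, whether on self-managed Kubernetes or on GKE, applications typically reach it through its OpenAI-compatible endpoint. The sketch below uses the openai Python client; the base URL and model name are placeholders standing in for whatever your deployment actually exposes.

# Illustrative client call against a deployed NIM endpoint; NIM microservices
# for LLMs expose an OpenAI-compatible API. The URL and model name below are
# placeholders for your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.example.internal:8000/v1",  # assumed in-cluster service address
    api_key="not-used",  # many in-cluster deployments do not require a key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # model served by the NIM microservice
    messages=[{"role": "user", "content": "Summarize this deployment in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)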

Availability

The latest version of NVIDIA AI Enterprise is available now. License holders can download production branch versions of most AI software containers right away, but the NIM microservices are expected to be added to the production branch at the end of November. As always, you also get the benefit of enterprise support, which includes guaranteed response times and access to NVIDIA experts for timely issue resolution.

Conclusion

NVIDIA AI Enterprise offers a range of features that make it easier to manage and deploy AI agent pipelines, ensuring security and API stability for AI models. With the addition of LTSB, customers in highly regulated industries can have software supported for even longer periods. The platform is supported on both on-premises and public cloud services, providing flexibility and ease of deployment.

Frequently Asked Questions

Q: What is the main benefit of NVIDIA AI Enterprise?
A: The main benefit of NVIDIA AI Enterprise is its ability to simplify the management and deployment of AI agent pipelines, ensuring security and API stability for AI models.

Q: What is the difference between feature branch and production branch in NVIDIA AI Enterprise?
A: Feature branch releases contain top-of-tree software updates and are ideal for AI developers who want the latest features, while production branches ensure API stability and regular security updates and are meant for deploying AI in production when stability is required.

Q: How long is the lifecycle of each production branch in NVIDIA AI Enterprise?
A: Each production branch in NVIDIA AI Enterprise has a 9-month lifecycle.

Q: What is LTSB in NVIDIA AI Enterprise?
A: LTSB stands for long-term support branch, a branch of AI software supported with stable APIs for 3 years. LTSB 1 includes foundational AI components, and LTSB 2 adds Holoscan.

Q: What is Holoscan in NVIDIA AI Enterprise?
A: Holoscan is the NVIDIA AI sensor processing platform that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core capabilities to run real-time streaming, imaging, and other applications.
