AI Agents Driving New Enterprise Applications
NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises that want to run the Google Gemini family of AI models on premises, using the NVIDIA Blackwell HGX and DGX platforms together with NVIDIA Confidential Computing for data safety.
Confidential Computing for Data Safety
With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions, and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.
Unlocking the Full Potential of Agentic AI
“By bringing our Gemini models on premises with NVIDIA Blackwell’s breakthrough performance and confidential computing capabilities, we’re enabling enterprises to unlock the full potential of agentic AI,” said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. “This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease.”
Agentic AI systems can reason, adapt, and make decisions in dynamic environments. For example, in enterprise IT support, an agentic AI system can diagnose issues, execute fixes, and escalate complex problems autonomously. Similarly, in finance, an agentic AI system can investigate anomalies and take proactive measures such as blocking transactions or adjusting fraud detection rules in real time.
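To make that pattern concrete, here is a minimal sketch of the diagnose, fix, escalate loop in plain Python. The ticket fields, diagnoses, and remediations are illustrative placeholders, not part of any NVIDIA or Google Cloud product; in practice the diagnosis step would call a model such as Gemini rather than match keywords.

```python
# Minimal sketch of an agentic IT-support loop: observe, reason, act, escalate.
# All names and thresholds are illustrative, not part of any vendor API.
from dataclasses import dataclass


@dataclass
class Ticket:
    description: str
    severity: int  # 1 (low) .. 5 (critical)


def diagnose(ticket: Ticket) -> str:
    """Stand-in for a model call that classifies the issue."""
    if "disk" in ticket.description.lower():
        return "disk_full"
    if "login" in ticket.description.lower():
        return "auth_failure"
    return "unknown"


def execute_fix(diagnosis: str) -> bool:
    """Attempt an automated remediation; return True on success."""
    known_fixes = {
        "disk_full": "rotate logs and clear temp files",
        "auth_failure": "reset the session token",
    }
    if diagnosis in known_fixes:
        print(f"Applying fix: {known_fixes[diagnosis]}")
        return True
    return False


def handle(ticket: Ticket) -> str:
    """Agent loop: diagnose, try a fix, escalate anything it cannot resolve."""
    diagnosis = diagnose(ticket)
    if ticket.severity >= 4 or not execute_fix(diagnosis):
        return f"escalated to a human operator (diagnosis: {diagnosis})"
    return f"resolved autonomously (diagnosis: {diagnosis})"


if __name__ == "__main__":
    print(handle(Ticket("User cannot log in after password change", severity=2)))
    print(handle(Ticket("Database server disk usage at 99%", severity=5)))
```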
The On-Premises Dilemma
While many organizations can already use Gemini models and their multimodal reasoning in the cloud, those with stringent security or data sovereignty requirements have so far been unable to do so. With this announcement, Google Cloud will be one of the first cloud service providers to offer confidential computing capabilities that secure agentic AI workloads in every environment, whether cloud or hybrid.
AI Observability and Security for Agentic AI
Scaling agentic AI in production requires robust observability and security to ensure reliable performance and compliance. Google Cloud today announced a new GKE Inference Gateway built to optimize the deployment of AI inference workloads with advanced routing and scalability.
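From a client's point of view, a request to such a gateway looks like any other HTTP inference call; the gateway sits behind a single endpoint and handles the routing and scaling. The sketch below is a hypothetical client call: the URL, model name, and payload shape are assumptions for illustration, not the gateway's documented API.

```python
# Illustrative client call to an inference endpoint exposed behind a gateway.
# The URL, model name, and payload shape are placeholders; consult the
# GKE Inference Gateway documentation for the actual routing configuration.
import requests

GATEWAY_URL = "http://inference-gateway.example.internal/v1/completions"  # placeholder

payload = {
    "model": "example-model",  # a model-aware gateway can route on this field
    "prompt": "Summarize today's anomalous transactions.",
    "max_tokens": 128,
}

response = requests.post(GATEWAY_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```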
Conclusion
This collaboration between NVIDIA and Google Cloud brings agentic AI to enterprises that want to run the Google Gemini family of AI models on premises. With confidential computing capabilities, these enterprises can unlock the full potential of agentic AI while keeping their data safe and secure.
FAQs
Q: What is agentic AI?
A: Agentic AI systems can reason, adapt, and make decisions in dynamic environments, unlike traditional AI models that simply perceive inputs or generate content based on learned knowledge.
Q: What is confidential computing?
A: Confidential computing protects sensitive model code and data while they are in use, so they cannot be viewed or modified by unauthorized parties.
Q: What are the benefits of this collaboration?
A: This collaboration enables enterprises to innovate securely without compromising on performance or operational ease, while maintaining data safety and security.
Q: What is the GKE Inference Gateway?
A: The GKE Inference Gateway is a new solution that optimizes the deployment of AI inference workloads with advanced routing and scalability.

