Ready, Set, NIM!
Although AI innovation is moving at an incredible pace, it can still be difficult for the PC developer community to get started with the technology.
NVIDIA NIM Microservices
Bringing AI models from research to the PC requires curation of model variants, adaptation to manage all of the input and output data, and quantization to optimize resource usage. In addition, models must be converted to work with optimized inference backend software and connected to new AI application programming interfaces (APIs). This takes substantial effort, which can slow AI adoption.
NVIDIA NIM microservices help solve this issue by providing prepackaged, optimized, easily downloadable AI models that connect to industry-standard APIs. They’re optimized for performance on RTX AI PCs and workstations, and include the top AI models from the community, as well as models developed by NVIDIA.
NIM Microservices Support
NIM microservices support a range of AI applications, including large language models (LLMs), vision language models, image generation, speech processing, retrieval-augmented generation (RAG)-based search, PDF extraction, and computer vision. Ten NIM microservices for RTX are available today, covering language and image generation, computer vision, speech AI, and more. NIM microservices are also accessible through top AI ecosystem tools and frameworks.
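Because NIM microservices expose industry-standard APIs, applications can talk to a locally running model with the same request shape used by popular hosted LLM services. The sketch below assumes an OpenAI-style chat-completions endpoint at a local URL and a placeholder model name; check the documentation of the specific NIM microservice you download for the actual endpoint and model identifier.

```python
import json
from urllib import request

# Assumed values -- the real endpoint and model name depend on which
# NIM microservice is running locally.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"  # placeholder model identifier

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local NIM endpoint and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        NIM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# ask("What is a NIM microservice?")  # requires a NIM running on localhost
```

Because the request format matches the widely used chat-completions schema, existing client libraries and tools that speak that API can typically be pointed at the local endpoint with only a URL change.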
AI Blueprints Will Offer Pre-Built Workflows
NVIDIA AI Blueprints give AI developers a head start in building generative AI workflows with NVIDIA NIM microservices.
PDF to Podcast AI Blueprint
The PDF to podcast AI Blueprint will transform documents into audio content so users can learn on the go. By extracting text, images, and tables from a PDF, the workflow uses AI to generate an informative podcast. For deeper dives into topics, users can then have an interactive discussion with the AI-powered podcast hosts.
3D-Guided Generative AI
The AI Blueprint for 3D-guided generative AI will give artists finer control over image generation. While AI can generate amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects laid out in a 3D renderer like Blender to guide AI image generation. The artist can create 3D assets by hand or generate them using AI, place them in the scene, and set the 3D viewport camera. Then, a prepackaged workflow powered by the FLUX NIM microservice will use the current composition to generate high-quality images that match the 3D scene.
NVIDIA NIM on RTX With Windows Subsystem for Linux
One of the key technologies that enables NIM microservices to run on PCs is Windows Subsystem for Linux (WSL), which allows the Linux-based microservice containers to run directly on Windows.
Project G-Assist Expands PC AI Features With Custom Plug-Ins
As part of Project G-Assist, an experimental version of the System Assistant feature for GeForce RTX desktop users is now available via the NVIDIA App, with laptop support coming soon.
G-Assist: AI-Powered PC Assistant
G-Assist helps users control a broad range of PC settings — including optimizing game and system settings, charting frame rates and other key performance statistics, and controlling select peripheral settings such as lighting — all via basic voice or text commands.
Get Started with G-Assist
G-Assist is built on NVIDIA ACE — the same AI technology suite game developers use to breathe life into non-player characters. Unlike AI tools that use massive cloud-hosted AI models that require online access and paid subscriptions, G-Assist runs locally on a GeForce RTX GPU. This means it’s responsive, free, and can run without an internet connection. Manufacturers and software providers are already using ACE to create custom AI Assistants like MSI’s AI Robot engine, the Streamlabs Intelligent AI Assistant, and upcoming capabilities in HP’s Omen Gaming hub.
Building and Customizing G-Assist Plug-Ins
G-Assist was built for community-driven expansion. Get started with the NVIDIA GitHub repository, which includes samples and instructions for creating plug-ins that add new functionality. Developers can define functions in a simple JSON format and drop configuration files into a designated directory, allowing G-Assist to automatically load and interpret them. Developers can even submit plug-ins to NVIDIA for review and potential inclusion.
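To make the drop-in mechanism concrete, the sketch below writes a plug-in manifest as JSON. The field names, function name, and directory layout here are illustrative assumptions, not the official schema — consult the samples in the G-Assist GitHub repository for the real format.

```python
import json
from pathlib import Path

# Illustrative manifest only: every field name below is an assumption,
# not the documented G-Assist plug-in schema.
manifest = {
    "functions": [
        {
            "name": "set_keyboard_lighting",  # hypothetical function
            "description": "Set the RGB lighting color of the keyboard.",
            "parameters": {
                "color": {
                    "type": "string",
                    "description": "Color name, e.g. 'red'",
                }
            },
        }
    ]
}

def write_manifest(directory: Path) -> Path:
    """Drop a manifest.json into a plug-in directory for G-Assist to scan."""
    directory.mkdir(parents=True, exist_ok=True)
    path = directory / "manifest.json"
    path.write_text(json.dumps(manifest, indent=2))
    return path

# write_manifest(Path("plugins/keyboard_lighting"))  # hypothetical location
```

The design idea is that the assistant discovers new capabilities purely from declarative files, so extending G-Assist requires no changes to the assistant itself — only a new manifest (and any executable it invokes) in the scanned directory.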
Conclusion
NVIDIA NIM microservices for RTX are available at build.nvidia.com, providing developers and AI enthusiasts with powerful, ready-to-use tools for building AI applications. Download Project G-Assist through the NVIDIA App’s "Home" tab, in the "Discovery" section. G-Assist currently supports GeForce RTX desktop GPUs, as well as a variety of voice and text commands in the English language. Future updates will add support for GeForce RTX laptop GPUs, new and enhanced G-Assist capabilities, as well as support for additional languages.
FAQs
Q: What are NIM microservices?
A: NIM microservices are prepackaged, optimized, easily downloadable AI models that connect to industry-standard APIs.
Q: What applications do NIM microservices support?
A: NIM microservices support a range of AI applications, including large language models (LLMs), vision language models, image generation, speech processing, retrieval-augmented generation (RAG)-based search, PDF extraction, and computer vision.
Q: How do I get started with NIM microservices?
A: Get started with NIM microservices at build.nvidia.com.
Q: What is G-Assist?
A: G-Assist is an AI-powered PC assistant that helps users control a broad range of PC settings, including optimizing game and system settings, charting frame rates and other key performance statistics, and controlling select peripheral settings such as lighting.
Q: How do I get started with G-Assist?
A: Get started with G-Assist through the NVIDIA App’s "Home" tab, in the "Discovery" section. G-Assist currently supports GeForce RTX desktop GPUs, as well as a variety of voice and text commands in the English language.

