What are NVIDIA Launchables?
NVIDIA Launchables are one-click deployable GPU development environments with predefined configurations that help you get up and running with a workflow. They function as templates that bundle all the components a workflow needs:
- NVIDIA GPUs
- Python
- CUDA
- Docker containers
- Development frameworks, including NVIDIA NIM, NVIDIA NeMo, and NVIDIA Omniverse
- SDKs
- Dependencies
- Environment configurations
They can also include GitHub repos or Jupyter notebooks that are automatically set up and mounted in a GPU instance.
Launchable examples
Here are a few scenarios where Launchables come in handy:
- Setting up Megatron-LM for GPU-optimized training
- Running NVIDIA AI Blueprint for multimodal PDF data extraction
- Deploying Llama3-8B for inference with NVIDIA TensorRT-LLM
Setting up Megatron-LM for GPU-optimized training
Before tinkering with parallelism techniques like tensor or pipeline parallelism, you need PyTorch, CUDA, and a capable multi-GPU setup in place before you have a reasonable training pipeline.
With the Megatron-LM Launchable, you get access to an 8xH100 GPU node environment from a cloud partner that comes with PyTorch, CUDA, and Megatron-LM already set up. You can immediately adjust parameters such as --tensor-model-parallel-size and --pipeline-model-parallel-size to determine which parallelism technique is most suitable for your specific model size and pretraining requirements.
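To build intuition for those two flags, here is a minimal sketch (illustrative Python, not Megatron-LM code) of how the tensor-parallel and pipeline-parallel sizes carve up a fixed GPU budget: their product must divide the world size, and the remainder of the budget becomes the data-parallel size.

```python
# Illustrative sketch: how --tensor-model-parallel-size (tp) and
# --pipeline-model-parallel-size (pp) decompose a node's GPU budget.
# Megatron-LM requires world_size to be divisible by tp * pp; the
# leftover factor is the data-parallel replica count.

def data_parallel_size(world_size: int, tp: int, pp: int) -> int:
    """Return the implied data-parallel size for a given decomposition."""
    model_parallel = tp * pp
    if world_size % model_parallel != 0:
        raise ValueError(
            f"world size {world_size} is not divisible by tp*pp = {model_parallel}"
        )
    return world_size // model_parallel

# On an 8xH100 node, tp=2 and pp=2 leave 2 data-parallel replicas.
print(data_parallel_size(8, 2, 2))  # -> 2
```

Sweeping tp and pp this way (while keeping the product a divisor of 8) is exactly the kind of experiment the Launchable lets you run immediately, since the environment is already provisioned.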
Launchable benefits
After collecting feedback from early users, here are some core technical capabilities that have developers excited about using Launchables for reproducible workflows:
- True one-click deployment
- Environment reproducibility
- Flexible configuration options
- Built for collaboration
True one-click deployment
Development environment setup typically involves hours of debugging dependencies, configuring GPU drivers, and testing framework compatibility.
Launchables reduce this to a one-click deployment process by providing preconfigured environments with frameworks, CUDA versions, and hardware configurations. This means that you can start writing code immediately instead of wrestling with infrastructure.
Environment reproducibility
Environment inconsistency remains a major source of debugging overhead in AI development teams.
Launchables solve this by packaging your entire development stack, from CUDA drivers to framework versions, into a versioned, reproducible configuration. When you share a Launchable URL, you’re guaranteeing that any end consumer gets an identical development environment, eliminating “works on my machine” scenarios.
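One way to see why pinning the full stack eliminates "works on my machine" drift: if an environment description is fully pinned, it can be reduced to a stable fingerprint, and two machines match exactly when their fingerprints match. This is an illustrative sketch, not the Launchable implementation:

```python
# Illustrative sketch (not the Launchable implementation): a fully pinned
# environment description hashes to a stable fingerprint. Any drift in a
# single version produces a different fingerprint.
import hashlib
import json

def environment_fingerprint(config: dict) -> str:
    """Hash a pinned environment description into a short, stable fingerprint."""
    canonical = json.dumps(config, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]

pinned = {"cuda": "12.4", "python": "3.10", "framework": "pytorch==2.3.0"}
print(environment_fingerprint(pinned))
```

Because the description is canonicalized before hashing, two consumers of the same Launchable URL resolve to the same fingerprint regardless of how their configuration happens to be ordered.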
Flexible configuration options
Different AI workloads require different hardware and software configurations.
Launchables support this through granular environment customization:
- Select specific NVIDIA GPUs (T4 to H100) based on your VRAM requirements.
- Define container configurations with precise Python and CUDA version requirements.
- Include specific GitHub repositories or Jupyter notebooks to be automatically mounted in your GPU instance.
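The GPU-selection step above boils down to matching a workload's VRAM requirement against the available tiers. Here is a small sketch; the GPU list and memory figures are assumed examples, not an official catalog:

```python
# Illustrative sketch: pick the smallest GPU tier that satisfies a workload's
# VRAM requirement. The GPUs and memory sizes below are example assumptions,
# not an official NVIDIA catalog.
GPU_VRAM_GB = {"T4": 16, "L4": 24, "A100": 80, "H100": 80}

def smallest_sufficient_gpu(required_gb: int) -> str:
    """Return the name of the smallest listed GPU with at least required_gb VRAM."""
    candidates = [(vram, name) for name, vram in GPU_VRAM_GB.items()
                  if vram >= required_gb]
    if not candidates:
        raise ValueError(f"no listed GPU has >= {required_gb} GB of VRAM")
    return min(candidates)[1]

print(smallest_sufficient_gpu(20))  # -> "L4"
```

In practice you would also factor in compute throughput and cost, but right-sizing VRAM first avoids both out-of-memory failures and paying for capacity you never touch.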
Built for collaboration
Launchables streamline collaboration by enabling anyone to share complete development environments through a single URL. Whether you're an open source maintainer, an instructor, or a teammate sharing an internal project, you can also track deployment metrics to understand how others are using your environment.
This is also particularly valuable for ensuring reproducibility in research settings and maintaining consistent training environments across distributed teams.
Creating a Launchable
Creating a Launchable is straightforward:
- Choose your compute: Select from a range of NVIDIA GPUs and customize your compute resources.
- Configure your environment: Pick a VM or container configuration with specific Python and CUDA versions.
- Add your code: Connect your Jupyter notebooks or GitHub repositories to be added to your end GPU environment.
- Share and deploy: Generate a shareable link that others can use to instantly deploy the same environment.
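The four steps above can be sketched as a declarative spec. The field names and the share-URL helper below are illustrative assumptions, not the actual Launchable API; the real flow happens through the NVIDIA web UI:

```python
# Hypothetical sketch of the four creation steps as a declarative spec.
# Field names and build_share_url are illustrative assumptions, not the
# actual Launchable API.
from dataclasses import dataclass, field

@dataclass
class LaunchableSpec:
    gpu: str                        # step 1: choose your compute
    gpu_count: int = 1
    python_version: str = "3.10"    # step 2: configure your environment
    cuda_version: str = "12.4"
    repos: list = field(default_factory=list)      # step 3: add your code
    notebooks: list = field(default_factory=list)

def build_share_url(spec: LaunchableSpec, launchable_id: str) -> str:
    # Step 4: the spec is assumed to be registered server-side; the URL
    # only needs to carry an identifier for others to deploy it.
    return f"https://example.invalid/launchables/{launchable_id}"

spec = LaunchableSpec(gpu="H100", gpu_count=8,
                      repos=["https://github.com/NVIDIA/Megatron-LM"])
print(build_share_url(spec, "demo-123"))
```

The point of the sketch is that everything a consumer needs lives in the spec, so a single identifier is enough to reproduce the whole environment.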
Get started with one-click deployments today
Launchables drastically reduce the traditional friction of sharing and reproducing GPU development environments by letting you package, version, and instantly deploy exact configurations. Teams spend less time on infrastructure setup and more time building AI applications.
We are actively expanding readily available Launchables on build.nvidia.com as new NIM microservices and other NVIDIA software, SDKs, and libraries are released. Explore them today!
Conclusion
NVIDIA Launchables streamline AI development by providing a one-click deployment solution built around collaboration, reproducibility, and flexible configuration. With Launchables, developers can focus on building AI applications without worrying about the underlying infrastructure.
Frequently Asked Questions
Q: What is an NVIDIA Launchable?
A: An NVIDIA Launchable is a one-click deployable GPU development environment with predefined configurations that can help you get up and running with a workflow.
Q: What are the benefits of using Launchables?
A: Launchables provide true one-click deployment, environment reproducibility, flexible configuration options, and built-for-collaboration features.
Q: How do I create a Launchable?
A: Creating a Launchable involves four steps: choosing your compute, configuring your environment, adding your code, and generating a shareable link for deployment.
Q: Where can I find more information about Launchables?
A: You can find more information and explore readily available Launchables on build.nvidia.com.