Content Streaming and Engagement Enter a New Dimension with QUEEN
Content streaming and engagement are entering a new dimension with QUEEN, an AI model from NVIDIA Research and the University of Maryland that makes it possible to stream free-viewpoint video, letting viewers experience a 3D scene from any angle.
Reduce, Reuse and Recycle for Efficient Streaming
Free-viewpoint videos are typically created using video footage captured from different camera angles, like a multicamera film studio setup, a set of security cameras in a warehouse or a system of videoconferencing cameras in an office. Prior AI methods for generating free-viewpoint videos either took too much memory for livestreaming or sacrificed visual quality for smaller file sizes. QUEEN balances both to deliver high-quality visuals — even in dynamic scenes featuring sparks, flames or furry animals — that can be easily transmitted from a host server to a client’s device.
How QUEEN Works
QUEEN tracks and reuses renders of static regions in a scene, focusing its compute on reconstructing only the content that changes over time. Using an NVIDIA Tensor Core GPU, the researchers evaluated QUEEN's performance on several benchmarks and found that it outperformed state-of-the-art methods for online free-viewpoint video across a range of metrics. Given 2D videos of the same scene captured from different angles, QUEEN typically needs under five seconds of training time and renders free-viewpoint video at around 350 frames per second.
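The core reuse-static/update-dynamic idea above can be illustrated with a minimal sketch. Note this is a simplified 2D pixel-space analogy, not QUEEN's actual method, which operates on learned 3D scene representations; the function name, threshold value, and masking scheme here are all illustrative assumptions.

```python
import numpy as np

def stream_update(prev_frame, curr_frame, threshold=0.05):
    """Illustrative sketch: reuse static regions, re-encode only dynamic ones.

    Compares the current frame to the previous reconstruction, builds a
    mask of pixels whose change exceeds `threshold`, and copies over only
    those pixels. Everything outside the mask is reused, so only the
    masked content would need to be transmitted to a client.
    """
    # Per-pixel change, taking the largest difference across color channels
    residual = np.abs(curr_frame - prev_frame).max(axis=-1)
    dynamic_mask = residual > threshold

    # Start from the previous reconstruction and update only dynamic pixels
    reconstruction = prev_frame.copy()
    reconstruction[dynamic_mask] = curr_frame[dynamic_mask]
    return reconstruction, dynamic_mask
```

In a mostly static scene, the fraction of `True` entries in `dynamic_mask` is small, which is what makes the per-frame update cheap enough to stream.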
Applications of QUEEN
This combination of speed and visual quality can support media broadcasts of concerts and sports games by offering immersive virtual reality experiences or instant replays of key moments in a competition. In warehouse settings, robot operators could use QUEEN to better gauge depth when maneuvering physical objects. And in a videoconferencing application — such as the 3D videoconferencing demo shown at SIGGRAPH and NVIDIA GTC — it could help presenters demonstrate tasks like cooking or origami while letting viewers pick the visual angle that best supports their learning.
Conclusion
QUEEN is one of over 50 NVIDIA-authored NeurIPS posters and papers that feature groundbreaking AI research with potential applications in fields including simulation, robotics and healthcare. The code for QUEEN will soon be released as open source and shared on the project page.
Frequently Asked Questions
Q: What is QUEEN?
A: QUEEN is an AI model from NVIDIA Research and the University of Maryland that makes it possible to stream free-viewpoint video, letting viewers experience a 3D scene from any angle.
Q: How does QUEEN work?
A: QUEEN tracks and reuses renders of static regions in a scene, focusing its compute on reconstructing only the content that changes over time.
Q: What are the applications of QUEEN?
A: QUEEN can be used to build immersive streaming applications, support media broadcasts of concerts and sports games, and aid in warehouse settings or videoconferencing applications.
Q: When will the code for QUEEN be released?
A: The code for QUEEN will soon be released as open source and shared on the project page.