Nvidia put on a real show in 2018 when it first announced real-time ray tracing on its RTX 20 Series graphics cards, but it didn’t take long for most gamers to realize the technology was ahead of its time in terms of usability. That’s why it’s exciting to hear that the company is making further progress on improving the efficiency of ray tracing on GPUs for future generations of hardware. As discovered by @0X22H on Twitter and reported by Tom’s Hardware, an Nvidia research group, assisted by Sana Damani of the Georgia Institute of Technology, recently published its findings on the topic. True, they came up with a rather abstract name for the technique: introducing “Subwarp Interleaving.”
No wonder: the publication is highly technical and delves into levels of GPU architecture that we won’t even try to explain. Don’t just take our word for it, though; here’s an excerpt from the introduction alone: “Subwarp Interleaving exploits thread divergence to hide pipeline stalls in divergent sections of workloads with low warp occupancy. Subwarp Interleaving enables fine-grained interleaving of diverging paths within a warp, with the goal of increasing hardware utilization and reducing warp latency.” You can, of course, read the paper yourself for better context than the excerpt we’ve provided. Good luck with that, though.
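To give a rough intuition for what that excerpt means: on a GPU, threads execute in groups (warps), and when a branch splits a warp, the two paths normally run one after the other, with any pipeline stalls (say, waiting on memory) wasted. The idea of interleaving is to issue instructions from the other diverged path while one path is stalled. Below is a deliberately simplified toy model of that idea in Python, not Nvidia’s actual microarchitecture or the paper’s algorithm; the instruction costs and scheduling policy are made up for illustration, and the savings it shows are exaggerated compared with the paper’s reported 6.8% average.

```python
# Toy model of SIMT warp divergence and subwarp interleaving.
# Each "instruction" is (issue_cycles, stall_cycles): it takes issue_cycles
# to issue, then the issuing subwarp must wait stall_cycles (e.g. a memory
# load) before it can issue its next instruction.

def serial_cycles(path_a, path_b):
    """Baseline SIMT: diverged paths execute one after the other,
    and every stall is dead time."""
    return sum(issue + stall for issue, stall in path_a + path_b)

def interleaved_cycles(path_a, path_b):
    """Simplified interleaving: while one subwarp's path is stalled,
    issue instructions from the other subwarp's path."""
    streams = [list(path_a), list(path_b)]
    ready_at = [0, 0]  # cycle at which each subwarp can next issue
    cycle = 0
    while streams[0] or streams[1]:
        # Pick the subwarp with remaining work that becomes ready first.
        i = min((k for k in (0, 1) if streams[k]), key=lambda k: ready_at[k])
        cycle = max(cycle, ready_at[i])
        issue, stall = streams[i].pop(0)
        cycle += issue               # the issue slot is still shared
        ready_at[i] = cycle + stall  # this subwarp waits out its stall
    # The warp reconverges once the last stall has drained.
    return max(cycle, max(ready_at))

# Two diverging paths: each instruction issues in 1 cycle,
# and the first two stall for 8 cycles (think: dependent loads).
path = [(1, 8), (1, 8), (1, 0)]
print(serial_cycles(path, path))       # 38 — stalls are never hidden
print(interleaved_cycles(path, path))  # 20 — stalls overlap the other path
```

The toy model needs a hardware change for the same reason the paper says the real technique does: the scheduler has to be able to switch between diverged paths mid-stall, which stock SIMT execution doesn’t allow.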
Here’s the takeaway. This new “technique,” as the paper refers to it, improves real-time ray tracing efficiency by an average of 6.8%, with best-case results of up to 20%. That’s not phenomenal considering the massive performance hit of enabling RTX, but it’s progress for sure, and it will matter as hardware performance improves overall. Nvidia seems determined to push ray tracing in games, and the results can be dramatic.
Wondering when we might see this in products?
The paper goes on to note that the technique requires architecture-level changes to the hardware to work. This means gamers won’t be able to reap these ray tracing performance improvements through an Nvidia driver update, though the optimization may roll out in unreleased products. Since Nvidia is only publishing this as a research effort, we shouldn’t expect anything in the immediate future.
The good news, however, is that as Nvidia continues to add more RT and Tensor cores and other improvements to its graphics cards, techniques like this will help push real-time ray tracing toward becoming a de facto technology in mainstream games.
Will we soon see more support for fully ray-traced lighting? Probably not from most developers, but ray-traced shadows are less demanding to run and offer a nice visual bonus. We may see more of those in the near future, especially since they can save development costs by letting developers skip the manual work of baking shadows into scenes. Better visuals with less effort and at a low performance cost? Ray tracing is sure to find wider adoption sooner or later.
If you’re curious about other visual and performance-related technologies coming to games in the near future, check out our CES 2022 coverage. Nvidia showed off DLDSR, an interesting twist that applies DLSS to downsampling, while AMD unveiled a universal image upscaler called Radeon Super Resolution that works with all RDNA graphics cards.
Source: Nvidia posts findings on how to increase ray tracing efficiency on GPUs in the future — https://www.pcinvasion.com/nvidia-ray-tracing-efficiency-research/