Nvidia rethinks the graphics card with the RTX 2080

Why Ray Tracing on Nvidia's New GPUs Is So Exciting
Nvidia unveiled its new Turing architecture and RTX 2080 graphics cards last month with a big promise of six times more performance than the previous generation, plus the new ability to support real-time ray tracing in modern games. We've seen performance claims, some details about the Turing technical architecture, and many PC gamers trying to calculate how these new cards will actually perform, but I sat down with Nvidia last month to get an in-depth look at the new Turing architecture that's powering this next generation of cards. It's fair to say that there's a lot going on.

The big new feature of Nvidia's Turing architecture is the ability to support real-time ray tracing in games. Ray tracing is a rendering technique used by movie studios to generate light reflections and cinematic effects. Cars, released back in 2006, was the first extensively ray-traced movie, and many studios now use ray tracing in modern films. Even rendering tools like Adobe After Effects, Maya, and 3ds Max all support some form of ray tracing, and it's a technique that's considered the holy grail for video games.

The name is a giveaway as to what ray tracing actually is: trying to determine the path of photons of light in virtual environments, so those virtual environments look as realistic as possible. Working out how light should fall in a scene requires a lot of computing power, even more so as objects and light sources start moving (as they tend to do in games and movies).

Nvidia announced three new GeForce RTX graphics cards at Gamescom 2018: the GeForce RTX 2080, GeForce RTX 2080 Ti, and GeForce RTX 2070. We've covered all the details of what you need to know about each GPU in those articles, but for those who want to get technical, this will be an in-depth look at the Nvidia Turing architecture. There's a ton of new technology in the Turing architecture, and Nvidia rightly calls it the biggest generational improvement in its GPUs in more than ten years, perhaps ever.

Ray tracing is nothing new. Coders have been experimenting with it since the very first days of computer graphics. What has changed over time is how realistic and detailed ray tracing can be, and how fast it can be computed, and with Nvidia's new cards the technology is taking another jump forward.

There was a ton of speculation and, yes, blatantly wrong guessing as to what the Turing architecture would contain; prior to the initial reveal at SIGGRAPH 2018, every supposed leak was fake. Making educated guesses about future architectures is a time-honored tradition on the internet, but such guesses are inevitably bound to be wrong. We've covered many details of Nvidia's Turing architecture previously, but today we can finally take the wraps off everything and get into the minute details.

Ray tracing attempts to work out the path of light by imagining the eye looking at a scene and working backwards to the light source. The color of each pixel must be figured out based on the objects in a scene, their relationship to each other, and the number, type, and position of the scene's light sources—not a straightforward set of calculations at all.
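To make that backwards process concrete, here is a minimal Python sketch: one ray per pixel from the eye, intersected against a single hard-coded sphere, with the pixel's brightness taken from the angle between the surface normal at the hit point and a fixed light direction. The sphere, light, and camera values are all illustrative, not from any real renderer.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is normalized, so the quadratic's a = 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(pixel_x, pixel_y):
    # Work backwards from the eye: fire one ray through the image plane at z=1.
    eye = (0.0, 0.0, 0.0)
    d = (pixel_x, pixel_y, 1.0)
    norm = math.sqrt(sum(v * v for v in d))
    d = tuple(v / norm for v in d)
    sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
    t = intersect_sphere(eye, d, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # ray escaped to the background: black
    # Brightness from the angle between the surface normal and the light.
    hit = tuple(e + t * v for e, v in zip(eye, d))
    n = tuple((h - c) / sphere_radius for h, c in zip(hit, sphere_center))
    light = (0.0, 1.0, -1.0)
    ln = math.sqrt(sum(v * v for v in light))
    light = tuple(v / ln for v in light)
    return max(0.0, sum(a * b for a, b in zip(n, light)))
```

Even this toy version hints at the cost: every pixel needs its own intersection test against every object, before any bounces are considered.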

We'll have reviews of the first two Turing cards next week, just ahead of the official September 20 launch date, where we'll provide hard performance numbers. If the specifications for Turing didn't seem like anything special, just the same old stuff with some fancy ray tracing and deep learning marketing slapped on, rest assured there's far more going on than the paper specs suggest. If you want the lowdown on all of Turing's deepest and darkest secrets, you've come to the right place.

The way light falls on black cloth is different from how it falls on a chrome sphere, of course, and ray tracing algorithms need to take all of these properties into account for every object in view and every light source hitting them. Light bounces and reflects too, making the process of figuring out the colors of a few million pixels even harder.
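Those two materials boil down to two small formulas, sketched here in Python: a perfect mirror bounce for the chrome sphere, and a matte (Lambertian) term for the cloth, whose near-zero albedo absorbs almost all incoming light. The vectors and albedo values are only illustrative.

```python
def reflect(direction, normal):
    # Chrome sphere: a perfect mirror bounce, r = d - 2 (d·n) n.
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))

def lambert(albedo, normal, light_dir):
    # Matte cloth: brightness scales with the surface's albedo and the cosine
    # of the angle to the light. Black cloth (albedo near 0) absorbs almost
    # everything; surfaces facing away from the light get nothing.
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return albedo * cos_theta
```

A real renderer evaluates terms like these for every ray at every bounce, which is where the millions of extra calculations come from.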


As a result, it takes a huge amount of processing power. We've seen high-quality ray-traced scenes for years now, but they've been largely restricted to static images or scenes that can be rendered a long time in advance—movie CGI was able to deploy ray tracing long before video games because huge banks of computers can spend days calculating the physics of a scene.

If you're interested in overclocking a gaming PC, the testing phase and tools can be a little daunting. It's a time-consuming process that puts many people off experimenting with GPU overclocking, but Nvidia's Scanner aims to take care of the testing in around 20 minutes. Nvidia has built its RTX 2080 GPUs with overclocking in mind, but the company is also planning to make Scanner available on older generations of GPUs too. Nvidia is working with MSI, Gigabyte, Asus, and other card makers to support these additional cards, with an API to plug into existing GPU management tools.

Pixar even published a paper about how it used ray tracing in the movie Cars. "Adding ray tracing to the renderer enables many additional effects such as accurate reflections, detailed shadows, and ambient occlusion," explains Pixar. (Ambient occlusion is a technique used to better work out where light is blocked in a scene, and how that's represented on screen.)
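A minimal Monte Carlo sketch of ambient occlusion, the technique Pixar mentions: sample random directions over the hemisphere around the surface normal and count how many escape without being blocked. The `is_occluded` callback is a stand-in for a real visibility query against scene geometry, and the sampling here is deliberately crude.

```python
import random

def ambient_occlusion(point, normal, is_occluded, samples=256, rng=None):
    # Estimate how "open" a point is: cast short rays over the hemisphere
    # around the normal and count how many escape without hitting geometry.
    rng = rng or random.Random(0)  # seeded for reproducibility
    unblocked = 0
    for _ in range(samples):
        # Pick a random direction, then flip it into the hemisphere
        # on the normal's side of the surface.
        d = [rng.uniform(-1, 1) for _ in range(3)]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-a for a in d]
        if not is_occluded(point, d):
            unblocked += 1
    return unblocked / samples  # 1.0 = fully open sky, 0.0 = fully enclosed
```

The returned fraction darkens creases and corners, which is exactly the subtle contact shading the Pixar quote is referring to.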

Nvidia is making it easier for PC gamers to overclock their graphics cards. A new tool, dubbed Nvidia Scanner, will be available for the company's latest line of GeForce RTX 2080 graphics cards and provide one-click overclocking. Nvidia runs its own workload to test the limits of a GPU, pushing its clock speeds and checking the voltage curve to detect any failures.

Review: Nvidia Turing Architecture Examined and Explained

A lot of the impressive effects that ray tracing brings—reflections, shadows, refractions—already exist in computer games, but the details of these effects are mostly fudged or estimated by graphics artists. Moving to true, real-time ray tracing has been compared to moving from graphics painted by artists to graphics calculated by physics.

Instead, the company wants to continue using rasterization for the things it's good at and add certain ray-traced effects where those techniques would produce better visual fidelity—an approach it refers to as hybrid rendering. Nvidia says rasterization is a much faster way of determining object visibility than ray tracing, for example, so ray tracing only needs to enter the picture for techniques where fidelity or realism is important yet difficult to achieve via rasterization, like reflections, refractions, shadows, and ambient occlusion. Nvidia notes that the traditional rasterization pipeline and the new ray-tracing pipeline can operate "simultaneously and cooperatively" in its Turing architecture.
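The hybrid idea can be sketched in a few lines of Python. This is a conceptual model, not Nvidia's actual pipeline: keep the cheap rasterized color everywhere, and spend expensive ray-tracing work only on pixels where it visibly matters, flagged here by a per-pixel mask.

```python
def hybrid_render(raster_pass, reflective_mask, trace_reflection):
    # raster_pass: 2D grid of colors from the fast rasterization pass.
    # reflective_mask: same shape, True where rasterization fudges the result.
    # trace_reflection: expensive per-pixel ray-traced replacement.
    frame = []
    for y, row in enumerate(raster_pass):
        out_row = []
        for x, color in enumerate(row):
            if reflective_mask[y][x]:
                out_row.append(trace_reflection(x, y))  # accurate but costly
            else:
                out_row.append(color)  # cheap and already good enough
        frame.append(out_row)
    return frame
```

The design point is that the ray-tracing budget scales with the number of flagged pixels, not with the whole frame, which is what makes the hybrid approach viable in real time.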

Nvidia Scanner API brings one-click overclocking capabilities

Until now, games have relied solely on a rendering technique called rasterization. Instead of bombarding a scene with millions of rays of virtual light bouncing around in every direction, graphics processors calculate how the millions of triangles that make up 3D models should look when converted to pixels in a flat 2D image—but only one triangle at a time. Simulated light rays coming straight from a virtual light source influence the color and brightness of those rendered pixels, but triangles can't affect each other, so shadows, indirect illumination, and reflections all have to be cleverly simulated.
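A toy version of that rasterization step, in Python: a triangle is converted to pixels by testing every pixel center against its three edge functions. Notice that the function knows nothing about any other triangle, which is exactly why shadows and reflections have to be faked separately.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of the edge (a -> b) the pixel center is on.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize_triangle(tri, width, height):
    # Rasterization handles one triangle at a time: a pixel is covered when
    # its center lies on the same side of all three edges.
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(ax, ay, bx, by, px, py)
            w1 = edge(bx, by, cx, cy, px, py)
            w2 = edge(cx, cy, ax, ay, px, py)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered
```

Real GPUs run this test massively in parallel with depth buffering on top, but the core idea is the same: triangles in, pixels out, no global knowledge of the scene.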

It's Turing Day at TR. We've been hearing about the innovations inside Nvidia's Turing GPUs for weeks, and now we can tell you a bit more about what's inside them. Turing implements a host of new technologies that promise to reshape the PC gaming experience for many years to come. While much of the discussion around Turing has concerned the company's hardware acceleration of real-time ray tracing, the tensor cores on board Turing GPUs could have even more wide-ranging effects on the way we game—to say nothing of the truckload of other changes under Turing's hood that promise better performance and greater flexibility for gaming than ever before.

The best games of today, on the best hardware, look fantastic—but still not quite like the real world. Real-time ray tracing promises another step in that direction.

The software groundwork for this technique was laid earlier this year when Microsoft revealed the DirectX Raytracing API, or DXR, for DirectX 12. DXR provides access to some of the basic building blocks for ray tracing alongside existing graphics-programming techniques, including a method of representing the 3D scene that can be traversed by the graphics card, a way to dispatch ray-tracing work to the graphics card, a series of shaders for handling the interactions of rays with the 3D scene, and a new pipeline state object for tracking what's going on across ray-tracing workloads.
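Conceptually, those building blocks fit together like the Python sketch below. The stage names mirror DXR's shader types (ray generation, closest-hit, miss), but this is only a mental model of the flow, not the actual DirectX 12 API, which is driven from C++ and HLSL.

```python
def dispatch_rays(width, height, ray_gen, intersect, closest_hit, miss):
    # One conceptual DXR dispatch: generate a ray per pixel, traverse the
    # scene's acceleration structure, and run the appropriate shader.
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            ray = ray_gen(x, y)          # "ray generation shader"
            hit = intersect(ray)         # traversal of the scene structure
            if hit is not None:
                row.append(closest_hit(ray, hit))  # "closest-hit shader"
            else:
                row.append(miss(ray))              # "miss shader"
        image.append(row)
    return image
```

In real DXR the traversal step is the part Turing's RT cores accelerate in hardware, while the hit and miss shaders remain ordinary programmable shader code.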

Lighting is particularly hard to get right with rasterization: light is treated as more or less moving in a straight line, brightening the sides of objects closest to the source and casting a shadow on the other side, but lighting in the real world doesn't quite work like that.

On top of the architectural details that we can discuss this morning, Nvidia sent over both GeForce RTX 2080 and RTX 2080 Ti cards for us to play with. As of this writing, those cards are on a FedEx truck headed for the TR labs. Nvidia has hopped on the "unboxing embargo" bandwagon, meaning we can show you the scope of delivery of those cards later today. Performance numbers will have to wait, though. First, Nvidia is pulling back the curtain on the Turing architecture and the first implementations thereof. Let's discuss some of the magic inside.

Thanks to the engineers at Nvidia, Microsoft, and elsewhere, we've now reached the stage where ultra-realistic lighting can be mapped out without being pre-rendered. You should see the difference most clearly in complicated lighting setups, where there's a row of frosted glass doors, a stained glass window, or a waterfall.

Nvidia's GeForce business has been on a good run of late. So good, in fact, that the GTX 10-series, based on the Pascal microarchitecture, has been making hay for over two years. And why not, given that the Pascal-based GPUs haven't had really serious competition from rival AMD in all that time. Yet Nvidia also knows it cannot rest on its laurels forever. The graphics business is an ever-evolving beast, and what GPU hardware architects put into their cutting-edge designs has ramifications for present and future game engines. To that end, Nvidia announced the brand-spanking-new RTX 20-series GPUs during Gamescom last month. Since then, various titbits have either been divulged officially or leaked out. We already know that RTX cards, hewn from the Turing microarchitecture, improve upon the incumbent Pascal in a number of ways, including more, and more efficient, shader cores; new-fangled GDDR6 memory; Tensor cores for hardware-accelerated compute used in deep learning; and a number of RT cores for real-time ray tracing speed-ups. Turing is in many ways an iterative design that harnesses what's good about the mainly workstation- and server-oriented Volta architecture and adds explicit hardware support for ray tracing. There's plenty going on under the hood, of course, so it's worth paying closer attention to the microarchitecture that will be powering the next slew of GeForce gaming cards.

With the switch to the RTX moniker from GTX, Nvidia is making a big noise about the ray tracing capabilities of its new series of cards. Those cards and the Turing GPUs contained within have been engineered especially to cope with the kinds of advanced, demanding calculations ray tracing requires.

An interesting by-product of going for more SMs with fewer FP units is what you can do with other bits of the design. You could change the ratio of texture units per shader unit, but Nvidia has chosen not to; texturing capability isn't that important anymore. What it has done, however, is keep 256KB of register file per SM. Some simple maths tells us there is more than 2.4x the total register-file space on TU102 compared to GP102, and the consensus is that increasing these registers, which constitute the fastest type of memory used for in-flight operations, helps enable quicker completion of threads. We guess Nvidia wanted to reduce instances of what is known as register pressure, where a lack of available registers forces the GPU to fall back on much slower local memory. This is purely a GPU design decision that increases the die size over the previous generation, but the additional silicon cost is considered worth it from a performance-maintenance point of view.
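The 2.4x figure is easy to sanity-check. The SM counts below are assumptions taken from public spec sheets rather than from this article: 72 SMs on a full TU102 versus 30 on a full GP102, each with 256 KB of registers.

```python
# Sanity check of the register-file claim, using publicly listed full-chip
# SM counts (an assumption, not a figure quoted in the article).
TU102_SMS, GP102_SMS, KB_PER_SM = 72, 30, 256

tu102_total_kb = TU102_SMS * KB_PER_SM  # 18,432 KB
gp102_total_kb = GP102_SMS * KB_PER_SM  #  7,680 KB
ratio = tu102_total_kb / gp102_total_kb
print(ratio)
```

For the full chips this comes out at 2.4x exactly; comparing against cut-down GP102 parts such as the one in the GTX 1080 Ti (28 enabled SMs) pushes the ratio a little higher.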

Nvidia Turing GPU deep dive: What's inside the radical GeForce RTX 2080 Ti

If you think about how light travels through a scene—literally at the speed of light—and how shadows, reflections, and refractions are formed (such as when light passes through water), it's perfectly understandable why consumer graphics cards have taken all this time to get to a stage where real-time ray tracing is possible.

Check out the Shadow of the Tomb Raider demo from Nvidia below, for example. We've seen lighting tricks like this before, but not in the same detail or with the same level of realism: note how a strong backlight is enough to turn objects and people into complete silhouettes, something that needs the power of ray tracing to work properly.

How has this come to be? How can the chip be so big without, on paper, appearing to be that much faster when judged on CUDA cores alone? Nvidia has made a number of design choices that it reckons will pay off somewhat now, but more so in the future. There are dedicated INT32 cores, those new Tensor cores, RT cores, more on-chip cache, and more register-file space, to name the obvious die-space culprits. Get the feeling this is more of a workstation-type card reimagined than a pure gaming GPU? That's the choice Nvidia made by betting big on the types of technologies future games will employ.

It's something engineers and researchers have been working towards for half a century, both in computing and in movie-making. The best CGI in films now looks like it's part of the real world, reflections and refractions and all, and now Nvidia's new cards promise to bring the same realism to games.

Nvidia claims GeForce RTX 2080 hits the sweet spot for 4K gaming at 60 fps

"Real-time ray tracing replaces a majority of the techniques used today in standard rendering with realistic optical calculations that replicate the way light behaves in the real world, delivering more lifelike images," says Nvidia.

Getting the most out of your hardware means out-of-the-box settings are no good; overclocking is the only way to squeeze out every last ounce of performance. Nvidia is making it even easier for gamers to get a performance boost, with one-click overclocking via an API called Nvidia Scanner.

Another example, shown below, comes from an Unreal Engine demo of the technology published back in March. Look at the detail and the quality of the reflections in the scene—suddenly it's barely distinguishable from live action (though this demo required a hugely powerful PC setup and some background rasterization to work).

Even factory-overclocked models of graphics cards still have more headroom available: voltages as well as core and memory clocks can be cranked up further. Doing so brings additional risk, but it's still safe enough when proceeding with caution.

The new cards actually split ray tracing up into two components—ray casting (tracking the paths of rays) and shading (determining the final appearance of objects). The RT cores on the new RTX cards are designed to speed up ray casting specifically, so performance increases and visual improvements will vary depending on the geometry of a scene and how much other work there is to do.
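In code terms, the split described above looks something like the Python sketch below, where `cast_ray` stands in for the geometry-heavy work the RT cores accelerate and `shade` for the material work that stays on the regular shader cores. Both callbacks are placeholders for illustration, not any real driver interface.

```python
def render_pixel(ray, cast_ray, shade, background=(0, 0, 0)):
    # Stage 1: ray casting -- find what, if anything, the ray hits.
    # On Turing this traversal/intersection work is what the RT cores speed up.
    hit = cast_ray(ray)
    if hit is None:
        return background
    # Stage 2: shading -- decide what the hit point looks like.
    # This runs as ordinary shader work, which is why the overall speed-up
    # varies with how much of a frame is casting versus shading.
    return shade(hit)
```

A scene dominated by complex geometry leans on stage 1 and benefits most from the RT cores; a scene dominated by expensive materials leans on stage 2 and benefits less.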

Despite launching with the GeForce RTX series, Nvidia Scanner will not be limited exclusively to the newest GPUs. Older graphics cards will be eligible for use with the utility, although no specific generation was given as the cutoff for support.

As the first consumer graphics cards to include dedicated hardware for ray tracing, the RTX 2000 series represents another milestone in real-time rendering. Of course, shadows, reflections, and other lighting tricks have been in games for years, but now they don't have to be fudged or approximated—they can be properly calculated.

It is still a nascent technology, though, and games will have to be written specifically to take advantage of the RTX hardware. At the moment, the list of games supporting the new standards is relatively thin, but it should grow over time. As we've said, the ray tracing revolution won't happen overnight, but it is beginning.


