A $2,000 video card for consumers shouldn’t exist. The GeForce RTX 5090, like the $1,599 RTX 4090 before it, is more a flex by NVIDIA than anything truly meaningful for most gamers. NVIDIA CEO Jensen Huang said as much when he revealed the GPU at CES 2025, noting that it’s meant for hardcore players with $10,000 rigs. Personally, I don’t know anyone who actually fits that bill, not unless you count parasocial relationships with streamers. (My own setup doesn’t even cross $5,000.)
But we all know why NVIDIA is hyping up the unattainable RTX 5090: It lets the company show off benchmarks that AMD can’t touch, once again cementing itself as the supreme leader of the high-end video card market. It’s not just about gaming, either. The RTX 5090 is also being positioned as an AI workhorse since it’s powered by NVIDIA’s new Blackwell architecture, which leans on the company’s Tensor Cores for artificial intelligence work more than ever. Realistically, though, the $549 RTX 5070 is the GPU more gamers will actually be able to buy.
I’ll admit, I went into this review with a mixture of excitement and disgust. It’s astonishing that NVIDIA was able to stuff 91 billion transistors and 21,760 CUDA cores into the RTX 5090, and I couldn’t wait to see how it performed. Still, I find it genuinely sad that NVIDIA keeps pushing the bar higher for GPU prices, in the process making the gaming world even more unequal. A $2,000 graphics card, in this economy?!
But after hours of benchmarking and playtime, I realized the RTX 5090 wasn’t much of a threat to gaming accessibility. Wealthy PC gamers have always overspent for graphics performance — I’ve seen people (unwisely) pay thousands more than consumer GPU prices just to get extra VRAM from NVIDIA’s Quadro cards. But the rise of PC handhelds like the Steam Deck, which are a direct offshoot of the Nintendo Switch’s success, is a clear sign that convenience matters more than raw power to mainstream players today. I don’t think many Switch 2 buyers are saving up for an RTX 5090.
For the few who can afford it, though, NVIDIA’s new flagship sure is a treat.
Hardware: Leaning more on AI
In many ways, the RTX 5000 GPU family is the convergence of NVIDIA’s decades-long GPU expertise and its newfound role powering the AI hype train. Sure, they’ll run games faster than before, but what makes them unique is their ability to tap into “neural rendering” AI for even better performance. It’s at the heart of DLSS 4, the company’s latest AI upscaling technology, which can now generate up to three frames for every one that’s actually rendered by the RTX 5090.
That’s how NVIDIA can claim this GPU is twice as fast as the RTX 4090, or that the RTX 5070 matches the speed of the 4090. Does it really matter if these frames are “fake” if you can’t tell, and they lead to smoother gameplay?
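To make that claim concrete, here’s a tiny, illustrative Python sketch of how multi-frame generation multiplies the displayed frame rate. The function and example numbers are mine, not NVIDIA’s, and the math ignores real-world overhead and frame pacing:

```python
# Illustrative arithmetic only: effective display rate under DLSS-style
# multi-frame generation. Each rendered frame is followed by N
# AI-generated frames, so the display rate is rendered_fps * (1 + N).

def effective_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    return rendered_fps * (1 + generated_per_rendered)

# A GPU natively rendering 60fps with a 4x mode (three generated frames
# per rendered frame) would present 240fps to the display.
print(effective_fps(60, 3))  # 240
```

This is also why the marketing comparisons need care: the 5090’s 4x mode is being measured against the 4090’s single-frame generation, not against native rendering on both cards.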
Before I dive further into the AI side of things, though, let’s take a closer look at the RTX 5090. Once again, it features 21,760 CUDA cores, up from 16,384 cores on the 4090, as well as 32GB of GDDR7 VRAM instead of the 4090’s 24GB of GDDR6X. (I thought I was future-proofing my desktop when I equipped it with 32GB of RAM years ago, but now that video cards have caught up I’m almost convinced to go up to 64GB.) The 5090 also sports 5th-gen Tensor cores delivering 3,352 AI TOPS of performance, while the 4090 had 1,321 AI TOPS with last-gen Tensor hardware.
| | RTX 5090 | RTX 5080 | RTX 5070 Ti | RTX 5070 | RTX 4090 |
|---|---|---|---|---|---|
| Architecture | Blackwell | Blackwell | Blackwell | Blackwell | Lovelace |
| CUDA cores | 21,760 | 10,752 | 8,960 | 6,144 | 16,384 |
| AI TOPS | 3,352 | 1,801 | 1,406 | 988 | 1,321 |
| Tensor cores | 5th Gen | 5th Gen | 5th Gen | 5th Gen | 4th Gen |
| RT cores | 4th Gen | 4th Gen | 4th Gen | 4th Gen | 3rd Gen |
| VRAM | 32GB GDDR7 | 16GB GDDR7 | 16GB GDDR7 | 12GB GDDR7 | 24GB GDDR6X |
| Memory bandwidth | 1,792 GB/sec | 960 GB/sec | 896 GB/sec | 672 GB/sec | 1,008 GB/sec |
| TGP | 575W | 360W | 300W | 250W | 450W |
I tested the RTX 5090 Founders Edition GPU (provided by NVIDIA), which is dramatically slimmer than its 4090 counterpart. The 5090 has a sleek two-slot design that can actually fit in small form factor systems. The three-slot 4090, meanwhile, was so massive it felt like it was going to tear the PCIe slot out of my motherboard. NVIDIA also added another cooling fan this time around, instead of just relying on a vapor chamber and a single fan. The 5090’s main PCB sits in the center of the card, and it’s connected to other PCB modules at the PCIe slot and rear ports (three DisplayPort 2.1b connections and an HDMI 2.1b connection).
DLSS 4: The real star of the show
While multi-frame generation is the defining feature for the RTX 50 cards, there are several other DLSS 4 features that should help games look dramatically better. Best of all, those capabilities are also trickling down to earlier RTX GPUs. RTX 40 cards will be more efficient with their single-frame generation, while RTX 30 and 20 cards will also see an upgrade from AI transformer models used for ray reconstruction (leading to more stable ray tracing), Super Resolution (higher quality textures) and Deep Learning Anti-Aliasing (DLAA).
These transformer models should also fix some rendering artifacts present in earlier versions of DLSS. At NVIDIA’s Editor’s Day earlier this month, the company showed off how the updated version of Ray Reconstruction made a chainlink fence in Alan Wake 2 appear completely sharp and clear. An earlier version of the feature made the same fence look muddy, almost as if it was out of focus. In Horizon Forbidden West, the new version of Super Resolution revealed more detail from the texture of Aloy’s bag.
DLSS 4 will be supported in 75 games and apps at launch, including Indiana Jones and the Great Circle and Cyberpunk 2077, according to NVIDIA. For titles that haven’t yet been updated with new DLSS menu options, you’ll also be able to force support for the latest features in the NVIDIA app.
In use: An absolute powerhouse, with fake frames and without
I could almost hear my motherboard breathe a sigh of relief when I unplugged the RTX 4090 and swapped in the slimmer 5090. Installation was a cinch, though I still needed to plug in four PSU connectors to satisfy its demand for 575 watts of power and a 1,000W PSU. If you’re lucky enough to have a new PSU with a 600W PCIe Gen 5 cable, that will also work (and avoid tons of cable clutter).
I tested the RTX 5090 on my home rig powered by an AMD Ryzen 9 7900X and 32GB of RAM, alongside a 1,000W Corsair PSU. I also used Alienware’s 32-inch 4K 240Hz QD-OLED monitor to get the most out of the 5090, and honestly, you wouldn’t want to run this GPU on anything less.
Once I started benchmarking, it didn’t take long for the RTX 5090 to impress me. In the 3DMark Steel Nomad test, which is a demanding DX12 demo, it scored 14,239 points, well above the 9,250 points I saw on the RTX 4090. Similarly, the 5090 hit 15,416 points in the 3DMark Speedway benchmark, compared to the 4090’s 10,600 points. These are notable generation-over-generation gains without the use of frame generation or any DLSS sorcery — it’s just the raw power you see with more CUDA and RT cores.
| | 3DMark TimeSpy Extreme | Port Royal (Ray Tracing) | Cyberpunk (4K RT Overdrive DLSS) | Blender |
|---|---|---|---|---|
| NVIDIA RTX 5090 | 19,525 | 36,003/166fps | 246fps (4X frame gen) | 14,903 |
| NVIDIA RTX 4090 | 16,464 | 25,405/117fps | 135fps | 12,335 |
| NVIDIA RTX 4080 Super | 13,168 | 18,435/85fps | 80fps | 8,867 |
| NVIDIA RTX 4070 Ti Super | 11,366 | 15,586/72fps | 75fps | 7,342 |
Once I started gaming and let DLSS 4 do its magic, my jaw just about hit the floor. But I suppose that’s just a natural response to seeing a PC hit 250fps on average in Cyberpunk 2077 while playing in 4K with maxed-out ray tracing overdrive settings and 4x frame generation. In comparison, the 4090 hit 135fps with the same settings and single frame generation.
Now I know most of those frames aren’t technically real, but it’s also the first time I’ve seen any game max out the Alienware monitor’s 240Hz refresh rate at 4K. And most importantly, Cyberpunk simply looked amazing as I rode my motorcycle down rain-slicked city streets and soaked in the reflections and realistic lighting from robust ray tracing.
Like Cypher in The Matrix (far from the best role model, I know), after suffering through years of low 4K framerates, I couldn’t help but feel like “ignorance is bliss” when it comes to frame generation. I didn’t see any artifacts or stuttering. There wasn’t anything that took away from my experience of playing Cyberpunk. And the game genuinely looked better than I’d ever seen it before.
And if you’re the sort of person who could never live with “fake frames,” the RTX 5090 is also the only card I’ve seen that can get close to 60fps in Cyberpunk natively in 4K with maxed out graphics and no DLSS. I hit 54fps on average in my testing, whereas the 4090 chugged along at 42fps in native 4K. You could also compromise a bit and turn on 2x or 3x frame generation to get a solid fps boost, if the idea of 4x frame generation just makes you feel dirty.
And if you can’t tell, I quickly got over any fake frame trepidation. When I used the NVIDIA app to turn on 4x frame generation in Dragon Age: The Veilguard, I once again saw an average framerate of around 240fps in 4K with maxed out graphics. I’ve already spent over 25 hours in the game, but running through a few missions at that framerate still felt revelatory. Combat sequences were clearer and easier to follow, possibly thanks to better Ray Reconstruction and Super Resolution, and I could also make out even more detail in my character’s ornate costumes. On the 4090, I typically saw around 120fps with standard frame generation.
The 5090’s DLSS 4 performance makes me eager to see how the cheaper RTX 5070 and 5070 Ti cards perform. If a $550 card can actually get close to what I saw on the $1,599 4090, even if it’s relying on massive amounts of frame generation, that’s still a major accomplishment. It would also be great news for anyone who invested in a 4K 120Hz screen, which is tough to fill with other mid-range GPUs.
Outside of gaming, the RTX 5090 also managed to convert a minute-long 4K clip into 1080p using the NVENC H.264 encoder in just 23 seconds. That’s the fastest conversion I’ve seen yet. In comparison, the RTX 4090 took 28 seconds. Add up those seconds on a much larger project, and the 5090 could potentially save you hours of repeated rendering time. Naturally, it also posted the fastest Blender benchmark score we’ve ever seen, reaching 14,903 points. The RTX 4090, the previous leader in our benchmarks, hit 12,335 points.
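To show how those per-clip savings compound, here’s a rough back-of-the-envelope sketch in Python. The 23- and 28-second per-minute figures come from my test; the two-hour project length and re-encode count are hypothetical:

```python
# Extrapolate the measured per-minute encode times (23s on the 5090,
# 28s on the 4090) to a hypothetical two-hour project that gets
# re-encoded ten times over the course of editing.

def total_encode_seconds(seconds_per_minute: float,
                         footage_minutes: int,
                         passes: int) -> float:
    return seconds_per_minute * footage_minutes * passes

rtx_5090 = total_encode_seconds(23, 120, 10)  # 27,600 seconds
rtx_4090 = total_encode_seconds(28, 120, 10)  # 33,600 seconds
print((rtx_4090 - rtx_5090) / 3600)  # hours saved across all passes
```

Under those assumptions, the 5090 shaves roughly an hour and forty minutes of cumulative encoding time off the project.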
Throughout benchmarks and lengthy gaming sessions, the RTX 5090 typically reached around 70 degrees Celsius with audible, but not annoying, fan noise. The card also quickly cooled down to idle temperatures between 34C and 39C when it wasn’t under load. Aiming to push the limits of NVIDIA’s cooling setup, I also ran several stress test sessions in 3DMark, which involve looping a benchmark 20 times. It never crashed, and it maintained over 97 percent frame rate stability in most of the tests. There was just one Steel Nomad session where it scored 95.9 percent, below 3DMark’s 97 percent passing threshold. That could easily be due to early driver issues, but it’s still worth noting.
The only time I really got the RTX 5090 cooking was during an exploration of the Speedway benchmark, where I could move the camera around the ray traced scene and look at different objects and characters. The card hit 79C almost immediately and stayed there until I quit the demo. During that session, as well as typical gaming, the 5090 drew between 500W and 550W of power.
Looking ahead: AI NPCs and neural shaders
On top of DLSS, NVIDIA is also planning to tap into its RTX cards to power AI NPCs in games like PUBG and ZooPunk. Based on what I saw at NVIDIA’s Editor’s Day, though, I’m more worried than excited. The company’s ACE technology lets NPCs generate text and voices and even hold conversational voice chats, but every example I saw was robotic and disturbing. The AI Ally in PUBG makes a lot of sense on paper — who wouldn’t want a computer companion that could help you fight and find ammo? But in the demo I saw, it wasn’t much of a conversationalist, it couldn’t find weapons when asked and it took way too long to hop into a vehicle during a dangerous firefight.
As I wrote last week, “I’m personally tired of being sold on AI fantasies, when we know the key to great writing and performances is to give human talent the time and resources to refine their craft.” And on a certain level, I think I’ll always feel like the director Hayao Miyazaki, who described an early example of an AI CG creature as “an affront to life itself.”
NVIDIA’s Neural Shaders are an attempt to bring AI directly into texture shaders, something the company says wasn’t possible on previous GPUs. These can be implemented in a variety of ways: RTX Neural Materials, for example, can use AI to render complex materials like silk and porcelain, which often have nuanced and reflective textures. RTX Neural Texture Compression, on the other hand, can store complex textures while using up to seven times less VRAM than typical block compression. For ray tracing, there’s RTX Neural Radiance Cache, which is trained on live gameplay to help simulate path-traced indirect lighting.
Much like NVIDIA’s early ray tracing demos, it’s unclear how long it’ll take for us to see these features in actual games. But from the glimpses so far, NVIDIA is clearly thinking of new ways to deploy its AI Tensor Cores. RTX Neural Faces, for example, uses a variety of methods to make faces seem more realistic, and less like plastic 3D models. There’s also RTX Mega Geometry, which can help developers make up to “100x more ray traced triangles,” according to NVIDIA. Demos show it being used to construct a large building as well as an enormous dragon.
Wrap-up: The new unattainable GPU king
The $2,000 GeForce RTX 5090 is not meant for mere mortals, that much is clear. But it points to an interesting new direction for NVIDIA, one where AI features can seemingly lead to exponential performance gains. While I hate that it’s pushing GPU prices to new heights, there’s no denying that NVIDIA has crafted an absolute beast. But, like most people, I’m more excited to see how the $549 RTX 5070 fares. Sure, it’s also going to lean into frame generation, but at least you won’t have to spend $2,000 to make the most of your 4K monitor.
This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/nvidia-geforce-rtx-5090-review-pure-ai-excess-for-2000-140053371.html?src=rss