We saw NVIDIA's high-performance-per-watt Maxwell architecture land at the low end with the 750 and 750 Ti earlier this year. Now they have improved it with the GM204. While the "4" indicates a normal performance chip as opposed to a "big" compute-oriented chip, its positioning in the GeForce lineup is at the high end (for now), just like with the GK104 over two years ago. Unlike the GK104, which had four times the SM count of the GK107, the GM204 is not quite four times the GM107 in SMM count: the GM107 has 5 SMMs per GPC, while the GM204 has 4. Due to various improvements in 2nd-generation Maxwell, the GM204 is even more efficient than the GM107, which was quite a boost over Kepler to begin with.

And if anyone is wondering: yes, the GM204 is still 28 nm. Currently 20 nm is mostly for phone SoCs. Last year I remember discussing the situation of a 3rd year of 28 nm with ImSpartacus, and here we are, soon to start the 4th year of 28 nm (we can safely assume the GM204 won't be retired for most of a year at least).

Notice that the memory bandwidth of the 980 is less than that of the 780 Ti (and the 780) and equal to that of the 770. This isn't bad news in and of itself, since the GFLOPS/bandwidth ratio for GPUs generally increases over time.

Code:
NVIDIA            8800 GTX  GTX 280  GTX 480  GTX 680  GTX 980
Architecture      Tesla     Tesla    Fermi    Kepler   Maxwell
SP GFLOPS         518       933      1345     3090     4612
Bandwidth (GB/s)  86.4      141.7    177.4    192.0    224.0
Ratio             6.0       6.6      7.6      16.1     20.6

AMD               2900 XT  4870   5870       6970   7970   290X
Architecture      R600     R700   Evergreen  N.I.   S.I.   C.I.
SP GFLOPS         476      1200   2720       2703   3789   5632
Bandwidth (GB/s)  128.0    115.2  153.6      160.0  264.0  320.0
Ratio             3.7      10.4   17.7       16.9   14.4   17.6

NVIDIA made another change with GM204, though: they doubled the number of ROPs, whose job is to form the final pixels that are sent to the display. ROPs use a considerable amount of bandwidth, so it doesn't necessarily make sense to increase their number without an associated memory bandwidth increase.
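For anyone who wants to sanity-check the ratio row, it's just peak SP GFLOPS divided by memory bandwidth. A quick sketch using the NVIDIA figures from the table:

```python
# GFLOPS-per-GB/s ratios for the NVIDIA cards in the table above.
# Figures are the peak SP GFLOPS and memory bandwidths quoted in the post.
cards = {
    "8800 GTX": (518, 86.4),
    "GTX 280": (933, 141.7),
    "GTX 480": (1345, 177.4),
    "GTX 680": (3090, 192.0),
    "GTX 980": (4612, 224.0),
}

for name, (gflops, bandwidth) in cards.items():
    # Higher ratio = more compute per unit of memory bandwidth.
    print(f"{name}: {gflops / bandwidth:.1f} GFLOPS per GB/s")
```

The upward trend in that ratio is why equal-or-lower raw bandwidth on a newer card isn't automatically a problem.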
But GDDR5 is maxed out, high-bandwidth memory isn't here yet, and a wider memory interface can cost more in $ and power than a narrower one. NVIDIA's solution was to improve color compression, which looks for multiple identical pixels or certain patterns among pixels to reduce color data sizes and thus bandwidth use. With 2nd-generation Maxwell, the number of recognized patterns has increased, resulting in more bandwidth savings. So while the GTX 980 has the same 224 GB/s of raw bandwidth as the GTX 770, it behaves on average as if it had ~297 GB/s.

The power consumption of the GTX 980 and 970 is impressive but unsurprising after the GTX 750 and 750 Ti. Under load, the 980 uses similar power to a 680, and significantly less than any GK110 or Hawaii card. Its performance per watt is beaten only by GM107-based cards. As for performance, the 970 fits right in with the previous generation's top cards, and the 980 has a small but notable lead. Those who didn't find GK110- and Hawaii-based cards a compelling upgrade over whatever they already had probably won't see GM204 as one either.

One place I hope to see more Maxwell GPUs is in systems such as Steamboxes. Supposedly GM204 is coming to mobile with a large performance increase over GK104- and Pitcairn-based mobile GPUs. While mobile GM204 is likely to be 75-100+ W, similar to mobile GK104, at least one can get a lot more performance out of it. There is reportedly also a GM206 coming soon, and if it ends up halfway between the GM107 and GM204 then it'll have around GK104 performance for around GK106/Pitcairn power consumption.

Sources: AnandTech, TPU.
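To give a feel for why pattern-based compression saves bandwidth, here's a toy sketch of delta color compression. This is NOT NVIDIA's actual algorithm (which is undocumented); the tile size, the 4-bit delta budget, and the `compress_tile` helper are all made up for illustration:

```python
# Toy delta color compression: a tile of pixel values is stored as one
# base value plus small per-pixel deltas IF every delta fits the budget;
# otherwise the tile is stored raw. Purely illustrative, not NVIDIA's scheme.

def compress_tile(tile, delta_bits=4):
    """Return the stored size in bits for a tile of 8-bit channel values."""
    base = tile[0]
    limit = 1 << (delta_bits - 1)  # signed delta range, e.g. -8..+7 for 4 bits
    if all(-limit <= p - base < limit for p in tile):
        # base value (8 bits) + one small delta per remaining pixel
        return 8 + (len(tile) - 1) * delta_bits
    return len(tile) * 8  # incompressible: store the tile raw

flat = [128] * 16                 # flat-color tile: all deltas are 0
noisy = list(range(0, 256, 16))   # high-variance tile: deltas too large

print(compress_tile(flat), compress_tile(noisy))  # 68 vs 128 bits
```

The more pixel patterns the hardware can recognize, the more tiles land in the cheap branch, which is effectively what the Maxwell improvement does.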
I may purchase a 970 in the near future considering the power consumption is perfect for either of my rigs.
I'm glad they can squeeze some more performance out of 28nm silicon. But really, 4 years on the same process is just silly. I hope we don't hit the same performance plateau as with CPUs. 4K gaming will be awesome; I'm just waiting for a good IPS 4K display.
Will 4K actually be usable in any form? That's a crapton of pixels to compute, and with supposed "advancements" in graphics, won't it just lead to 10 fps even with top-tier cards? I'm not feeling it either, seeing how the "LOL 12/10 NEXT GEN, GOTYAY" games are pretty crap. Can't really enjoy movies in it either, as they will probably stick to 720p and 1080p for like the next 10 years.
I was thinking about making this thread. I would've called it "Trickster so mad," because Trickster loves extracting as much efficiency from his GPUs as possible and Nvidia's new highest-performance GPU is probably its most efficient ever. Lel, already have. We're living in a post-Sandy Bridge world. It's all about power consumption now. The same core in your ~4.5W tablet is also used in your 80+W desktop. We know that CPU architectures can generally only be designed for efficient use within an order of magnitude of TDPs, and the range I just described is quite a bit bigger than that. Something has to give. It's either going to be the 50-80W range or the 4.5-8W range. Which range do you think Intel cares about more? Fun fact: this is also why today's overclocking is shitty. We're already getting ~30 average fps in 4K Crysis 3 @ high. GPU manufacturers kinda saw this coming.
This may sound silly, but I swear I heard somewhere (it was like 10 years ago or something) that the higher the resolution, the less need for anti-aliasing. Is that a silly statement? I only mention it because if it's true, then anti-aliasing can be toned down and still look just as good, if not better, than what you get on a smaller display. Oh, that reminds me, isn't FXAA the cheapest anti-aliasing? I wonder what the benchmarks would look like if they used MSAA or even SSAA.
That's true assuming a constant screen size. Or, rather, it's true for an increasing PPI but not necessarily for an increasing resolution. Aliasing is caused because pixels are discrete and not continuous. Once you hit a certain pixel density, then the difference between a discrete color display and a continuous one becomes blurred, both metaphorically and physically.
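The "high enough PPI looks continuous" idea can be put into rough numbers with the common ~1 arcminute rule of thumb for visual acuity. A back-of-envelope sketch (the screen sizes, PPI values, and viewing distance below are made-up examples, not from the thread):

```python
import math

# How large one pixel appears (in arcminutes) at a given viewing distance.
# Rule of thumb: below ~1 arcminute, individual pixels - and hence jaggies -
# become hard to resolve. The example figures are illustrative assumptions.

def pixels_per_arcminute(ppi, distance_in):
    """Angular size of one pixel, in arcminutes, at distance_in inches."""
    pixel_size = 1.0 / ppi                              # inches per pixel
    angle_rad = 2 * math.atan(pixel_size / (2 * distance_in))
    return math.degrees(angle_rad) * 60                 # to arcminutes

# Roughly a 24" 1080p panel (~92 PPI) vs a 24" 4K panel (~184 PPI),
# both viewed from 24 inches away.
for ppi in (92, 184):
    print(f'{ppi} PPI at 24": {pixels_per_arcminute(ppi, 24):.2f} arcmin/pixel')
```

At ~92 PPI each pixel spans more than an arcminute, so edges still look jagged without AA; at ~184 PPI it drops below the threshold, which is why higher PPI (not just higher resolution) reduces the need for anti-aliasing.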
But that's terrible. Crysis only on high? And, from personal experience, it's a pretty nicely optimised game by now. Newer titles will have even higher requirements. It seems like it's at least two years before 4K is anywhere near usable. I also love how it's called "4K". Gotta love marketing, seeing as the last "new big thing" should've been called "2K".
4K was inherited from photography & film. TV marketers use UHD, which is consistent with their previous HD marketing.
Spartacus, I think you'll like this: Gigabyte Mini-ITX GTX 970
- 1076 MHz base, 1216 MHz boost (2% and 3% higher respectively than the reference clocks)
- 4 GB VRAM
- 1x 8-pin
- 12 cm long
Source: Guru3D
We've been over this... It can be done. The last time fooshi asked, I provided pics. And where it can't, why the fuck aren't you on a clc anyway? I'd think you people were peasants.
When are they going to stop wasting time with this efficiency bullshit and actually bring out a card capable of giving good FPS on a 4k monitor. Seriously, what the fuck is this shit. I want a 4K monitor but I'm not gaming in shit graphics or anything less than glorious 60fps.
I bet they can't and are just focusing on powersaving until someone in the tech department has a breakthrough.
Never. This architecture is in tablets now. I'm sure Nvidia wouldn't mind going into phones in a few years.
With the 970, Nvidia has the best mainstream card out there, and with dual 980s you get into territory where 4K@60FPS is possible. Also, G-Sync works fine and only with Nvidia cards, so that's another good reason to go with Nvidia.