xsistor

NVidia GT300 Tapes Out, but ATI’s ‘Evergreen’ RV800 still Ahead

In Computer Hardware, Electronics on July 31, 2009 at 6:18 pm


DirectX 11 Goodness!

The day has finally arrived. NVidia's much-awaited GT300, sporting DirectX 11 and a slew of new capabilities, has finally taped out. If these chips pass the final foundry tests, the first parts are expected out of manufacturing by November 2009, but it will be March 2010 before mass production is in full force. 'GT300', while not the official name, is what 'everyone' seems to be calling the new NVidia GPU these days. NVidia has previously revealed GT300 to be a new cGPU architecture, radically different from the previous-generation G200, G92 and G80 GPUs. The new cGPU design furthers the general-purpose processing approach that the last generation brought to market: cGPUs are quite literally GPUs imbued with CPU capabilities, bringing graphics processors ever closer to their role as general-purpose computational devices. NVidia's Adrianne must be looking very happy now… and maybe a little less creepy.

NVidia's Mascot: Meet Adrianne, rendered in photorealistic 3D

GT300 will be manufactured on TSMC's cutting-edge 40nm semiconductor fabrication process and is expected to consume around 225W for the high-end parts. There has been previous concern over low yields from TSMC's yet-to-be-proven 40nm process, and previous generations of NVidia GPUs, manufactured on TSMC's 55nm and 65nm processes, had the advantage of well-established lithography. At any rate, these concerns seem to have been swiftly swept under the rug, with both NVidia and AMD sailing strong into their 40nm futures.

To avoid littering this article with too many details that branch out from the topic, let me point you to some external links showcasing DirectX 11. ATI shows off the tessellation feature here and here. Here is some real-time ocean rendering using DX11 compute shaders. Here's an excellent video showcasing Artificial Intelligence (AI) group behaviour running on the GPU. I was actually expecting this, and I'm looking forward to games that use the GPU as an 'AI' processor, as AI algorithms naturally demand the parallelism that the CPU lacks but the GPU specialises in.

Codemasters’ upcoming Colin McRae Dirt 2 will also feature DX11:

Good News for Gamers & Scientists

These two groups are rarely equivalent, but in this case the new NVidia and AMD GPUs will target both disparate sets of end-users. In the last few years graphics processors surpassed CPUs in terms of pure processing power, reaching and exceeding 1 TFLOPS (a trillion floating-point operations per second), and thus there has been a drive within the industry to allow general-purpose algorithms to be implemented on these high-performance GPUs. Graphics cards have already evolved to have 240-480 stream processors on the high end, making them several times faster than the fastest Intel processor at raw number crunching. All that's needed to harness this power for engineering CAD and scientific simulation applications is a software API that allows programming these systems for tasks other than Crysis, Far Cry 2 or Company of Heroes. Enter NVidia CUDA and cGPUs. NVidia CUDA (Compute Unified Device Architecture) is a parallel processing architecture and associated development tools developed by NVidia to implement computational algorithms on current-generation GeForce GPUs. Among its applications are computational biology, Artificial Neural Networks (ANNs) & AI, image processing & machine vision, cryptography, and various problem sets that are most efficiently modelled by nonlinear differential equations.
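Actual CUDA kernels are written in C, but the core idea is language-neutral: a "kernel" is a scalar function applied independently at every index, and the GPU fans those invocations out across its hundreds of stream processors. Here is a minimal sketch of that execution model in plain Python; the function names are my own illustration, not the CUDA API:

```python
# Sketch of the data-parallel "kernel" model CUDA exposes. Each thread
# computes exactly one output element, with no loops and no dependencies
# between indices -- which is precisely what makes it parallelisable.

def saxpy_kernel(i, a, x, y, out):
    # One "thread" of work: the classic SAXPY operation a*x + y.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a GPU this dispatch happens in parallel across stream
    # processors; here we emulate it with a serial loop.
    for i in range(n):
        kernel(i, *args)

n = 8
x = list(range(n))       # [0, 1, ..., 7]
y = [1.0] * n
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)               # each element is 2*x[i] + y[i]
```

Anything that decomposes this way, from neural-network layers to image filters, inherits the GPU's raw throughput almost for free.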

In fact, the evolution of GPUs into general-purpose processors has led to a resurgence of interest in 'software rendering.' Techniques such as ray-tracing rear their not-so-ugly heads as a harbinger of death to current-generation rasterised graphics rendering, a corollary of the so-called 'Wheel of Reincarnation' in electronics engineering and computer science. No, this is not some arcane Buddhist or Hindu tenet. Rather, it is an observation of how technology evolves in response to performance demands: higher-level functions first move out into peripheral devices, and are later integrated back into the processor, in a continuous cycle. This is eloquently described here.

Adrianne: This is not a photograph - beggars belief!

Anyone who’s seen the raytraced version of Quake IV may have noticed the sheer realism of the lighting effects. It is all the more impressive when you stop to think that the Quake video is actually running on an 8-core CPU rather than a 240-processor GPU. I found the Quake III video to be even more impressive, as the contrast with the original is vast. If you can ignore the low-polygon models and low-resolution textures, you’ll notice some impressive lighting effects that were certainly not there in the original game. In fact, the lighting effects in this video are better than what you find in the current generation of rasterised games.

To sum it up: raytracing actually models light as it behaves from a Physics standpoint, rather than using clever techniques to reproduce an approximation of its behavior (as is being done in 3D graphics today).
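To make that concrete, the core step of any ray tracer is geometric: fire a ray from the eye and solve for where it first hits the scene, then shade from the actual light direction. A toy ray-sphere intersection (my own minimal sketch, not code from any of the demos linked above):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t along a normalised ray to the nearest hit on a sphere,
    or None on a miss. This solves the quadratic
    |origin + t*direction - center|^2 = radius^2 for t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                # ray misses the sphere entirely
    t = -b - math.sqrt(disc)       # nearer of the two intersections
    return t if t > 0 else None

# A ray fired down the z-axis from z=-3 hits a unit sphere at the
# origin at distance t = 2.
print(ray_sphere((0, 0, -3), (0, 0, 1), (0, 0, 0), 1.0))  # -> 2.0
```

A real renderer then recurses from the hit point towards lights, reflections and refractions; since every pixel's ray is independent, the workload maps naturally onto exactly the kind of parallel hardware discussed above.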

GT300 Facts

In keeping with the established theme of this blog, we must dance with the one that brought us. So back to GT300 and RV800.

A few GT300 facts then.

  • If you have five dollars and a GT300 has five dollars, the GT300 has more money than you.
  • GT300s can kill two stones with one bird.
  • Guns don’t kill people. GT300s do.

Okay, I admit to hijacking the Internet meme ‘Chuck Norris Facts’ for this one. Yet I’m sure NVidia would be thrilled if public perception of their upcoming processor veered in this general direction. The reality may not always live up to the hype. Still, the GT300 does seem to have some impressive features of its own. First there is the DirectX 11 support, which everyone’s looking forward to. Then there is the use of GDDR5 memory to lower manufacturing costs, although this brings NVidia GPUs in line with previous-generation RV700 ATI-AMD GPUs, which already use GDDR5 on the higher-end boards.

ATI-AMD's Mascot: Meet Ruby - She's heard about NVidia's GT300 and she's 'en garde'.

The reported die size for the high-end parts is around 530mm², packing over 2.4 billion transistors. This is a sizeable beast when compared to the 1.4 billion transistors found in the previous-generation GT200 chips. The high-end cards are also expected to carry 512 shader processors with a 512-bit memory bus, a large improvement over the existing 240 shader processors on a single GPU. There are claims that the much higher achievable clock rates and the new architecture could allow for a 6x-15x performance improvement over comparably-sized previous-generation GPUs. This is startling news if the GT300 lives up to the claim. The current-generation GT200s offer truly awesome performance themselves, so even a 6x improvement would be enormous. At any rate, even a 2x-3x improvement over GT200 would put us around 3 TFLOPS, which is nothing to laugh about: these are yesterday's supercomputers sitting on your GPU. They put men on the moon with far less. NVidia is touting GT300 as the biggest revolution in GPUs since the first SIMD (Single Instruction Stream, Multiple Data Stream) GPUs of yesteryear, while ATI claims their RV870 line of GPUs will be a DirectX 11-tweaked RV770. This may explain why NVidia's cards are shipping later.

Nordichardware reports some theoretical numbers pulled off a German hardware site. The values thrown about are 700MHz core / 1600MHz shader / 4000MHz memory clocks, used to calculate a theoretical 2.5 TFLOPS and 282 GB/s of memory bandwidth, more than twice the specs seen on the previous-generation GTX285. The bumps in core and shader clocks are not that high here, but they are certainly typical of a die-shrink (resulting in less heat, higher clocks and better stability), and 4000MHz memory is to be expected of GDDR5. I feel this is speculation extrapolated from the move from 55nm to 40nm and from the current-generation Radeon 4870/4890, which use GDDR5, but I do think this could be considered a safe lower bound for the kind of performance we can expect from GT300. We can't really know how the card performs until the first GPUs are out and in the hands of benchmarkers.
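The arithmetic behind those theoretical numbers is easy to check yourself. Note that the 3 FLOPs per shader per clock (the dual-issue MAD + MUL) is carried over from GT200; whether GT300 keeps it is my assumption, not a confirmed spec:

```python
# Back-of-the-envelope check of the rumoured GT300 numbers.
# Assumption: 3 FLOPs per shader per clock (MAD + MUL dual issue),
# as on GT200 -- GT300's real issue rate is not yet public.
shaders = 512
shader_clock_hz = 1600e6
tflops = shaders * shader_clock_hz * 3 / 1e12
print(tflops)                    # -> 2.4576, i.e. the quoted ~2.5 TFLOPS

# Bandwidth: a 512-bit bus moves 64 bytes per effective memory clock.
bus_bytes = 512 // 8
print(bus_bytes * 4.0e9 / 1e9)   # 4000 MHz effective -> 256.0 GB/s
print(bus_bytes * 4.4e9 / 1e9)   # 281.6 GB/s, i.e. the quoted 282
```

Interestingly, the quoted 282 GB/s actually requires an effective data rate nearer 4400MHz than 4000MHz, so the leaked figures aren't quite self-consistent; treat them as ballpark.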

Tentative Releases

ATI is expecting to launch their next-generation GPUs later this year, so if you want a DirectX 11 chip FAST, look to ATI; NVidia's still going to be a while. There are also reports that AMD will move from its current GDDR5 and adopt the much faster XDR2 or Twin Transistor RAM (TTRAM) with their upcoming RV800-based GPUs. Both companies will release four different product lines, differentiated by their intended market segments. I'd expect to see:

  • Low-end desktop
  • Mid-range/performance
  • High-end/high performance
  • Dual GPU/extreme performance

NVidia expects its first flagship card (I’d expect that to be the high-end/high-performance part) to retail for $600. ATI will be targeting price points of $200-$700 for their various parts as they enter production this October. The lower-end GT215 40nm cards will also be debuting this December. This is potentially good news for the enthusiast market: NVidia’s 40nm refresh of the 55nm GT200b, called GT212, may also make an appearance at (likely) reduced prices. If this happens there could be an increase in the number of shader processors, and possibly DirectX 11 support, according to older sources.

NVidia Roadmap - Sometime around January 2009

GT212 Specs - A smaller memory bus is sufficient owing to the much faster GDDR5 memory

Overall, very good news for the consumer. It’s not like NVidia to trail behind, but if history’s anything to go by, I’d expect them to come in late with an overpriced card in true NVidia style, and steal the performance show from right under AMD’s nose.

It remains to be seen.

  1. Hmn… ok, not bad.

  2. Well, well. Milkin Gunther. If it isn’t my old nemesis.

  3. […] its OptiX real-time ray tracing engine at SIGGRAPH 2009, yesterday. You may recall my previous article on how GPU-based ray tracing may become the future of 3D graphics in games with the release of the GT300 family of GPUs. The […]

