
Thread: News - NVIDIA details next-generation Fermi GPU architecture

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    20,919
    Thanks
    0
    Thanked
    593 times in 294 posts

    News - NVIDIA details next-generation Fermi GPU architecture

    NVIDIA spills the beans on Fermi. Good enough to take down the Radeon HD 5870? We take a first look at the architecture.
    Read more.

  2. #2
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    24,314
    Thanks
    929
    Thanked
    2,181 times in 1,783 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte X58A UD3R rev 2
      • CPU:
      • Intel i7 950
      • Memory:
      • 12gb DDR3 2000
      • Graphics card(s):
      • AMD HD7870
      • PSU:
      • XFX Pro 650W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 7 Pro x64
      • Monitor(s):
      • Dell U2311H
      • Internet:
      • O2 8mbps

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    They seem incredibly reluctant to even talk about gaming, let alone give realistic hints about performance. It's almost like they are counting on HPC sales to grow exponentially to make up for a poor forecast for gaming profit.

    I think AMD will be satisfied.

  3. #3
    Senior Member
    Join Date
    Apr 2009
    Location
    Oxford
    Posts
    263
    Thanks
    5
    Thanked
    7 times in 6 posts
    • borandi's system
      • Motherboard:
      • Gigabyte EX58-UD3R
      • CPU:
      • Core i7 920 D0 (2.66Ghz) @ 4.1Ghz
      • Memory:
      • G.Skill 3x1GB DDR3-1333Mhz
      • Storage:
      • Samsung PB22-J 64GB
      • Graphics card(s):
      • 2x5850 in CF
      • PSU:
      • 800W
      • Case:
      • Verre V770
      • Operating System:
      • Windoze XP Pro
      • Monitor(s):
      • 19"WS
      • Internet:
      • 8MB/448kbps up

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Gaming drives the HPC market in terms of tech, so nVidia has to be within a smidgen of ATI for gaming to compete in both spaces.

    ٩(̾●̮̮̃̾•̃̾)۶

  4. #4
    Butter king GheeTsar's Avatar
    Join Date
    Jan 2009
    Location
    The shire of berks
    Posts
    2,096
    Thanks
    152
    Thanked
    260 times in 163 posts
    • GheeTsar's system
      • Motherboard:
      • Gigabyte GA-EX38-DS4
      • CPU:
      • Intel Core 2 Duo E8500 3.16GHz
      • Memory:
      • Corsair 4GB DDR2 XMS2 Dominator PC2-8500C5
      • Storage:
      • Intel G2 X25-M SSD + WD Caviar Black 1TB + Samsung F3 1TB
      • Graphics card(s):
      • HD 5870 OC (Powercolour PCS+)
      • PSU:
      • Tagan TG600-U33 600W
      • Case:
      • Fractal Design Define R3
      • Operating System:
      • Windows 7
      • Monitor(s):
      • Acer 24" 120Hz GD245HQ
      • Internet:
      • Waitrose 8meg (lol)

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    It's very hard to see how, from a gaming perspective, nVidia will be able to match ATI on a price per performance basis. I hope I'm wrong as the price of 5870s could stay high for quite some time if this does come to pass.

  5. #5
    Super Nerd
    Join Date
    Jul 2008
    Location
    London
    Posts
    1,579
    Thanks
    15
    Thanked
    94 times in 64 posts
    • kingpotnoodle's system
      • Motherboard:
      • Asus P8Z68 Pro
      • CPU:
      • Core i7 2600K
      • Memory:
      • 8GB Corsair Vengeance DDR3 1600MHz CL8
      • Storage:
      • 256GB Corsair m4 SSD & 1.5TB Seagate
      • Graphics card(s):
      • Asus GTX-670 DirectCU-II
      • PSU:
      • Corsair AX 750W
      • Case:
      • Silverstone FT02B
      • Operating System:
      • Windows 7 Pro 64
      • Monitor(s):
      • Hazro HZ27WD
      • Internet:
      • Be ADSL2+ ~8Mb

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    They have more than doubled the SP count (512 is more than a GTX 295), so this should be much faster given the inevitable tweaks under the hood...

    I reckon it'll compete with a 5870 fairly evenly, and for me the value proposition of NVidia with CUDA, PhysX, the potential for Flash acceleration etc. is better than ATI's (guess it depends on your viewpoint though), so assuming the power/efficiency are good I'm looking forward to this; it's going to be hard to choose...

  6. #6
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    7,978
    Thanks
    408
    Thanked
    563 times in 479 posts
    • shaithis's system
      • Motherboard:
      • Asus P8Z77 WS
      • CPU:
      • i5 3570k @ 4.4GHz
      • Memory:
      • 8GB Corsair Dominator DDR3-1600 LP
      • Storage:
      • 3 RAID Arrays and 3 SATA Stand-alones
      • Graphics card(s):
      • GTX580 SLI
      • PSU:
      • Corsair HX850
      • Case:
      • Corsair 600T (White)
      • Operating System:
      • Windows 7 x64 / OSX Lion / Ubuntu 11 x64
      • Monitor(s):
      • 2 x Dell 3007
      • Internet:
      • M247 40/10 FTTC

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by kingpotnoodle View Post
    They have more than doubled the SP count (512 is more than a GTX 295), so this should be much faster given the inevitable tweaks under the hood...
    No mention of clock speeds though, so it's possible that the clocks have been greatly reduced to fit the SPs in.

    I doubt it but it's a possibility.
    Main PC: Asus P8Z77 WS / 3570k @ 4.4GHz / 8GB Vengeance Black / GTX 780 Ti / Areca 1680 / HX 850 / 600T / K60 / M60 / 2x Dell 3007 / 2 x 256GB Samsung 830 (RAID0) / 2 x 240GB Corsair Force 3 (RAID0) / Windows 8.1
    HTPC: AsRock Z77 Pro 4 / E3-1230v2 / 8GB XMS3 / GTX 780 / Tevii S480 / SST-LC20 / Antec TP-550 / PS50C6900 / 128GB Kingston V200 SSD + 3 x 1.5TB + 1 x 3TB / Windows 8.1 x64 Pro with WMC
    HTPC2: Asus AM1I-A / 5150 / 4GB DDR3 / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Windows 8.1 x64 Pro with WMC
    Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB RAM / GTS 450 / Corsair 300R / Silverpower 700W modular
    Server Setup: HP DL160 G6 / 2 x E5620 / 64GB RAM / 2 x 300GB SAS (RAID1) / 6 NICs / ESX 5.5
    2 x ESX 5.5 Nodes: Asus M5A78L-M/USB3 / AMD FX 6100 / 16GB XMS3 / 160GB SATA HDD / 5 NICs
    NAS 1: HP N40L / 10GB RAM / 2x 2 x 3TB + 80GB Intel SSD (Hybrid) || NAS 2: HP N40L / 10GB RAM / 2x 2 x 3TB + 80GB Intel SSD (Hybrid) || Network: TL-WR1043ND w/DD-WRT + Dell PowerConnect 5224
    Laptop: Thinkpad T61 / 4GB RAM / Centrino 2230 Wifi / 240GB Corsair Force 3

  7. #7
    Loves Wifey dangel's Avatar
    Join Date
    Aug 2005
    Location
    Cambridge, UK
    Posts
    8,346
    Thanks
    404
    Thanked
    448 times in 330 posts
    • dangel's system
      • Motherboard:
      • See My Sig
      • CPU:
      • See My Sig
      • Memory:
      • See My Sig
      • Storage:
      • See My Sig
      • Graphics card(s):
      • See My Sig
      • PSU:
      • See My Sig
      • Case:
      • See My Sig
      • Operating System:
      • Windows 7
      • Monitor(s):
      • See My Sig
      • Internet:
      • 20mbit Sky LLU

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Happily, I can afford to wait: my current card is fast enough, and DirectX 11 brings speed improvements for all cards this time round, so...
    System 001: Asus Z68 Deluxe, 2600k i7, EK Supreme HF - Full Copper CPU Block, GTX 670 FTW 2GB x 2 SLI, EK 680 GPU Blocks/EK Bridge, 16GIG Corsair Vengence DDR3 RAM CL9 @ 1600mhz, Corsair HX1000, Dell U2412M (+2 other Dell IPS'), Logitech 5.1, Samsung F3 1TB x 2, Samsung 840 Pro 256GB SSD (System), Samsung 830 128GB SDD (Games), Antec 1200 case, Thermochill 120.4 rad, Vario Pump, Windows 8.1.1 x64, Cyberpower 1500VA UPS[main]
    System 002: A8 3850 APU, ASUS uATX FM1A75 MB, 4GB Corsair Vengeance DDR3, Corsair psu, OCZ Agility 3, 1TB F3, Dell 2001FP 20" LCD, £7's worth of 5.1 speakers (they rock) Windows 7 x64[wife/server]
    System 003: AsRock MB, APU, 8 GIG Corsair, Silverstone HTPC case, stock cooler, GT220 1gbDDR3, WD Green 3TB, Kingston 40gb SSD, MCE Remote, Panasonic 50" LCD (87BDX) via HDMI Windows 8.1.1 (32) [media centre]
    System 004: Asus UL50AT Intel Core 2 Duo,4GB, Intel Gen 2 80GB SSD, Win 8.1.1 x64 [no justification]
    System 005: HP Proliant N40L Microserver, 4x2TB drives, fan mod, Pico PSU mod, Win7 x86 [file server]
    System 006: Dell Optiplex 9010, i7, 8gb, 128gb Samsung 830 x 2 (boot and VM drive), 1TB WD HDD, ATI something, Windows 8.1.1 x64 RTM [work]


  8. #8
    Senior Member chrestomanci's Avatar
    Join Date
    Sep 2004
    Location
    Reading
    Posts
    1,536
    Thanks
    75
    Thanked
    80 times in 68 posts
    • chrestomanci's system
      • Motherboard:
      • Gigabyte GA-EQ45M-S2
      • CPU:
      • Intel Q6600 @ stock clocks
      • Memory:
      • 8Gb 800MHz DDR-2
      • Storage:
      • OCZ SSD Boot drive + 3Tb Western Digital Red
      • Graphics card(s):
      • ATI X1650 (OSS linux drivers)
      • PSU:
      • Novatech 500W
      • Case:
      • Silverstone Sugo SG03 Black
      • Operating System:
      • Linux - Xubuntu Trusty
      • Monitor(s):
      • BenQ 24" LCD (Thanks: DDY)
      • Internet:
      • Zen FTTC

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by HEXUS View Post
    NVIDIA says that the GPU will also run the likes of Python and Java, although just how much use that will be is debatable.
    I think it will be a lot of use, because there are loads of programmers out there who prefer to program in Python or Java and don't like C. It is also a lot quicker to write useful programs in high-level languages than in C.

    Suppose you have an existing program written in Java. It currently takes an hour to run, and because it gets run a great deal you have a business need for it to run faster.

    You could re-write the time-critical sections in C, which will make the program about 50% faster (40 minutes), but to do so you would need to learn C, and the resulting code would be more bug-prone.

    Alternatively you could ask your boss for £1000 for an nVidia CUDA card that will run the code 100 times faster (36 seconds), with only minor tweaks to the code, in a language you are already familiar with.

    Even if the program is not yet written, it is often still better to write in a high-level language than a low-level one, as development will be faster. If that last bit of performance is still needed then the critical sections can still be re-written in C, but most of the time the 100x speedup from using CUDA will be good enough.
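
    To make the "minor tweaks in a language you already know" point concrete, here is a rough sketch of the sort of thing a third-party toolkit such as PyCUDA lets a Python programmer do: the program stays in Python, but the element-wise arithmetic runs as compiled CUDA code on the card rather than in the interpreter. (A hedged illustration only, assuming PyCUDA and a CUDA-capable GPU are installed; it is not taken from NVIDIA's Fermi material.)

    Code:
    # Minimal PyCUDA sketch (assumes PyCUDA and a CUDA-capable GPU are available).
    import numpy as np
    import pycuda.autoinit               # sets up the CUDA context for us
    import pycuda.gpuarray as gpuarray

    data = np.random.randn(10000000).astype(np.float32)   # data from the existing program

    gpu_data = gpuarray.to_gpu(data)              # copy the array onto the card
    result = (gpu_data * gpu_data + 1.0).get()    # x*x + 1 computed on the GPU, copied back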

  9. #9
    Team HEXUS.net
    Join Date
    Jul 2003
    Posts
    1,149
    Thanks
    46
    Thanked
    196 times in 117 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by chrestomanci View Post
    I think it will be a lot of use, because there are loads of programmers out there who prefer to program in Python or Java and don't like C. It is also a lot quicker to write useful programs in high-level languages than in C.

    Suppose you have an existing program written in Java. It currently takes an hour to run, and because it gets run a great deal you have a business need for it to run faster.

    You could re-write the time-critical sections in C, which will make the program about 50% faster (40 minutes), but to do so you would need to learn C, and the resulting code would be more bug-prone.

    Alternatively you could ask your boss for £1000 for an nVidia CUDA card that will run the code 100 times faster (36 seconds), with only minor tweaks to the code, in a language you are already familiar with.

    Even if the program is not yet written, it is often still better to write in a high-level language than a low-level one, as development will be faster. If that last bit of performance is still needed then the critical sections can still be re-written in C, but most of the time the 100x speedup from using CUDA will be good enough.
    It's the implementation that I'm querying rather than the use, I suppose. Python won't run natively on a GPU, and the 'translation' would hinder performance.

  10. #10
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    24,314
    Thanks
    929
    Thanked
    2,181 times in 1,783 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte X58A UD3R rev 2
      • CPU:
      • Intel i7 950
      • Memory:
      • 12gb DDR3 2000
      • Graphics card(s):
      • AMD HD7870
      • PSU:
      • XFX Pro 650W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 7 Pro x64
      • Monitor(s):
      • Dell U2311H
      • Internet:
      • O2 8mbps

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Exactly.

    Quote Originally Posted by RealWorldTechnologies
    Nvidia's marketing is making ridiculous claims that they will eventually have Python and Java support, but the reality is that neither language can run natively on a GPU. An interpreted language such as Python would kill performance, so what is likely meant is that Python and Java can call libraries which are written to take advantage of CUDA.
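
    For what it's worth, that is roughly how existing bindings such as PyCUDA already work: the host-side logic stays in Python, while the code that actually executes on the GPU is a CUDA C kernel compiled at runtime. A hedged sketch (the kernel and names below are invented for illustration, not NVIDIA's example):

    Code:
    # Python only orchestrates; the kernel that runs on the GPU is CUDA C.
    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void scale(float *out, const float *in, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * factor;
    }
    """)
    scale = mod.get_function("scale")

    data = np.random.randn(1024).astype(np.float32)
    out = np.empty_like(data)
    scale(drv.Out(out), drv.In(data), np.float32(2.0), np.int32(data.size),
          block=(256, 1, 1), grid=(4, 1))   # 4 blocks x 256 threads cover 1024 elements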

  11. #11
    Team HEXUS.net
    Join Date
    Jul 2003
    Posts
    1,149
    Thanks
    46
    Thanked
    196 times in 117 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    For anyone interested in the architecture in greater depth, NVIDIA released a whitepaper to the press a few days ago. It's now on the site, so read away (PDF).

    http://www.nvidia.com/content/PDF/fe...Whitepaper.pdf

  12. #12
    "make it so" scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Manchester
    Posts
    10,777
    Thanks
    832
    Thanked
    1,398 times in 1,209 posts
    • scaryjim's system
      • Motherboard:
      • Asus M4A785TD-M EVO
      • CPU:
      • Phenom II X4 905e
      • Memory:
      • 2x 4GB Crucial Ballistix Tactical VLP
      • Storage:
      • 750GB Seagate
      • Graphics card(s):
      • Sapphire 7750 Low Profile
      • PSU:
      • FSP 250W TFX
      • Case:
      • AOpen H360b
      • Operating System:
      • Windows 7 Professional x64
      • Monitor(s):
      • Iiyama ProLite E481S

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Tarinder View Post
    Python won't run natively on a GPU, and the 'translation' would hinder performance.
    Not only that, but surely, to make effective use of a GPU with that many stream processors, your code would already have to be written to be massively multithreaded. Having done an MSc which taught Java as its principal language, and therefore knowing the coding skills of many professional Java developers, the concept of them trying to develop a massively multithreaded software architecture to take advantage of this leaves me shivering in terror...

  13. #13
    HEXUS webmaster Steve's Avatar
    Join Date
    Nov 2003
    Location
    Bristol
    Posts
    14,231
    Thanks
    284
    Thanked
    814 times in 462 posts
    • Steve's system
      • CPU:
      • Intel i3-350M 2.27GHz
      • Memory:
      • 8GiB Crucial DDR3
      • Storage:
      • 320GB HDD
      • Graphics card(s):
      • Intel HD3000
      • Operating System:
      • Ubuntu 11.10

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Tarinder View Post
    It's the implementation that I'm querying rather than the use, I suppose. Python won't run natively on a GPU, and the 'translation' would hinder performance.
    I don't think the language you write in is that big a deal if it comes with a decent library for this sort of stuff, or a good compiler (the world needs more compiler writers).
    Quote Originally Posted by chrestomanci
    I think it will be a lot of use because there are loads of programmers out there who prefer to program in Python or Java, and don't like C. It is also a lot quicker to write useful programs in high level languages, than in C.
    The problem is, most workloads just aren't written to do SIMD. OK, so new CUDA can run multiple kernels, but I doubt you can run as many kernels as you have streams (I guess I should read the whitepaper!).

    If you want to make a GPGPU run fast, you need to take a lot of data, chop it up, and apply the same operations to each chunk - which is why you can dunk it through something massively parallel.

    As soon as the operations you need to perform vary between each chunk (e.g. you have branches) the whole thing breaks down. Now, assuming you've got data that lends itself to parallel processing, there are ways of dealing with conditionals that don't involve branching.

    Indeed, the reason GPUs have turned into the parallelised beasts that they are, is that graphics shaders and the data they work on are perfect for such situations.

    There are a lot of workloads that can have multiple things happening at once, but that's not the same as doing the same thing to lots of data elements at once, which is why we don't have 512-core CPUs (yet...).
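
    A crude CPU-side illustration of the "dealing with conditionals without branching" idea mentioned above, in plain Python/NumPy rather than anything GPU-specific: instead of branching per element, both outcomes are computed for every element and a per-element select picks the result, so the same instruction stream is applied across all the data.

    Code:
    # Branchless, data-parallel conditional (a sketch of the predication idea).
    import numpy as np

    x = np.random.randn(1000000).astype(np.float32)

    # Both expressions are evaluated for every element; np.where then selects.
    # np.abs keeps the sqrt well-defined even for elements the select discards.
    result = np.where(x > 0.0, np.sqrt(np.abs(x)), x * 0.5)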
    PHP Code:
    $s = new signature();
    $s->sarcasm()->intellect()->font('Courier New')->display(); 

  14. #14
    Senior Member Hicks12's Avatar
    Join Date
    Jan 2008
    Location
    Plymouth-SouthWest
    Posts
    6,431
    Thanks
    1,058
    Thanked
    301 times in 266 posts
    • Hicks12's system
      • Motherboard:
      • Asus P8Z68-V
      • CPU:
      • Intel i5 2500k@4ghz, cooled by EK Supreme HF
      • Memory:
      • 8GB Kingston hyperX ddr3 PC3-12800 1600mhz
      • Storage:
      • 64GB M4/128GB M4 / WD 640GB AAKS / 1TB Samsung F3
      • Graphics card(s):
      • Palit GTX460 @ 900Mhz Core
      • PSU:
      • 675W ThermalTake ThoughPower XT
      • Case:
      • Lian Li PC-A70 with modded top for 360mm rad
      • Operating System:
      • Windows 7 Professional 64bit
      • Monitor(s):
      • Dell U2311H IPS
      • Internet:
      • 10mb/s cable from virgin media

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by kingpotnoodle View Post
    They have more than doubled the SP count (512 is more than a GTX 295), so this should be much faster given the inevitable tweaks under the hood...

    I reckon it'll compete with a 5870 fairly evenly, and for me the value proposition of NVidia with CUDA, PhysX, the potential for Flash acceleration etc. is better than ATI's (guess it depends on your viewpoint though), so assuming the power/efficiency are good I'm looking forward to this; it's going to be hard to choose...
    Yes, they've also increased the bus to 300-bit and added more onboard RAM; this all adds to a huge expense, so it will probably offer better performance than the 5870 but will also cost a HUGE amount more.
    Edit: also, given the amount of R&D that's gone into this project, it won't be healthy for Nvidia to compete on price with AMD's 5000 series (which carries a lot less R&D cost); either they sell at a loss or sell at a huge price that people won't buy. Although it's the start of what could be an amazing platform/design, it's just going to be a profitless technology unless some serious cost cutting and development can be made.

    Doesn't matter anyway; by the time Fermi is out, AMD will already have its 6000 series out, or a few months away, I reckon.
    Last edited by Hicks12; 18-01-2010 at 11:52 AM.
    Quote Originally Posted by snootyjim View Post
    Trust me, go into any local club and shout "I've got dual Nehalem Xeons" and all of the girls will practically collapse on the spot at the thought of your e-penis

  15. #15
    "make it so" scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Manchester
    Posts
    10,777
    Thanks
    832
    Thanked
    1,398 times in 1,209 posts
    • scaryjim's system
      • Motherboard:
      • Asus M4A785TD-M EVO
      • CPU:
      • Phenom II X4 905e
      • Memory:
      • 2x 4GB Crucial Ballistix Tactical VLP
      • Storage:
      • 750GB Seagate
      • Graphics card(s):
      • Sapphire 7750 Low Profile
      • PSU:
      • FSP 250W TFX
      • Case:
      • AOpen H360b
      • Operating System:
      • Windows 7 Professional x64
      • Monitor(s):
      • Iiyama ProLite E481S

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Hicks12 View Post
    Yes, they've also increased the bus to 300-bit and added more onboard RAM...
    A 384-bit memory bus is actually smaller than the GTX 285's, which interfaced using a 512-bit bus, so there will be a small cost saving from using fewer memory chips. It does mean it'll end up shipping with one of those odd-sounding memory buffers though: 1.5GB most likely (I can't see them bothering with a 768MB version of the top-end card).

    Of course, ATI only use a 256-bit bus, so unless Nvidia run their GDDR5 at under 3200 effective they're going to have more memory bandwidth on tap. On the other hand, it's debatable whether ATI's top-end cards are bandwidth-limited anyway, and therefore whether that extra bandwidth will boost performance at all...
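
    The back-of-envelope arithmetic behind that claim, for anyone who wants to check it (the transfer rates below are illustrative, not confirmed Fermi clocks):

    Code:
    # Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
    def bandwidth_gb_s(bus_width_bits, effective_mt_s):
        return (bus_width_bits / 8.0) * effective_mt_s / 1000.0   # GB/s

    print(bandwidth_gb_s(256, 4800))   # HD 5870-style: 256-bit @ 4800MT/s -> 153.6 GB/s
    print(bandwidth_gb_s(384, 3200))   # a 384-bit bus matches that at only ~3200MT/s effective
    print(bandwidth_gb_s(384, 4000))   # anything quicker and Fermi has more bandwidth on tap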

  16. Received thanks from:

    Hicks12 (18-01-2010)

  17. #16
    Senior Member Hicks12's Avatar
    Join Date
    Jan 2008
    Location
    Plymouth-SouthWest
    Posts
    6,431
    Thanks
    1,058
    Thanked
    301 times in 266 posts
    • Hicks12's system
      • Motherboard:
      • Asus P8Z68-V
      • CPU:
      • Intel i5 2500k@4ghz, cooled by EK Supreme HF
      • Memory:
      • 8GB Kingston hyperX ddr3 PC3-12800 1600mhz
      • Storage:
      • 64GB M4/128GB M4 / WD 640GB AAKS / 1TB Samsung F3
      • Graphics card(s):
      • Palit GTX460 @ 900Mhz Core
      • PSU:
      • 675W ThermalTake ThoughPower XT
      • Case:
      • Lian Li PC-A70 with modded top for 360mm rad
      • Operating System:
      • Windows 7 Professional 64bit
      • Monitor(s):
      • Dell U2311H IPS
      • Internet:
      • 10mb/s cable from virgin media

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Sorry, I haven't looked much into Nvidia's top-end cards; I've only focused on the GTX 260 :L.

    Thanks for informing me of that; the rest is still right though, isn't it? R&D needs to be recouped from somewhere, and it's going to be in the price of the cards. AMD just tweaked their design and added more, which is great (it shows) and in the end costs a lot less than changing the whole design!

    We'll only know whether the extra bandwidth helps when Fermi is released; I'm betting on a Q2 release now, tbh.
    Quote Originally Posted by snootyjim View Post
    Trust me, go into any local club and shout "I've got dual Nehalem Xeons" and all of the girls will practically collapse on the spot at the thought of your e-penis
