
Thread: News - NVIDIA details next-generation Fermi GPU architecture

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    31,709
    Thanks
    0
    Thanked
    2,073 times in 719 posts

    News - NVIDIA details next-generation Fermi GPU architecture

    NVIDIA spills the beans on Fermi. Good enough to take down the Radeon HD 5870? We take a first look at the architecture.
    Read more.

  2. #2
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,025
    Thanks
    1,871
    Thanked
    3,383 times in 2,720 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - NVIDIA details next-generation Fermi GPU architecture

They seem incredibly reluctant to even talk about gaming, let alone give realistic hints about performance. It's almost like they're counting on HPC sales growing exponentially to make up for a poor forecast on gaming profit.

    I think AMD will be satisfied.

  3. #3
    Senior Member
    Join Date
    Apr 2009
    Location
    Oxford
    Posts
    263
    Thanks
    5
    Thanked
    7 times in 6 posts
    • borandi's system
      • Motherboard:
      • Gigabyte EX58-UD3R
      • CPU:
      • Core i7 920 D0 (2.66Ghz) @ 4.1Ghz
      • Memory:
      • G.Skill 3x1GB DDR3-1333Mhz
      • Storage:
      • Samsung PB22-J 64GB
      • Graphics card(s):
      • 2x5850 in CF
      • PSU:
      • 800W
      • Case:
      • Verre V770
      • Operating System:
      • Windoze XP Pro
      • Monitor(s):
      • 19"WS
      • Internet:
      • 8MB/448kbps up

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Gaming drives the HPC market in terms of tech, so nVidia has to be within a smidgen of ATI for gaming to compete in both spaces.

    ٩(̾●̮̮̃̾•̃̾)۶

  4. #4
    Butter king GheeTsar's Avatar
    Join Date
    Jan 2009
    Location
    The shire of berks
    Posts
    2,106
    Thanks
    153
    Thanked
    260 times in 163 posts
    • GheeTsar's system
      • Motherboard:
      • Gigabyte GA-Z68XP-UD3P
      • CPU:
      • Intel i5 2500k
      • Memory:
      • Corsair 8GB
      • Storage:
      • Samsung EVO 850 1 TB + 2 x 1TB Storage
      • Graphics card(s):
      • ASUS Radeon R9 280X
      • PSU:
      • Tagan TG600-U33 600W
      • Case:
      • Fractal Design Define R3
      • Operating System:
      • Windows 10
      • Monitor(s):
      • Acer 24" 120Hz GD245HQ
      • Internet:
      • Virgin 100mb

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    It's very hard to see how, from a gaming perspective, nVidia will be able to match ATI on a price per performance basis. I hope I'm wrong as the price of 5870s could stay high for quite some time if this does come to pass.

  5. #5
    Super Nerd
    Join Date
    Jul 2008
    Location
    Cambridge
    Posts
    1,785
    Thanks
    22
    Thanked
    105 times in 72 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

They have more than doubled the SP count; 512 is more than a GTX 295, so this should be much faster given the inevitable tweaks under the hood...

    I reckon it'll compete with a 5870 fairly evenly, and for me the value proposition of NVidia with CUDA, PhysX, the potential for flash acceleration etc is better than ATI (guess it depends on your viewpoint though) so assuming the power/efficiency are good I'm looking forward to this, going to be hard to choose...

  6. #6
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    10,857
    Thanks
    645
    Thanked
    872 times in 736 posts
    • shaithis's system
      • Motherboard:
      • Asus P8Z77 WS
      • CPU:
      • i7 3770k @ 4.5GHz
      • Memory:
      • 32GB HyperX 1866
      • Storage:
      • Lots!
      • Graphics card(s):
      • Sapphire Fury X
      • PSU:
      • Corsair HX850
      • Case:
      • Corsair 600T (White)
      • Operating System:
      • Windows 10 x64
      • Monitor(s):
      • 2 x Dell 3007
      • Internet:
      • Zen 80Mb Fibre

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by kingpotnoodle View Post
They have more than doubled the SP count; 512 is more than a GTX 295, so this should be much faster given the inevitable tweaks under the hood...
No mention of clock speeds though, so it's possible that they've been greatly reduced to fit the SPs in.

    I doubt it but it's a possibility.
    Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
    HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
    HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
    Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
    NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
    Laptop: Dell Precision 5510 Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive

  7. #7
    Lovely chap dangel's Avatar
    Join Date
    Aug 2005
    Location
    Cambridge, UK
    Posts
    8,398
    Thanks
    412
    Thanked
    459 times in 334 posts
    • dangel's system
      • Motherboard:
      • See My Sig
      • CPU:
      • See My Sig
      • Memory:
      • See My Sig
      • Storage:
      • See My Sig
      • Graphics card(s):
      • See My Sig
      • PSU:
      • See My Sig
      • Case:
      • See My Sig
      • Operating System:
      • Windows 10
      • Monitor(s):
      • See My Sig
      • Internet:
      • 60mbit Sky LLU

    Re: News - NVIDIA details next-generation Fermi GPU architecture

Happily, I can afford to wait - my current card is fast enough, and DirectX 11 brings speed improvements for all cards this time round, so...
    Crosshair VIII Hero (WIFI), 3900x, 32GB DDR4, Many SSDs, EVGA FTW3 3090, Ethoo 719


  8. #8
    Senior Member chrestomanci's Avatar
    Join Date
    Sep 2004
    Location
    Reading
    Posts
    1,614
    Thanks
    94
    Thanked
    96 times in 80 posts
    • chrestomanci's system
      • Motherboard:
      • Asus AMD AM4 Ryzen PRIME B350M
      • CPU:
      • AMD Ryzen 1600 @ stock clocks
      • Memory:
      • 16Gb DDR4 2666MHz
      • Storage:
      • 250Gb Samsung 960 Evo M.2 + 3Tb Western Digital Red
      • Graphics card(s):
      • Basic AMD GPU (OSS linux drivers)
      • PSU:
      • Novatech 500W
      • Case:
      • Silverstone Sugo SG02
      • Operating System:
      • Linux - Latest Xubuntu
      • Monitor(s):
      • BenQ 24" LCD (Thanks: DDY)
      • Internet:
      • Zen FTTC

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by HEXUS View Post
    NVIDIA says that the GPU will also run the likes of Python and Java, although just how much use that will be is debatable.
I think it will be a lot of use, because there are loads of programmers out there who prefer to program in Python or Java and don't like C. It is also a lot quicker to write useful programs in high-level languages than in C.

Suppose you have an existing program written in Java. It currently takes an hour to run, and because it gets run a great deal you have a business need for it to run faster.

You could re-write the time-critical sections in C, which will make the program about 50% faster (40 minutes), but to do so you would need to learn C, and the resultant code would be more bug-prone.

Alternatively you could ask your boss for £1000 for an nVidia CUDA card that will run the code 100 times faster (36 seconds), with only minor tweaks to the code in a language you are already familiar with.

    Even if the program is not yet written, it is often still better to write in a high level language than a low level one as development will be faster. If that last bit of performance is still needed then the critical sections can still be re-written in C, but most of the time the 100x speedup from using CUDA will be good enough.
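
    As a rough illustration of how small those tweaks could be, here's a sketch using the PyCUDA bindings (a real project; the kernel, array sizes and any implied speedup here are made up for the example):

    Code:
    # Hypothetical example: offloading an elementwise hot loop to the GPU
    # from Python via PyCUDA. Kernel and sizes are illustrative only.
    import numpy as np
    import pycuda.autoinit                      # sets up a CUDA context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void scale_add(float *out, const float *a, const float *b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                              // guard the tail of the array
            out[i] = 2.0f * a[i] + b[i];
    }
    """)
    scale_add = mod.get_function("scale_add")

    n = 1 << 20
    a = np.random.randn(n).astype(np.float32)
    b = np.random.randn(n).astype(np.float32)
    out = np.empty_like(a)

    threads = 256
    blocks = (n + threads - 1) // threads       # enough blocks to cover n
    scale_add(drv.Out(out), drv.In(a), drv.In(b), np.int32(n),
              block=(threads, 1, 1), grid=(blocks, 1))

    The surrounding program stays Python; only the hot loop moves onto the card.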

  9. #9
    Team HEXUS.net
    Join Date
    Jul 2003
    Posts
    1,396
    Thanks
    75
    Thanked
    411 times in 217 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by chrestomanci View Post
I think it will be a lot of use, because there are loads of programmers out there who prefer to program in Python or Java and don't like C. It is also a lot quicker to write useful programs in high-level languages than in C.

Suppose you have an existing program written in Java. It currently takes an hour to run, and because it gets run a great deal you have a business need for it to run faster.

You could re-write the time-critical sections in C, which will make the program about 50% faster (40 minutes), but to do so you would need to learn C, and the resultant code would be more bug-prone.

Alternatively you could ask your boss for £1000 for an nVidia CUDA card that will run the code 100 times faster (36 seconds), with only minor tweaks to the code in a language you are already familiar with.

    Even if the program is not yet written, it is often still better to write in a high level language than a low level one as development will be faster. If that last bit of performance is still needed then the critical sections can still be re-written in C, but most of the time the 100x speedup from using CUDA will be good enough.
    It's the implementation that I'm querying rather than the use, I suppose. Python won't run natively on a GPU, and the 'translation' would hinder performance.

  10. #10
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,025
    Thanks
    1,871
    Thanked
    3,383 times in 2,720 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Exactly.

    Quote Originally Posted by RealWorldTechnologies
Nvidia's marketing is making ridiculous claims that they will eventually have Python and Java support, but the reality is that neither language can run natively on a GPU. An interpreted language such as Python would kill performance, and so what is likely meant is that Python and Java can call libraries which are written to take advantage of CUDA.
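
    To make that concrete, here's a sketch of the "call libraries" model using PyCUDA's gpuarray module as one real example of such a library (sizes arbitrary):

    Code:
    # The "Python calls a CUDA-backed library" model: none of this Python
    # runs on the GPU itself; each gpuarray operation launches a kernel
    # that was written and compiled ahead of time by the library.
    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray

    a = gpuarray.to_gpu(np.random.randn(1 << 20).astype(np.float32))
    b = gpuarray.to_gpu(np.random.randn(1 << 20).astype(np.float32))

    c = 2.0 * a + b             # elementwise kernels, library-provided
    total = gpuarray.sum(c)     # reduction kernel, also library-provided

    print(total.get())          # only the scalar result returns to Python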

  11. #11
    Team HEXUS.net
    Join Date
    Jul 2003
    Posts
    1,396
    Thanks
    75
    Thanked
    411 times in 217 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

For anyone interested in the architecture in greater depth, NVIDIA released a whitepaper to the press a few days ago. It's now on the site, so read away (PDF).

    http://www.nvidia.com/content/PDF/fe...Whitepaper.pdf

  12. #12
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,231
    Thanked
    2,291 times in 1,874 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Tarinder View Post
    Python won't run natively on a GPU, and the 'translation' would hinder performance.
Not only that, but surely to make effective use of a GPU with that many stream processors your code would already have to be written to be massively multithreaded. Having done an MSc which taught Java as its principal language, and therefore knowing the coding skills of many professional Java developers, the concept of them trying to develop a massively multithreaded software architecture to take advantage of this leaves me shivering in terror...

  13. #13
    HEXUS webmaster Steve's Avatar
    Join Date
    Nov 2003
    Posts
    14,283
    Thanks
    293
    Thanked
    841 times in 476 posts

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Tarinder View Post
    It's the implementation that I'm querying rather than the use, I suppose. Python won't run natively on a GPU, and the 'translation' would hinder performance.
    I don't think the language you write in is that big a deal if it comes with a decent library for this sort of stuff, or a good compiler (the world needs more compiler writers).
    Quote Originally Posted by chrestomanci
    I think it will be a lot of use because there are loads of programmers out there who prefer to program in Python or Java, and don't like C. It is also a lot quicker to write useful programs in high level languages, than in C.
    The problem is, most workloads just aren't written to do SIMD. OK, so new CUDA can run multiple kernels, but I doubt you can run as many kernels as you have streams (I guess I should read the whitepaper!).

    If you want to make a GPGPU run fast, you need to take a lot of data, chop it up, and apply the same operations to each chunk - which is why you can dunk it through something massively parallel.

    As soon as the operations you need to perform vary between each chunk (e.g. you have branches) the whole thing breaks down. Now, assuming you've got data that lends itself to parallel processing, there are ways of dealing with conditionals that don't involve branching.

Indeed, the reason GPUs have turned into the parallelised beasts that they are is that graphics shaders and the data they work on are perfect for such situations.

    There are a lot of workloads that can have multiple things happening at once, but that's not the same as doing the same thing to lots of data elements at once, which is why we don't have 512-core CPUs (yet...).
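
    To make the no-branching point concrete, here's a toy sketch (numpy standing in for the stream processors; this mask-and-select trick is the same idea as GPU predication):

    Code:
    # Handling a conditional without branching: evaluate both sides for
    # every element, then select per element with a mask. This is the
    # data-parallel analogue of predication on a GPU.
    import numpy as np

    x = np.random.randn(1 << 16).astype(np.float32)

    # Branchy scalar version (what maps badly onto SIMD):
    #   y = sqrt(x) if x > 0 else -x, element by element
    mask = x > 0.0                              # per-element predicate
    y = np.where(mask, np.sqrt(np.abs(x)), -x)  # both sides computed, then
                                                # selected; abs() avoids NaNs
                                                # on the discarded side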
    PHP Code:
    $s = new signature();
    $s->sarcasm()->intellect()->font('Courier New')->display(); 

  14. #14
    Senior Member Hicks12's Avatar
    Join Date
    Jan 2008
    Location
    Plymouth-SouthWest
    Posts
    6,586
    Thanks
    1,070
    Thanked
    340 times in 293 posts
    • Hicks12's system
      • Motherboard:
      • Asus P8Z68-V
      • CPU:
      • Intel i5 2500k@4ghz, cooled by EK Supreme HF
      • Memory:
      • 8GB Kingston hyperX ddr3 PC3-12800 1600mhz
      • Storage:
      • 64GB M4/128GB M4 / WD 640GB AAKS / 1TB Samsung F3
      • Graphics card(s):
      • Palit GTX460 @ 900Mhz Core
      • PSU:
      • 675W ThermalTake ThoughPower XT
      • Case:
      • Lian Li PC-A70 with modded top for 360mm rad
      • Operating System:
      • Windows 7 Professional 64bit
      • Monitor(s):
      • Dell U2311H IPS
      • Internet:
      • 10mb/s cable from virgin media

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by kingpotnoodle View Post
They have more than doubled the SP count; 512 is more than a GTX 295, so this should be much faster given the inevitable tweaks under the hood...

    I reckon it'll compete with a 5870 fairly evenly, and for me the value proposition of NVidia with CUDA, PhysX, the potential for flash acceleration etc is better than ATI (guess it depends on your viewpoint though) so assuming the power/efficiency are good I'm looking forward to this, going to be hard to choose...
Yes, they've also increased the bus to 384-bit and added more onboard RAM; this all adds to a huge expense, so it will probably offer better performance than the 5870 but will also cost a HUGE amount more.
edit: Also, given the amount of R&D that's gone into this project, it won't be healthy for Nvidia to compete on price with AMD's 5000 series (a lot less R&D cost etc.): either they sell at a loss or sell at a huge price that people won't pay. Although it's the start of what could be an amazing platform/design, it's just going to be a profitless technology unless some serious cost cutting and developments can be made.

Doesn't matter anyway; by the time Fermi is out, AMD will already have its 6000 series out or a few months away, I reckon.
    Last edited by Hicks12; 18-01-2010 at 12:52 PM.
    Quote Originally Posted by snootyjim View Post
    Trust me, go into any local club and shout "I've got dual Nehalem Xeons" and all of the girls will practically collapse on the spot at the thought of your e-penis

  15. #15
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,231
    Thanked
    2,291 times in 1,874 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: News - NVIDIA details next-generation Fermi GPU architecture

    Quote Originally Posted by Hicks12 View Post
Yes, they've also increased the bus to 384-bit and added more onboard RAM...
A 384-bit memory bus is actually smaller than the GTX 285's, which interfaced using a 512-bit bus, so there will be a small cost saving in using fewer memory chips. It does mean it'll end up shipping with one of those odd-sounding memory buffers though - 1.5GB most likely (I can't see them bothering with a 768MB version of the top-end card).

Of course, ATI only use a 256-bit bus, so unless Nvidia run their GDDR5 at under 3200 effective they're going to have more memory bandwidth on tap. On the other hand, it's debatable whether ATI's top-end cards are bandwidth-limited anyway, and therefore whether that extra bandwidth will boost performance at all...
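
    For anyone checking the arithmetic behind that 3200 figure (the 5870's 4800MT/s effective GDDR5 is the published spec; the rest is just the bandwidth formula):

    Code:
    # Memory bandwidth = (bus width / 8) bytes per transfer * effective rate.
    def bandwidth_gbs(bus_bits, effective_mtps):
        """Bandwidth in GB/s for a bus width in bits and a rate in MT/s."""
        return bus_bits / 8 * effective_mtps * 1e6 / 1e9

    hd5870 = bandwidth_gbs(256, 4800)            # 153.6 GB/s on ATI's 256-bit bus
    # Effective rate a 384-bit Fermi would need merely to match that:
    match_rate = 153.6 * 1e9 / (384 / 8) / 1e6   # = 3200 MT/s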

  16. Received thanks from:

    Hicks12 (18-01-2010)

  17. #16
    Senior Member Hicks12's Avatar
    Join Date
    Jan 2008
    Location
    Plymouth-SouthWest
    Posts
    6,586
    Thanks
    1,070
    Thanked
    340 times in 293 posts
    • Hicks12's system
      • Motherboard:
      • Asus P8Z68-V
      • CPU:
      • Intel i5 2500k@4ghz, cooled by EK Supreme HF
      • Memory:
      • 8GB Kingston hyperX ddr3 PC3-12800 1600mhz
      • Storage:
      • 64GB M4/128GB M4 / WD 640GB AAKS / 1TB Samsung F3
      • Graphics card(s):
      • Palit GTX460 @ 900Mhz Core
      • PSU:
      • 675W ThermalTake ThoughPower XT
      • Case:
      • Lian Li PC-A70 with modded top for 360mm rad
      • Operating System:
      • Windows 7 Professional 64bit
      • Monitor(s):
      • Dell U2311H IPS
      • Internet:
      • 10mb/s cable from virgin media

    Re: News - NVIDIA details next-generation Fermi GPU architecture

Sorry, I haven't looked much into Nvidia's top-end cards, only focused on the GTX 260 :L.

Thanks for informing me of that. However, the rest is still right, isn't it? R&D costs need to be recouped from somewhere, and it's going to be in the price of the cards; AMD just tweaked their existing design and added more, which is great (it shows) and in the end costs a lot less than changing the whole design!

We will only know whether the bandwidth helps more when Fermi is released; I'm betting on a Q2 release now, tbh.
    Quote Originally Posted by snootyjim View Post
    Trust me, go into any local club and shout "I've got dual Nehalem Xeons" and all of the girls will practically collapse on the spot at the thought of your e-penis

