
Thread: AMD Radeon RX 6700 XT benchmarks leak out

#17 kompukare, Senior Member (Join Date: Jul 2009; Location: West Sussex; Posts: 1,721)
• kompukare's system
  • Motherboard: Asus P8Z77-V LX
  • CPU: Intel i5-3570K
  • Memory: 4 x 8GB DDR3
  • Storage: Samsung 850 EVO 500GB | Corsair MP510 960GB | 2 x WD 4TB spinners
  • Graphics card(s): Sapphire R7 260X 1GB (sic)
  • PSU: Antec 650 Gold TruePower (Seasonic)
  • Case: Aerocool DS 200 (silenced, 53.6 litres)
  • Operating System: Windows 10-64
  • Monitor(s): 2 x ViewSonic 27" 1440p

    Re: AMD Radeon RX 6700 XT benchmarks leak out

    Quote Originally Posted by DanceswithUnix
    Wide is much better than fast so the real way to feed a GPU is HBM, but people seem to turn their nose up at HBM these days.
    Speaking of HBM, AnandTech ran an interview around the Zen 3 EPYC launch, and there's this titbit:
    We see more and more interest in using high bandwidth memory, for an on-package solution. I think you will see SKUs in the future from a variety of companies incorporating HBM, especially for AI. That will initially be fairly specialized, to be candid, because HBM is extremely expensive. So for most, the standard DDR memory, even DDR5 memory, means that HBM is going to be confined initially to applications that are incredibly memory latency sensitive, and then, you know, it'll be interesting to see how it plays out over time.
    https://www.anandtech.com/show/16548...t-norrod-milan

    Which implies that even for HPC it is too expensive. I guess HPC would need a lot more than 4GB or 8GB.
    As for people turning their noses up at HBM, I thought people just weren't impressed with the 4GB of Fury.
    And I guess AMD weren't impressed with allegedly losing money on Fury and Vega.
    A pity, as AMD spent a lot of money developing HBM and all they have to show for it is the Wikipedia entry:
    https://en.wikipedia.org/wiki/High_B...ry#Development
    In fact, I think Nvidia have done better out of it despite not being involved with its development simply because they sell a lot more high-end compute cards where HBM really helps.
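
    To put rough numbers on the "wide beats fast" point above: peak bandwidth is just bus width times per-pin data rate, which is why a slow-clocked 4096-bit HBM2 setup outruns much faster GDDR6 on a narrow bus. A quick back-of-the-envelope sketch in Python (the figures are the published specs of the Radeon VII and the RX 6700 XT, picked here purely for illustration):

        # Theoretical peak memory bandwidth in GB/s:
        # (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps
        def peak_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
            return bus_width_bits / 8 * pin_rate_gbps

        # "Wide but slow": Radeon VII, 4096-bit HBM2 at 2.0 Gbps per pin
        print(peak_bandwidth_gbs(4096, 2.0))   # 1024.0 GB/s

        # "Narrow but fast": RX 6700 XT, 192-bit GDDR6 at 16 Gbps per pin
        print(peak_bandwidth_gbs(192, 16.0))   # 384.0 GB/s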

    Received thanks from: DanceswithUnix (16-03-2021)
