
Thread: Nvidia vastly overestimated gamer and miner GPU demand

  1. #81
    root Member DanceswithUnix

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by Corky34
    Well, technically speaking you could run it (the SDF protocol) over any electrically conductive medium, but that doesn't change the fact that AMD use it on the physical interconnect communication plane, and that it's as far removed from HT as PCIe is from PCI.

    Also, I think trying to connect up something like an EPYC or TR would be tricky with EMIB; I've not done an exact count, but I think you'd need something like 15+ EMIBs. Yes, EMIB is cheaper, but it doesn't seem to scale well, and I'd suggest that where you want to connect lots of things together is in the high-end stuff.
    I believe you are overthinking this.

    AIUI, EPYC is just multi-chip module (MCM) packaging.
    Vega uses an interposer because it has thousands of traces going to memory chips on the package.
    If you only have a few chips with dense wiring to connect, you can do that with EMIB, and that is cheaper than an interposer.

    Separately from this:

    If you have something you want to tie into a coherent cache structure, Infinity Fabric can help.
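    [Illustrative aside: a toy Python sketch of the packaging trade-off described above. The function name, inputs and thresholds are invented placeholders, not AMD or Intel figures; the only grounding is the rule of thumb in this post and the density point quoted in the next post.]

    Code:
    # Toy sketch of the packaging trade-off discussed above.
    # All names and thresholds are illustrative placeholders, not vendor data.

    def pick_packaging(die_count: int, dense_links: int, has_hbm: bool) -> str:
        """Pick a multi-die packaging approach.

        dense_links: number of die-to-die connections that need thousands of
        traces (GPU-to-HBM style) rather than ordinary SerDes-style links.
        """
        if has_hbm and die_count > 2:
            # Dense wiring spread across a large package: only a full silicon
            # interposer gives enough routing density everywhere.
            return "silicon interposer"
        if dense_links > 0:
            # Only a few localised dense connections: a small silicon bridge
            # (EMIB-style) under the die edges is cheaper than a full interposer.
            return "silicon bridge (EMIB-style)"
        # No ultra-dense wiring at all (e.g. ordinary on-package fabric links):
        # a traditional MCM on an organic substrate is enough.
        return "traditional MCM"

    print(pick_packaging(die_count=4, dense_links=0, has_hbm=False))  # EPYC-like -> traditional MCM
    print(pick_packaging(die_count=3, dense_links=2, has_hbm=True))   # Vega + 2 HBM stacks -> silicon interposer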

  2. #82
    Senior Member Corky34

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    I'm not overthinking it, I was simply replying to Stefank's claim that "Infinity fabric is a superset of HyperTransport which AMD employed on CPU since the first Athlon 64, it has nothing to do with the physical interconnect solution employed."

    As I originally said, Infinity Fabric is just a marketing term, the SDF has about as much in common with HT as PCI does with PCIe, and it's intrinsically linked to the physical interconnect solution because, unlike EMIB, there's no constraint on the topology of the nodes connected over the fabric: communication can be done directly node-to-node, by island-hopping in a bus topology, or as a mesh topology.

    As Seung Wook (S.W.) Yoon, director of product technology marketing at STATS ChipPAC, said in the article Cat posted earlier:
    But for large SoCs with HBM and a 55mm to 65mm package, there is no solution available today except silicon interposers. There are other solutions that are low-cost, medium density, which is what you see with InFO and RDL bridges. But for chips with HBM, the connections need to be very dense. With regular DRAM, you don’t need all of that density.

  3. #83
    root Member DanceswithUnix

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by Corky34
    I'm not overthinking it, I was simply replying to Stefank's claim that "Infinity fabric is a superset of HyperTransport which AMD employed on CPU since the first Athlon 64, it has nothing to do with the physical interconnect solution employed."

    As I originally said, Infinity Fabric is just a marketing term, the SDF has about as much in common with HT as PCI does with PCIe, and it's intrinsically linked to the physical interconnect solution because, unlike EMIB, there's no constraint on the topology of the nodes connected over the fabric: communication can be done directly node-to-node, by island-hopping in a bus topology, or as a mesh topology.

    As Seung Wook (S.W.) Yoon, director of product technology marketing at STATS ChipPAC, said in the article Cat posted earlier:
    lol, tbh I'm now not sure what you are saying...

    It sounds like you think EPYC uses an interposer, when it doesn't?

    And AIUI, EPYC has two physical layers for its SDF connections just on that one chip: one for die-to-die links on the package, and one over what would otherwise be the PCIe SerDes blocks for going between two sockets.

    PCIe is, AIUI, a ganged serial version of the parallel PCI; they have a heck of a lot in common other than the final physical transport, so you might want another example there.

  4. #84
    Senior Member Corky34

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    What I'm saying is that it's wrong to say "Infinity fabric is a superset of HyperTransport which AMD employed on CPU since the first Athlon 64, it has nothing to do with the physical interconnect solution employed."

    Firstly, Infinity Fabric is an all-encompassing marketing term for the SDF and SCF, so it couldn't be further removed from HyperTransport if you tried: one is a marketing term, the other is a technology for interconnecting computer processors.

    Secondly, it has everything to do with the physical interconnect solution employed, as without the correct physical interconnect it wouldn't work: for large SoCs with HBM and a 55mm to 65mm package there is no solution available today except silicon interposers, which AMD are using for all EPYC and TR chips, as seen in the image accompanying the Zen microarchitecture multiprocessors entry on WikiChip...
    This image originates from a slide presented at AMD EPYC Tech Day, June 20, 2017 and shows one layer of die interconnects on the EPYC interposer.

  5. #85
    root Member DanceswithUnix

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by Corky34
    ...there is no solution available today except silicon interposers, which AMD are using for all EPYC and TR chips, as seen in the image accompanying the Zen microarchitecture multiprocessors entry on WikiChip...
    That wiki entry is confused: it mentions EPYC using both an MCM and an interposer, but they are different things, so one of those sentences is wrong. AIUI, EPYC uses an MCM, a packaging technology that predates interposers by quite a long time (I'm sure it's over a decade, but I'm too lazy to look it up).

    AMD uses interposers to wire up HBM, and Epyc doesn't use HBM.

    Now I'm off to open a bottle of wine, so have a fun evening, I won't be making any arguments for the rest of the day

  6. #86
    Not a good person scaryjim

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    I should probably stay out of this, but....

    [image: photo of the four-die EPYC package on its green organic substrate]

    Note the green organic substrate. Note the SMDs surrounding the 4 dies. Note the huge size which, afaik, is way beyond anything that could fit in a current silicon fab's process.

    That's a traditional MCM, not an interposer.

    Also
    What I'm saying is that it's wrong to say "Infinity fabric is a superset of HyperTransport which AMD employed on CPU since the first Athlon 64, it has nothing to do with the physical interconnect solution employed."
    Except

    In a way, Infinity Fabric ... can be thought of as a superset of a new and improved HyperTransport 2.0 considering it utilizes the HyperTransport messaging protocol.
    source: https://wccftech.com/amds-infinity-fabric-detailed/

    Infinity Fabric utilises the HyperTransport messaging protocol, and can be considered a superset of an improved HT version. It is carried within EPYC dies using in-silicon pathways, across the dies of an EPYC package by a traditional MCM, and between EPYC sockets using motherboard PCB traces that would otherwise be dedicated to carrying PCIe ports. Nothing to do with the physical interconnect.

    So the statement you're calling wrong is essentially right. It might not give the full story, but there's nothing even vaguely "wrong" about it...
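    [Illustrative aside: a toy Python sketch of the layering point above. The class names are invented; the point is only that a messaging protocol can stay identical while the physical transport underneath it is swapped, the way the fabric traffic described here rides over on-die wiring, on-package MCM traces, or board-level lanes that would otherwise carry PCIe.]

    Code:
    from abc import ABC, abstractmethod

    class PhysicalLink(ABC):
        """Anything that can move raw bytes between two fabric endpoints."""

        @abstractmethod
        def send(self, payload: bytes) -> None: ...

    class OnDieWires(PhysicalLink):
        def send(self, payload: bytes) -> None:
            print(f"on-die metal layers: {payload!r}")

    class PackageTraces(PhysicalLink):
        def send(self, payload: bytes) -> None:
            print(f"organic-substrate MCM traces: {payload!r}")

    class SocketToSocketLanes(PhysicalLink):
        def send(self, payload: bytes) -> None:
            print(f"board-level SerDes lanes: {payload!r}")

    class FabricProtocol:
        """Messaging layer: framing and addressing are identical whatever link carries them."""

        def __init__(self, link: PhysicalLink) -> None:
            self.link = link

        def send_message(self, src: int, dst: int, data: bytes) -> None:
            frame = bytes([src, dst, len(data)]) + data  # trivial made-up framing
            self.link.send(frame)

    # The protocol object is unchanged; only the physical layer is swapped.
    for link in (OnDieWires(), PackageTraces(), SocketToSocketLanes()):
        FabricProtocol(link).send_message(src=0, dst=3, data=b"cache probe")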

  7. #87
    Registered User Stefank

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by CAT-THE-FIFTH
    AMD is behind Nvidia in power consumption, which is compounded by the GF process they use, so I don't disagree. I haven't seen any reliable estimates for the Vega M die size though, and apparently it's not fully enabled (it has 1792 shaders according to NBC).

    At least in its NUC form it seems slightly faster than a GTX 1050 Ti. However, saying it is bigger than a GP106?? The GP106 is 200mm² and Polaris 10 is 232mm². That would place Vega M at close to Polaris 10's size, with fewer shaders (2304 shaders), but it has double the ROP count.

    The Ryzen APU is a smaller chip, since it's a single 209.78mm² SoC, as opposed to an Intel 4C/8T CPU at around 125mm² plus a southbridge and a separate GPU, and it runs off bog-standard DDR4.

    Regarding the APU, Laptop Mag tested two HP X360 models, which are very similar (same battery and same case, so as close to apples-to-apples as you can get, which is not easy with laptops), and battery life was a bit better in the case of the Intel system:

    https://www.laptopmag.com/articles/a...l-8th-gen-core

    It could be that the Ryzen APU systems are configured for a higher TDP, but there were a ton of driver issues for the desktop models (see some of the discussions we had here for the desktop versions) and AMD took yonks to actually update the drivers, to the extent that one YT channel ran Vega 64 drivers on the IGP and performance went up(!), so it makes me wonder whether that is also not helping.

    Even the TR article you linked to alluded to that. BT and Hexus had issues too.

    Who said I am not aware of other solutions (link describing some alternatives)? It's not like people haven't been talking about it in the past here! But EMIB does look cost-effective compared to what AMD/Nvidia have tried so far, and none of them have integrated a decent-sized CPU and GPU (made on different nodes) like Intel have done in a production PC.
    You specifically stated that AMD and NVIDIA could get in trouble because Intel has EMIB. I think you are going too far with that statement, as there are other solutions, and I wasn't talking about anything from AMD or NVIDIA but about foundries who offer their solutions to anyone.
    In the Kaby Lake-G processor the silicon bridge connects just Vega to the HBM stack; the GPU is connected to the CPU through normal wiring, just like in any past MCM design.

    You have to consider that Vega is less area-efficient than Polaris, and that the Vega in the APU has fewer CUs, so it is smaller. You think it is strange that Vega requires the size of a GTX 1060 to perform like a GTX 1050? Look at the die size difference between desktop Vega and GP102. Intel traded area for power efficiency; to give you an example, NVIDIA has a GP104-based SKU that delivers 5.5 TFLOPS at just 75W, which is the TDP of a 1050 Ti.
    No one has published the measured die size of Kaby Lake-G's Vega, not even one taken with a ruler (we can only guess why), but since we know the size of the HBM die we can estimate its size within a reasonable margin of error.
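    [Illustrative aside: a back-of-the-envelope Python sketch of that estimation method, scaling pixel measurements from a package photo against the published HBM2 stack footprint of roughly 7.75 mm x 11.87 mm (~92 mm²). The pixel values are invented placeholders, so the printed figure only demonstrates the method; it is not a real measurement.]

    Code:
    # Estimate a die's size from a top-down package photo, using the HBM2 stack
    # (published footprint roughly 7.75 mm x 11.87 mm) as the scale reference.
    # The pixel measurements below are invented placeholders.

    HBM2_W_MM, HBM2_H_MM = 7.75, 11.87

    hbm_px = (155, 237)   # HBM2 stack width/height in the photo, in pixels (placeholder)
    gpu_px = (360, 220)   # GPU die width/height in the same photo, in pixels (placeholder)

    # One scale factor per axis guards against a slightly stretched photo.
    mm_per_px_x = HBM2_W_MM / hbm_px[0]
    mm_per_px_y = HBM2_H_MM / hbm_px[1]

    gpu_w = gpu_px[0] * mm_per_px_x
    gpu_h = gpu_px[1] * mm_per_px_y
    print(f"estimated die: {gpu_w:.1f} x {gpu_h:.1f} mm = {gpu_w * gpu_h:.0f} mm^2")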


  8. #88
    Registered User Stefank

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by scaryjim
    I should probably stay out of this, but....



    Note the green organic substrate. Note the SMDs surrounding the 4 dies. Note the huge size which, afaik, is way beyond anything that could fit in a current silicon fab's process.

    That's a traditional MCM, not an interposer.

    Also


    Except


    source: https://wccftech.com/amds-infinity-fabric-detailed/

    Infinity Fabric utilises the HyperTransport messaging protocol, and can be considered a superset of an improved HT version. It is carried within EPYC dies using in-silicon pathways, across the dies of an EPYC package by a traditional MCM, and between EPYC sockets using motherboard PCB traces that would otherwise be dedicated to carrying PCIe ports. Nothing to do with the physical interconnect.

    So the statement you're calling wrong is essentially right. It might not give the full story, but there's nothing even vaguely "wrong" about it...
    Yep, there's no silicon interposer, it's just a standard MCM built on an organic substrate. And to clarify, IF could work on silicon, over wires, between sockets, and even over an optical interconnect; that's why it has nothing to do with the physical interconnection solution.
    Last edited by Stefank; 02-07-2018 at 01:34 AM.

  9. #89
    Senior Member Corky34

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by DanceswithUnix
    That wiki entry is confused: it mentions EPYC using both an MCM and an interposer, but they are different things, so one of those sentences is wrong. AIUI, EPYC uses an MCM, a packaging technology that predates interposers by quite a long time (I'm sure it's over a decade, but I'm too lazy to look it up).

    AMD uses interposers to wire up HBM, and Epyc doesn't use HBM.

    Now I'm off to open a bottle of wine, so have a fun evening, I won't be making any arguments for the rest of the day
    Not really, as an interposer is just a means of routing a connection from one place to another.

    Quote Originally Posted by scaryjim
    Infinity Fabric utilises the HyperTransport messaging protocol, and can be considered a superset of an improved HT version. It is carried within EPYC dies using in-silicon pathways, across the dies of an EPYC package by a traditional MCM, and between EPYC sockets using motherboard PCB traces that would otherwise be dedicated to carrying PCIe ports. Nothing to do with the physical interconnect.

    So the statement you're calling wrong is essentially right. It might not give the full story, but there's nothing even vaguely "wrong" about it...
    Infinity Fabric is a collective marketing term used to describe two separate systems, and as I said, the SDF uses what could be considered a superset of HT. However, IMO it's important to make a distinction, as HT is a data communication protocol only, while 'Infinity Fabric' covers both a data communication protocol and a control communication protocol.

    Quote Originally Posted by Stefank
    ....to clarify, IF could work on silicon, over wires, between sockets, and even over an optical interconnect; that's why it has nothing to do with the physical interconnection solution
    The only place Infinity Fabric exists is in some marketing guy's head; it's not one single thing like HT, it's a term used to describe two separate systems.

    If the SDF and SCF have nothing to do with the physical interconnection solution then in theory we should be able to connect one CCX to another using a foot or two of 12-gauge wire; somehow I don't think that would work.
    Last edited by Corky34; 02-07-2018 at 09:08 AM.

  10. #90
    root Member DanceswithUnix

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    Quote Originally Posted by Corky34
    Not really, as an interposer is just a means of routing a connection from one place to another.


    ...

    If the SDF and SCF have nothing to do with the physical interconnection solution then in theory we should be able to connect one CCX to another using a foot or two of 12-gauge wire; somehow I don't think that would work.
    Not sure what you are disagreeing with there.

    Yes, an interposer is a way of making connections, and yes, you could route Infinity Fabric over it, but this is engineering and you don't do something costing a tenner when it can be done just as well for a quid. IF doesn't *need* an interposer, and at an estimated $20 or so for the one used on Vega, interposers don't get used where they aren't absolutely needed.

    I'm pretty sure I get your original point: IF is an interesting and flexible solution. But generally things like PCIe are better supported and it just makes sense to use them instead, so although you *could* drive a Vega chip from a Zeppelin die directly over the fabric they both supposedly support internally, in reality it makes prototyping and device drivers easier if you just couple them with PCIe.

    As for 12-gauge wire, EPYC is designed for two-socket systems, so in theory you could couple the sockets together with 12-gauge wire and IF packets would be going through those wires; it just wouldn't be sensible or give the best performance.

  11. #91
    Senior Member Corky34

    Re: Nvidia vastly overestimated gamer and miner GPU demand

    An interposer (from interpōnere) is anything put between one socket or connection and another socket or connection. If I put wires from every pin on a motherboard's socket to a chip sitting three feet above the socket, that would be an interposer, a rather silly one, but I'm trying to make a point: calling something an interposer only tells us that some sort of electrical connection sits between two things, so saying that the electrical connection between them has nothing to do with the physical interconnect solution employed is self-contradictory.

    When I mentioned the 12-gauge wire I was referring to a single wire, not wires; I was attempting to show how ludicrous it is to say that the physical connection used has nothing to do with what's sent over it.
