
Thread: AI and ML applications to eat into GDDR6 supply, says report

  1. #17
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,978
    Thanks
    778
    Thanked
    1,586 times in 1,341 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AI and ML applications to eat into GDDR6 supply, says report

    Quote Originally Posted by Corky34 View Post
    I think when they're referring to networking they may be talking about high capacity switches where higher bandwidth/throughput is required.
    Those are the ones I am talking about. The problem is that you have to work out where the packet is going whilst the header comes in, else you delay the packet. With high-end switches, delaying a lot of packets means a lot of storage for that bubble of data. So the problem is how fast you can look up a destination MAC and map it to an output port. I suppose if the downstream switch tells you to stall transmit then you have no choice but to buffer the packet, and GDDR would be good for that.
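    Purely for illustration, a toy software model of the two jobs described above (the fast destination-MAC lookup versus buffering when the downstream link says stop) might look something like this; all the names are made up for the sketch, and real switch silicon does this in hardware:
    Code:
    from collections import defaultdict, deque

    # Toy model: a MAC table maps destination MAC -> egress port, and each
    # egress port keeps a queue for frames it cannot send yet because the
    # downstream device has asked us to pause.
    mac_table = {}                      # dst MAC -> egress port
    egress_queue = defaultdict(deque)   # egress port -> buffered frames
    egress_paused = defaultdict(bool)   # egress port -> flow-control state

    def transmit(port, frame):
        print(f"send on port {port}: {frame}")

    def flood(frame, except_port):
        print(f"flood {frame} everywhere except port {except_port}")

    def handle_frame(src_mac, dst_mac, ingress_port, frame):
        mac_table[src_mac] = ingress_port        # learn where the source lives
        port = mac_table.get(dst_mac)            # the fast lookup step
        if port is None:
            flood(frame, except_port=ingress_port)   # unknown destination
        elif egress_paused[port]:
            egress_queue[port].append(frame)     # stalled: this is the buffering
        else:
            transmit(port, frame)
    The expensive part isn't the table lookup, it's holding that bubble of data in the egress queues while transmit is stalled, which is where a big chunk of fast memory would earn its keep.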

  2. #18
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AI and ML applications to eat into GDDR6 supply, says report

    I may have this completely wrong as my knowledge of networking is rusty, to say the least. Having said that, I don't think it's so much about mapping a MAC address to a particular port; it's probably more about ensuring the buffer within the switch is able to take in and store all the packets it's being sent, so as to reduce the likelihood of an ACK not being sent that would result in the sender throttling.

    If I remember my networking correctly, doesn't TCP have the ability to increase the gap between packet transmissions, and to retransmit packets, if it detects a bad link (i.e. if it doesn't receive ACKs)? If I've remembered correctly you don't, in an ideal world, want that to happen, as in a burst situation you could end up with half your 100GbE ports throttling because they happened to be the unlucky ones. Much better, if you can, to take everything being sent to you, acknowledge you got it, store it in a buffer, and transmit and/or load balance after you've received it.
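    For what it's worth, here's a toy sketch of the sender-side behaviour being described: missing ACKs trigger retransmission and a growing wait (the classic doubling of the retransmission timeout) before the next attempt. The function names and numbers are invented for illustration, not taken from any real stack:
    Code:
    import time

    def send_with_backoff(segment, send_segment, ack_received,
                          initial_rto=0.2, max_retries=5):
        """Keep resending until an ACK arrives, doubling the wait each time."""
        rto = initial_rto
        for attempt in range(max_retries):
            send_segment(segment)
            deadline = time.monotonic() + rto
            while time.monotonic() < deadline:
                if ack_received(segment):
                    return True          # delivered, no throttling needed
                time.sleep(0.01)
            rto *= 2                     # exponential backoff: the sender slows down
        return False                     # repeated loss: the link is treated as bad
    A switch with enough buffer to absorb a burst (and keep the ACKs flowing) never pushes senders into that backoff path in the first place, which is the point being made above.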

  3. #19
    Senior Member
    Join Date
    Mar 2005
    Posts
    4,932
    Thanks
    171
    Thanked
    383 times in 310 posts
    • badass's system
      • Motherboard:
      • ASUS P8Z77-m pro
      • CPU:
      • Core i5 3570K
      • Memory:
      • 32GB
      • Storage:
      • 1TB Samsung 850 EVO, 2TB WD Green
      • Graphics card(s):
      • Radeon RX 580
      • PSU:
      • Corsair HX520W
      • Case:
      • Silverstone SG02-F
      • Operating System:
      • Windows 10 X64
      • Monitor(s):
      • Del U2311, LG226WTQ
      • Internet:
      • 80/20 FTTC

    Re: AI and ML applications to eat into GDDR6 supply, says report

    Quote Originally Posted by Corky34 View Post
    I think when they're referring to networking they may be talking about high capacity switches where higher bandwidth/throughput is more desirable.
    Quote Originally Posted by DanceswithUnix View Post
    Those are the ones I am talking about. The problem is that you have to work out where the packet is going whilst the header comes in, else you delay the packet. With high-end switches, delaying a lot of packets means a lot of storage for that bubble of data. So the problem is how fast you can look up a destination MAC and map it to an output port. I suppose if the downstream switch tells you to stall transmit then you have no choice but to buffer the packet, and GDDR would be good for that.
    You're getting packets and frames mixed up.
    Packets = layer 3 (no MAC address); frames = layer 2, and those do use the MAC address.
    MAC address lookup is very simple and does not require much memory - just exceptionally fast, low latency memory. It's just a table of 48 bit addresses (6 bytes) and the ports they are assigned to. Switches can handle thousands of MAC addresses - not millions or billions. They don't need to.
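    To put a rough, purely illustrative number on that: even a generously sized MAC table is tiny by DRAM standards.
    Code:
    # Back-of-envelope size of a MAC forwarding table (illustrative numbers):
    # each entry is a 48-bit (6-byte) MAC plus, say, a 2-byte port identifier.
    entries = 32_000                 # a large table for a campus-class switch
    bytes_per_entry = 6 + 2          # MAC address + port id
    print(entries * bytes_per_entry) # 256,000 bytes, i.e. roughly 250 KiB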

    Quote Originally Posted by Corky34 View Post
    I may have this completely wrong as my knowledge of networking is rusty, to say the least. Having said that, I don't think it's so much about mapping a MAC address to a particular port; it's probably more about ensuring the buffer within the switch is able to take in and store all the packets it's being sent, so as to reduce the likelihood of an ACK not being sent that would result in the sender throttling.

    If I remember my networking correctly, doesn't TCP have the ability to increase the gap between packet transmissions, and to retransmit packets, if it detects a bad link (i.e. if it doesn't receive ACKs)? If I've remembered correctly you don't, in an ideal world, want that to happen, as in a burst situation you could end up with half your 100GbE ports throttling because they happened to be the unlucky ones. Much better, if you can, to take everything being sent to you, acknowledge you got it, store it in a buffer, and transmit and/or load balance after you've received it.
    See above. With the exceptions below, switches don't process packets. In the same way that ANPR doesn't process passengers - just the cars.

    It's got nothing to do with switches per se. Switches use ASICs simply because everything else is too slow. The memory is built in.

    Ahh, I hear you all say - why does xyz switch have a CPU and memory then? That's because it's not just a switch. It probably performs routing functions as well. Maybe a little bit of packet filtering thrown in there too. This might be where GDDR6 can be useful. I doubt it though. Layer 3 switches and sizable routers use TCAM memory. It's different and very, very expensive. It's like memory in reverse. In normal memory, you provide an address and get the contents. With TCAM you provide the contents and get a list of addresses back.
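    A crude software analogy for that "memory in reverse" behaviour (only the direction of the lookup, not how TCAM is actually built, and the entries are made up):
    Code:
    # Ordinary memory: give an address, get the contents back.
    ram = {0x10: "10.0.0.0/8 via port 1", 0x11: "192.168.0.0/16 via port 2"}
    print(ram[0x10])

    # TCAM-style lookup: give the contents to match, get back the address(es)
    # holding it. Real TCAM compares every entry in parallel in hardware,
    # which is what makes it so fast and so expensive.
    def tcam_lookup(contents):
        return [addr for addr, value in ram.items() if value == contents]

    print(tcam_lookup("192.168.0.0/16 via port 2"))   # -> [17], i.e. 0x11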

    But you might say, these SDN switches have CPUs, FPGAs and memory. Well yes they do. But they are not just switches. More like network decision servers. I strongly suspect it's for these cloudy network devices that we will see the GDDR6 demand.
    "In a perfect world... spammers would get caught, go to jail, and share a cell with many men who have enlarged their penises, taken Viagra and are looking for a new relationship."

  4. #20
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AI and ML applications to eat into GDDR6 supply, says report

    I don't think I'm getting packets and frames mixed up; what I'm trying to say is that if all 24 incoming connections to a switch burst up to their maximum, let's say 10GbE each, those connections can either wait for the ACKs for the data they've sent, or the switch can buffer what it can't send onwards immediately. Depending on the receipt of ACKs actually introduces latency (software vs hardware, further along the connection vs at the switch), and you can't ensure each of those 24 incoming connections gets a fair share of the outgoing connection, say a 100GbE one.

    Having a high-bandwidth buffer on the switch allows, to a degree, exactly what it says: buffering of the incoming data, so the switch is better able to handle 24 incoming connections all bursting to maximum over a short period, and it allows the switch to load balance those incoming connections so they all have an equal chance of both sending their data and receiving the ACK in return (so that the same 10-11 ports don't happen to be the main ones getting their packets sent out and replied to on the outgoing connection).
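    As a purely illustrative sketch of the arithmetic and the fairness point: 24 ports bursting at 10Gb/s offer 240Gb/s towards a single 100Gb/s uplink, so during the burst something has to be buffered or dropped, and draining one queue per ingress port round-robin is what gives every port an equal crack at the egress link. Numbers and names are invented for the example:
    Code:
    from collections import deque

    # 24 ingress ports bursting at 10 Gb/s into one 100 Gb/s egress port:
    # 240 Gb/s offered vs 100 Gb/s drained, so ~140 Gb/s worth of the burst
    # has to sit in a buffer (or get dropped).
    INGRESS_PORTS = 24
    BURST_FRAMES_PER_PORT = 100

    # One queue per ingress port, drained round-robin so every port gets an
    # equal share of the uplink rather than a lucky few dominating it.
    queues = [deque(f"p{p}-f{i}" for i in range(BURST_FRAMES_PER_PORT))
              for p in range(INGRESS_PORTS)]

    sent = []
    while any(queues):
        for q in queues:
            if q:
                sent.append(q.popleft())   # one frame per port per round

    # After the first round every port has had exactly one frame forwarded.
    print(sent[:INGRESS_PORTS])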

    Having done some more research (it's too hot to sleep), I found this short article that probably does a better job of explaining what I'm trying to say.
    The most common cause of switch buffering is some variation of the many-to-one traffic pattern. For example, an application is clustered across many server nodes. If one node requests data from all the other nodes simultaneously, all of the replies should arrive at the same time. When this happens, all of the network traffic floods the egress switch port facing the requestor. If the switch doesn't have sufficient egress buffers, it will drop some traffic, adding application latency. Sufficient network buffers prevent the excessive delay caused by lower-level protocols working out what traffic was dropped.
    If you want to get a little more technical, as let's admit it we all like to geek out now and again, there's this fairly short white paper (PDF warning) on Why Big Data Needs Big Buffer Switches that goes into more detail...
    In this paper, we demonstrate that without sufficient packet buffer memory in the switches, network bandwidth is allocated grossly unfairly among different flows, resulting in unpredictable completion times for distributed applications. This is the result of packets on certain flows getting dropped more often than on other flows, the so-called TCP/IP Bandwidth Capture effect. We present simulation data that show that in heavily loaded networks, query completion times are dramatically shorter with big buffer switches compared to small buffer switches.

  5. #21
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,978
    Thanks
    778
    Thanked
    1,586 times in 1,341 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AI and ML applications to eat into GDDR6 supply, says report

    Quote Originally Posted by badass View Post
    Switches use ASICs simply because everything else is too slow. The memory is built in.
    Gigabit switches, yes. Lack of ASICs for 2.5GbE and above seems to be the reason those switches are still expensive.

  6. #22
    Admin (Ret'd)
    Join Date
    Jul 2003
    Posts
    18,481
    Thanks
    1,016
    Thanked
    3,208 times in 2,281 posts

    Re: AI and ML applications to eat into GDDR6 supply, says report

    Quote Originally Posted by EvilCycle View Post
    Same old story all the time, relying on the main players to help the consumers. It's never going to happen when they can carry on making high profits. We need to expect that memory prices are now higher forever; they may go up and down slightly, but the threshold low price has now risen with no going back. Every time the prices look set to fall, they will "innovate" to ensure that those who want the latest and greatest will always pay the extra premiums.
    Not necessarily.

    Don't get me wrong - I'm not predicting ... either way ... what I think will happen merely because of either the market structure or the profit motive.

    It is precisely that profit motive that can drop prices in a duopoly/oligopoly situation. It's similar to a zero-sum game and a classic prisoner's dilemma situation.

    On one level, it's in each company's best interests to keep prices high. BUT .... with high margins, the profit motive implies you can increase profits by reducing price (and hence margin) provided the resulting increase in volume is sufficient. That, of course, depends on production elasticity, price elasticity of demand and homogeneity of product. That is, it won't work if consumers are overly brand conscious.

    However, if one company cuts price to increase volume of sales, sooner or later others will react. They have to, or their volume and hence profits drop.

    So while the profit motive says keep prices high overall, it also says that, at least briefly, company 1 can increase profits at the expense of companies 2 through 8 (or whatever), until others react. And when they do, prices drop and we're back at demand equilibrium, but with a lower price.

    So it's in everybody's (except consumers') interests to keep prices high, except that one company can temporarily get even more profit by cutting price.

    So will all companies accept medium-high profits when they can have very high profits in the short term? Hence, prisoner's dilemma.
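    Purely to illustrate that dilemma with invented numbers (nothing here is real market data), a toy payoff table shows why each firm is tempted to cut even though both would rather hold prices high:
    Code:
    # Hypothetical per-firm profits for two memory makers choosing to hold
    # price high or cut it. Classic prisoner's dilemma shape: cutting while
    # the other holds pays best, but if both cut, both end up worse off
    # than if both had held.
    payoffs = {
        ("hold", "hold"): (10, 10),   # both keep margins high
        ("cut",  "hold"): (14,  4),   # the cutter grabs volume from the holder
        ("hold", "cut"):  ( 4, 14),
        ("cut",  "cut"):  ( 6,  6),   # price war: everyone's margins fall
    }

    # Whatever firm B does, firm A earns more by cutting (14 > 10 and 6 > 4),
    # so "cut" is the individually rational move -- and prices end up lower.
    for b_choice in ("hold", "cut"):
        a_hold = payoffs[("hold", b_choice)][0]
        a_cut = payoffs[("cut", b_choice)][0]
        print(f"if B plays {b_choice}, A prefers", "cut" if a_cut > a_hold else "hold")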

    The answer may well depend on factors like your stock level, your beliefs about your opponents' stock levels and relative production lead times. If company A thinks B is producing/selling at maximum and cannot quickly or easily increase production, they may well cut prices because they think it's in their short-term benefit and they gain at the expense of B through H (or whatever), but if their assumptions are wrong, we have a price war and A through H all lose and consumers win.

    Of course, all that falls apart if companies collude, which is why they have an incentive to do so and why we have (periodically ignored) antitrust/competition laws.

    But it does show how, even in an oligopoly market, the profit motive can be precisely what drives prices down. Hence my "not necessarily" at the start of this post.
