Page 2 of 2
Results 17 to 25 of 25

Thread: Asus GeForce RTX 3080 Ti TUF Gaming

  1. #17
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,025
    Thanks
    1,871
    Thanked
    3,383 times in 2,720 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by CAT-THE-FIFTH View Post
    Edit!!

    This doesn't look very promising:
    https://videocardz.com/newz/next-gen...-400w-of-power
    Well, I'm not interested in the flagship cards that go for performance at all costs; I'll be looking for the more efficient offerings further down the stack.

  2. #18
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by kalniel View Post
    Well, I'm not interested in the flagship cards that go for performance at all costs; I'll be looking for the more efficient offerings further down the stack.
    The problem is, if they need that kind of power just to get a generational flagship increase, what about the rest of the range? By extension that means mainstream GPUs will also go up in power, which isn't great as I use a SFF PC. Examples include GCN 1.2 and Fermi, or going back further, the Nvidia FX. It's very concerning, and might explain Nvidia and its new power connector.

    Plus those top AMD figures are for a rumoured dual-MCM GPU. So that means the single-MCM part will be the midrange equivalent. Going from that, easily 200W (if the top end is 400W and has the best-binned MCMs).

    You saw it with this generation - the RTX 3060 Ti I have draws around 30W~40W more power under load than the GTX 1080 I use, and it's arguably the second most efficient desktop GPU in terms of performance/watt:
    https://tpucdn.com/review/amd-radeon...efficiency.png
    Last edited by CAT-THE-FIFTH; 30-07-2021 at 09:02 PM.
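    To put rough numbers on the perf/watt point: even though the newer card draws more power, higher performance can still mean better efficiency. A quick sketch (the relative-performance and board-power figures below are placeholder assumptions for illustration, not TechPowerUp's actual measurements):

    ```python
    # Illustrative perf/watt comparison. The numbers are assumed, not measured.
    cards = {
        # name: (relative_performance, board_power_watts) -- hypothetical values
        "GTX 1080":    (1.00, 180),
        "RTX 3060 Ti": (1.40, 215),
    }

    for name, (perf, watts) in cards.items():
        # performance points delivered per watt of board power
        print(f"{name}: {perf / watts * 1000:.2f} perf points per watt")
    ```

    So a card drawing ~35W more can still come out ahead on performance/watt, which is exactly what the efficiency charts show.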

  3. #19
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,025
    Thanks
    1,871
    Thanked
    3,383 times in 2,720 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by CAT-THE-FIFTH View Post
    The problem is, if they need that kind of power just to get a generational flagship increase, what about the rest of the range? By extension that means mainstream GPUs will also go up in power, which isn't great as I use a SFF PC. Examples include GCN 1.2 and Fermi, or going back further, the Nvidia FX. It's very concerning, and might explain Nvidia and its new power connector.

    Plus those top AMD figures are for a rumoured dual-MCM GPU. So that means the single-MCM part will be the midrange equivalent. Going from that, easily 200W (if the top end is 400W and has the best-binned MCMs).

    You saw it with this generation - the RTX 3060 Ti I have draws around 30W~40W more power under load than the GTX 1080 I use, and it's arguably the second most efficient desktop GPU in terms of performance/watt:
    https://tpucdn.com/review/amd-radeon...efficiency.png
    Maybe (and apologies to viewers tuning in to watch 3080 Ti talk, we've slightly diverted, though I guess talking about flagships is kind of on topic), though I think there are reasons for all of that:

    This gen Nvidia are suffering from that less than stellar Samsung process so almost the full line up is stuffed for efficiency. That goes away for the 40 series, and we're back to the more usual situation of both parties feeling like they have to win the flagship battle at all costs - so they'll give them juice until Scottie is crying they cannae take no more. That doesn't necessarily apply to the rest of the range as long as the process is OK and they've not done an odd architecture design.

    AMD delivered on that front for CPU, and the infinity cache they needed for chiplets has netted them a win in reducing mem bus too so it's not looking too bad, flagship aside.

    Nvidia have shown they can deliver efficient monolithic designs, it's just a question of how big/toasty they have to go to compete with MCM - I think their own MCM is not going to be mainstream ready by next time around. But once they're 'only' competing with mid-range/low-chiplet GPUs they should fall back into an optimal perf/power band.

  4. #20
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by kalniel View Post
    Maybe (and apologies to viewers tuning in to watch 3080 Ti talk, we've slightly diverted, though I guess talking about flagships is kind of on topic), though I think there are reasons for all of that:

    This gen Nvidia are suffering from that less than stellar Samsung process so almost the full line up is stuffed for efficiency. That goes away for the 40 series, and we're back to the more usual situation of both parties feeling like they have to win the flagship battle at all costs - so they'll give them juice until Scottie is crying they cannae take no more. That doesn't necessarily apply to the rest of the range as long as the process is OK and they've not done an odd architecture design.

    AMD delivered on that front for CPU, and the infinity cache they needed for chiplets has netted them a win in reducing mem bus too so it's not looking too bad, flagship aside.

    Nvidia have shown they can deliver efficient monolithic designs, it's just a question of how big/toasty they have to go to compete with MCM - I think their own MCM is not going to be mainstream ready by next time around. But once they're 'only' competing with mid-range/low-chiplet GPUs they should fall back into an optimal perf/power band.
    The thing is that MCM designs have a power penalty too, i.e. the interconnects. It's why AMD APUs are still monolithic, so I expect the first MCM designs to have their own set of power problems too.

    If AMD is using a dual-MCM design, I expect the single MCM will be its mainstream GPU. So think of it in terms of today's range: instead of a 5120-shader Navi 21, you have a pair of Navi 22 GPUs stuck together. We all know from the old dual-GPU cards that the best-binned GPUs were used for them. It's the same with Ryzen - the 16-core CPUs use a much better bin of chiplet (think Ryzen 9 5950X vs Ryzen 7 5800X).

    So if the top AMD GPU is 400W, it's worrying, because the mainstream GPU would be based on a single MCM die. That would be at least 200W, but probably more, as the chip used in it will be a worse bin.
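    A back-of-envelope sketch of that reasoning (every number here is a guess for illustration, not a leaked spec):

    ```python
    # Estimate a single-MCM mainstream card's power from a rumoured
    # dual-MCM 400W flagship. All figures are assumptions.
    flagship_board_power = 400   # rumoured dual-MCM flagship, watts
    shared_overhead = 60         # VRAM, VRM losses, fans etc. (guess)

    # power per GPU die on the flagship, after removing shared board overhead
    per_die = (flagship_board_power - shared_overhead) / 2

    binning_penalty = 1.15       # worse-binned dies need ~15% more power (guess)
    # a single-die card still carries most of its own board overhead
    mainstream_estimate = per_die * binning_penalty + shared_overhead * 0.7

    print(f"single-MCM mainstream card: roughly {mainstream_estimate:.0f} W")
    ```

    Even with generous assumptions about shared overhead, the estimate lands comfortably over 200W, which is the worry.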

  5. #21
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by CAT-THE-FIFTH View Post
    The thing is that MCM designs have a power penalty too, i.e. the interconnects. It's why AMD APUs are still monolithic, so I expect the first MCM designs to have their own set of power problems too.
    MCM is usually compared with having lots of chips on a PCB, and driving MCM connections is way lower power than superbuffers going off chip. Monolithic will always be cheapest, which is more where the APUs are targeted.

    I don't know if you saw this the other day, it was the first I had heard of it. Looks like a big GPU like this would be illegal to sell in California on the grounds that if we aren't careful computers will use every drop of power we generate around 2040, so I wonder if the rest of the world will follow suit:

    https://www.theregister.com/2021/07/26/dell_energy_pcs/
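    To put rough numbers on the on-package vs off-chip point, here's a sketch of the power cost of moving data over different links. The pJ/bit figures are order-of-magnitude assumptions in the spirit of published interconnect literature, not vendor specs:

    ```python
    # Order-of-magnitude sketch: energy cost of driving a 512 GB/s link
    # over different kinds of interconnect. pJ/bit values are assumed.
    energy_pj_per_bit = {
        "on-die wire":           0.1,   # assumed
        "MCM / on-package link": 0.5,   # assumed
        "off-package (PCB)":    10.0,   # assumed
    }

    bandwidth_gbit_s = 512 * 8  # 512 GB/s expressed in gigabits per second

    for link, pj in energy_pj_per_bit.items():
        # watts = (joules per bit) * (bits per second)
        watts = pj * 1e-12 * bandwidth_gbit_s * 1e9
        print(f"{link}: ~{watts:.1f} W at 512 GB/s")
    ```

    The gap is why going off-package is the expensive move: at these assumed figures the PCB link burns an order of magnitude more power than the on-package one for the same bandwidth.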

  6. #22
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by DanceswithUnix View Post
    MCM is usually compared with having lots of chips on a PCB, and driving MCM connections is way lower power than superbuffers going off chip. Monolithic will always be cheapest, which is more where the APUs are targeted.

    I don't know if you saw this the other day, it was the first I had heard of it. Looks like a big GPU like this would be illegal to sell in California on the grounds that if we aren't careful computers will use every drop of power we generate around 2040, so I wonder if the rest of the world will follow suit:

    https://www.theregister.com/2021/07/26/dell_energy_pcs/
    AMD made their APUs monolithic due to power considerations:
    https://www.anandtech.com/show/15708...0hs-a-review/2

    The downside of this chiplet design is often internal connectivity. In a chiplet design you have to go ‘off-chip’ to get to anywhere else, which incurs a power and a latency deficit. Part of what AMD did for the chiplet designs is to minimize that, with AMD’s Infinity Fabric connecting all the parts together, with the goal of the IF to offer a low energy per bit transfer and still be quite fast. In order to get this to work on these processors, AMD had to rigidly link the internal fabric frequency to the memory frequency.

    With a monolithic design, AMD doesn’t need to apply such rigid standards to maintain performance.
    You saw the issue with the first Threadripper/Epyc CPUs - a large percentage of the power draw was the IF.

    So in the case of AMD, I suspect they are willing to take the power draw hit of a large MCM if it means they can fab relatively small GPU chiplets.

  7. #23
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by CAT-THE-FIFTH View Post
    AMD made their APUs monolithic due to power considerations:
    https://www.anandtech.com/show/15708...0hs-a-review/2
    That's quite the oversimplification. Monolithic just makes sense for current APUs. You are scaling the design right down to dual-core Athlons, not right up to multi-core servers. If you don't *need* all those cores, then the support circuitry just gets in the way. You end up burning silicon area, performance and packaging complexity for nothing, so you just pack what you need on a single die.

    Note that GPU chiplets could easily throw that right back up in the air. Take a 5800X starting point and slap a GPU chiplet where the second compute die could go, and now you have an APU at scale beyond what makes sense for monolithic.

    So like all things in engineering, those GPU chiplets will need to be the *right* size. Too small and the overheads drown you. GPUs are more resistant to defects than a CPU die, you are more likely to be able to just disable a dead area and sell as a slightly lower end chip, but no doubt there will still be penalties in going too large. Someone somewhere will have a graph of costs, and know where the peak is.
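    That "graph of costs" can be sketched with a simple exponential yield model (yield ≈ exp(-D0 × area)). The defect density, wafer area and per-die overhead below are illustrative assumptions, not foundry data:

    ```python
    # Why chiplet size matters for cost: small dies yield far better,
    # even after paying a per-die interconnect overhead. Numbers assumed.
    import math

    D0 = 0.1              # defects per cm^2 (assumed)
    wafer_area = 70000    # usable mm^2 on a 300 mm wafer, roughly
    per_die_overhead = 5  # extra mm^2 of interconnect/PHY per chiplet (assumed)

    def good_dies(die_mm2: float) -> float:
        """Good dies per wafer for a given die size, ignoring edge losses."""
        area = die_mm2 + per_die_overhead
        yield_frac = math.exp(-D0 * area / 100)  # area converted mm^2 -> cm^2
        return (wafer_area / area) * yield_frac

    # One 500 mm^2 monolithic GPU vs. four 125 mm^2 chiplets per GPU:
    monolithic = good_dies(500)
    chiplet_gpus = good_dies(125) / 4
    print(f"monolithic GPUs per wafer: {monolithic:.0f}")
    print(f"4-chiplet GPUs per wafer:  {chiplet_gpus:.0f}")
    ```

    Under these assumptions the four-chiplet approach yields noticeably more working GPUs per wafer, but the per-die overhead is exactly the knob that punishes going too small, which is the peak someone's graph will show.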

    Note that the Anandtech article doesn't seem to be from AMD, just from Dr Cutress who I'm sure is a fine chemist and a good writer, but he isn't an engineer.

  8. #24
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Quote Originally Posted by DanceswithUnix View Post
    That's quite the oversimplification. Monolithic just makes sense for current APUs. You are scaling the design right down to dual-core Athlons, not right up to multi-core servers. If you don't *need* all those cores, then the support circuitry just gets in the way. You end up burning silicon area, performance and packaging complexity for nothing, so you just pack what you need on a single die.

    Note that GPU chiplets could easily throw that right back up in the air. Take a 5800X starting point and slap a GPU chiplet where the second compute die could go, and now you have an APU at scale beyond what makes sense for monolithic.

    So like all things in engineering, those GPU chiplets will need to be the *right* size. Too small and the overheads drown you. GPUs are more resistant to defects than a CPU die, you are more likely to be able to just disable a dead area and sell as a slightly lower end chip, but no doubt there will still be penalties in going too large. Someone somewhere will have a graph of costs, and know where the peak is.

    Note that the Anandtech article doesn't seem to be from AMD, just from Dr Cutress who I'm sure is a fine chemist and a good writer, but he isn't an engineer.
    Maybe, but even Fujitsu said the connection fabric was a major issue for power too, so they spent a lot of resources trying to make it as efficient as possible. It wasn't the core/GPU complexes in the A64FX which were the problem. Remember AT testing the power draw of the IF - it matches what I have read elsewhere, that the connection logic is actually the bigger issue for power consumption now.

    Also, if you read more about the APUs, they can ramp the IF frequencies down and up as needed, which is not done on the CPUs. I would also think AT would have asked AMD about this - Ian Cutress is just relaying what AMD told him. AMD is unlikely to allow non-vetted technical information to be released by the press.

    You also need to consider that an APU is 2X the 7NM silicon of an AMD 7NM CPU. If anything, you could argue AMD re-using a Zen2/Zen3 chiplet and making a combined I/O and GPU die on 12NM might have made more sense for costs (they could have literally ripped the same part out of a Zen+ APU). AMD probably went with a smaller GPU shader count to keep size down.

    Even putting those discussions aside, I expect AMD to use two midrange GPU chiplets for its flagship. That would make sense from a power perspective. So the other issue here is going to be binning. The mainstream graphics cards are probably going to use a single GPU chiplet, and the mainstream will get worse-binned parts.

    So if the 400W rumour is true for the top model, it probably still means over 200W in the mainstream. Kalniel wasn't that happy with that being the case now, and an RTX 3060 Ti is the limit of where I want to go too!
    Last edited by CAT-THE-FIFTH; 31-07-2021 at 11:38 AM.

  9. #25
    Registered User
    Join Date
    Aug 2021
    Posts
    1
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: Asus GeForce RTX 3080 Ti TUF Gaming

    Hey lovely Hexus Team,

    I got this card a month ago and I'm testing it myself.
    In my first test I tried out Control with everything maxed out at 1440p, and I heard coil whine in my case. At first I thought it was my PSU, but it was the GPU.
    Did you have coil whine while testing this card?
    Also, my GPU is idling at nearly 53°C with a room temp of 25.5°C in the Fractal Meshify C case.
    Is this reasonable? The card never drops into zero-fan mode, and I was a bit disappointed because of that.

    I'm thinking about RMA-ing this card, but if I do, it will surely take a long time till I get a replacement.

    Thanks for answering.
