He asserts it doesn't need a fan, and that consumption isn't high due to PCIe Gen 4 device utilisation.
Why would it be an M.2 that pushes the power up? Lane-wise, it is insignificant. Put in a pair of x16 Gen 4 cards and rev them up. The difference between Gen 3 and Gen 4 is so tiny because, percentage-wise, as a piece of the entire pie, the amount is minuscule.
Yes, but if an x16 graphics card only loses 1-2% of its potential speed going from PCIe 3.0 to 2.0, it follows that it will be very, very hard to get a graphics card to really push PCIe 4.0.
NVMe drives, on the other hand, are currently using all the bandwidth a PCIe 3.0 x4 slot can deliver.
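The numbers behind that point can be sketched quickly. This is a rough back-of-envelope calculation using the commonly quoted usable per-lane throughput figures (after encoding overhead), not exact spec maths:

```python
# Approximate usable per-lane PCIe throughput in GB/s, after
# 8b/10b (Gen 2) and 128b/130b (Gen 3/4) encoding overhead.
PER_LANE_GBPS = {2.0: 0.5, 3.0: 0.985, 4.0: 1.969}

def link_bandwidth(gen, lanes):
    """Approximate usable one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A Gen 3 x4 M.2 slot tops out around 3.9 GB/s, which fast NVMe
# drives already saturate; an x16 GPU slot has far more headroom.
print(f"Gen 3 x4:  {link_bandwidth(3.0, 4):.1f} GB/s")
print(f"Gen 4 x4:  {link_bandwidth(4.0, 4):.1f} GB/s")
print(f"Gen 3 x16: {link_bandwidth(3.0, 16):.1f} GB/s")
```

So an NVMe drive bumps into the Gen 3 x4 ceiling long before a GPU bumps into the Gen 3 x16 one, which is why the drives benefit from Gen 4 first.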
I think he may have forgotten something: the Gen 4 NVMe drive was run on its own and showed a load of 8.86. Now, if you ran that together with the bottom yellow line, would you not have to add them up?
Or is it a case of one or the other?
What does it matter now if men believe or no?
What is to come will come. And soon you too will stand aside,
To murmur in pity that my words were true
(Cassandra, in Agamemnon by Aeschylus)
To see the wizard one must look behind the curtain ....
If you create a benchmark that just tests bandwidth then yes, you see an improvement with a PCIe 4 card on PCIe 4 over 3. That's all it shows; it doesn't suggest there is any kind of improvement when bandwidth is not the limit, which it isn't in any game or other test.
Going on AMD's philosophy for why they still use blowers on their GPUs, I'm guessing they only made the fan on the PCH a requirement because they can't know whether the system it's going into has decent airflow.
No disrespect to der8auer, but isn't NVMe PCIe 4.0 RAID the real heat generator? He is not utilising it fully, so he is not getting the results described by the board vendors at Computex...
Because the PCIe lanes for the x16 slots come from the CPU and not the chipset, so using graphics cards won't stress the chipset at all. The only things that the chipset controls are the secondary NVMe slots (AFAIK, the primary slot is still direct to CPU), SATA, USB, Ethernet/WiFi, and other peripherals.
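That CPU-versus-chipset split can be sketched in a few lines. The device groupings below reflect typical X570 board wiring as described in the post above; the exact names are illustrative, not from any board manual:

```python
# Typical X570 lane topology (assumed): the CPU drives the x16
# slots and the primary M.2, while everything else hangs off the
# chipset via its x4 Gen 4 uplink.
CPU_ATTACHED = {"x16 GPU slot", "primary M.2 slot"}
CHIPSET_ATTACHED = {"secondary M.2 slot", "SATA", "USB", "Ethernet/WiFi"}

def stresses_chipset(device):
    """True if loading this device pushes traffic through the chipset."""
    return device in CHIPSET_ATTACHED

print(stresses_chipset("x16 GPU slot"))       # GPU load bypasses the PCH
print(stresses_chipset("secondary M.2 slot")) # this is what heats it up
```

Which is why revving up a pair of graphics cards tells you nothing about chipset power draw, but loading the secondary NVMe slots does.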
Yes, I am well aware of that, thank you.
I meant that the benchmark would better test the power consumption of PCIE 4.0 compared to 3.0 (the topic of this post), as it could happily saturate the bus.
Of course that was before I remembered the GPU slots are directly wired to the CPU and so are inconsequential regarding chipset power consumption.
Now this is interesting: https://www.techradar.com/uk/amp/new...y-power-hungry
Might be a turnaround point for Steve's aggravating remarks about power consumption. If the chipset is also pulling power from the EPS lines, that could explain the on-the-wire differences between stated TDP and power consumed.
Last edited by Tabbykatze; 10-07-2019 at 10:02 AM.
All I need to know is: am I going to lose much performance sticking a 3000 into an X370 mobo? Because if not, I'll be happy to stick with that.
Cheers. I'm happy with the features my Asus Prime 370 has. Doubt I'll be buying a PCI-E 4 card any time soon. Certainly not one that'll saturate it. So I'll stick with my 370 for a few years.