And? That doesn't change the fact that GPUs can run circles around CPUs in the floating-point stakes, simply because of the raw number of compute units they have: even clocked slower, they brute-force their way through highly parallel problems by sheer weight of numbers. Think of it this way: the RX Vega 64 has 4096 stream processors, each doing FP maths. How could a CPU ever compete with that? It's got to the point that it's barely worth utilising the CPU's floating-point units at all, and the CPU is practically relegated to the role of a glorified taskmaster/data loader. That's why GPU miners just throw basic Pentium CPUs into their mining rigs and use PCIe riser cables to jam the motherboard completely full of GPUs.
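To make the "sheer numbers" point concrete, here's a minimal Python sketch (illustrative only, not real GPU code) of the difference in mindset: a CPU core walks an array serially, while a GPU conceptually assigns one lightweight lane to each element, so 4096 stream processors can chew through the whole array at once. The function names and the per-lane structure are my own illustration, not any particular API.

```python
def saxpy_serial(a, x, y):
    # CPU-style: one loop, one element at a time.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_gpu_style(a, x, y):
    # GPU-style (conceptually): lane i computes element i independently,
    # with no dependence on any other lane, so all lanes can run in
    # parallel across the stream processors. Here we just model the
    # per-lane work function.
    def lane(i):
        return a * x[i] + y[i]
    return [lane(i) for i in range(len(x))]
```

The key property is that each lane's result depends only on its own index, which is exactly what makes the problem "embarrassingly parallel" and a good fit for thousands of slow-but-numerous ALUs.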
The biggest Power/Opteron/Xeon-based supercomputers were designed to simulate nuclear explosions. The processors weren't each doing their own thing; they were all running the same program, each modelling the bit of volume assigned to it for the simulation. ASCI Red was about 1 TFLOPS, which seems quaint these days. It had 1.2 TB of RAM, though spread across so many CPUs that I expect a lot of it just held duplicate copies of the code, and it drew 850 kW. Of all those metrics, it sounds like power consumption is what really pushed these machines towards graphics cards.
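The "same program, different bit of volume" approach is classic domain decomposition. A rough Python sketch of the idea (my own illustration, with a hypothetical `partition_1d` helper; real codes use MPI and 3D decompositions): every rank executes identical code and only its rank number differs, which tells it which slab of cells it owns.

```python
def partition_1d(total_cells, n_procs, rank):
    """Return the [start, end) range of cells owned by `rank`,
    spreading any remainder over the lowest-numbered ranks."""
    base, extra = divmod(total_cells, n_procs)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end

# Every "processor" runs the same function; only `rank` differs.
slabs = [partition_1d(1000, 8, r) for r in range(8)]
# Together the slabs tile the whole volume with no gaps or overlaps,
# so each processor simulates its own chunk and exchanges only the
# boundary cells with its neighbours.
```

This is why the per-node memory mostly holds that node's slab of the simulation state, plus a redundant copy of the program itself on every node.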