Given recent price hikes, I suspect modern GPUs carry much higher profit margins than they did historically (at least the non-ray-tracing ones with sensible die sizes). With margins like that there's a lot of room for cheap cards to still turn a profit (followed by AMD cutting 5700 and 5500 prices to match, and Nvidia sweating nervously). Selling at a loss would probably make current 570 prices look steep.
Let's be really clear: Intel is a middling memory foundry with a CPU that got lucky thanks to IBM, Microsoft, and the existence of AMD (IBM required there be a second-source CPU supplier).
There were better CPUs, there were better architectures, and there were definitely more power-efficient systems available. The x86 was one of the slowest (per clock) and least power-efficient parts out there, so it should be no surprise that Intel was able to make them faster and sippier over the years. The business competition got swept away by the power of the 900-pound IBM marketing gorilla, and then the rest of the world got wiped out by the avalanche of the Microsoft yeti.
Intel hasn't had to innovate much over the years, except to keep AMD from nipping at its heels - and the amount of unlawful activity it's been shown to be involved in to keep AMD out of retail space shows how scared it's _really_ been of the competition.
If Intel is seen to be leveraging its near-monopoly in the CPU space to enter and become a major player in the graphics space, competition regulators in a bunch of jurisdictions are going to start looking _very_ closely at its activities again. As such, you're unlikely to see "cheap Intel cards", for the simple reason that the moment their sales start impinging significantly on sales of AMD/Nvidia discrete parts, those competitors will be down at the regulators' offices thumping on the counters.
Adding to my last comment:
One of the things that _really_ irritates the guys I work with (large planetary imaging datasets) is how Nvidia keeps dumping driver support for older devices.
CUDA might be "cool" and the "only way" for now, but everyone's frantically learning OpenCL for portability and trying to port as much as they can so they're not CUDA-dependent.
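To give a flavour of what that porting looks like at the kernel level (a minimal sketch of my own, using a made-up element-wise scaling kernel rather than anything from our actual pipeline), the translation is often fairly mechanical:

    // CUDA version: one thread per element, index built from block/thread IDs.
    __global__ void scale_cuda(float *img, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) img[i] *= k;
    }

    // OpenCL C version of the same kernel: the global work-item ID replaces
    // the blockIdx/threadIdx arithmetic, and buffers are qualified __global.
    __kernel void scale_ocl(__global float *img, float k, int n) {
        int i = get_global_id(0);
        if (i < n) img[i] *= k;
    }

The painful part isn't kernels like this, it's the host-side plumbing and anything leaning on CUDA-only libraries.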
They've been looking longingly at Xeon Phi units for a while. We even have a couple of eval cards kicking around - which, although long in the tooth now, are better supported than Nvidia cards of the same age.
In the end, the hardware that powers this kind of work may not even _be_ GPGPUs, but being able to treat your "gpu" as a cluster of x86s has its own programming simplicity for number-crunching work when you're not processing vectors and triangles.
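To make that "programming simplicity" point concrete (again a sketch of my own, not our actual code): on a Phi-style x86 part, the loop that would need a hand-written kernel and launch configuration on a GPU is just an ordinary OpenMP loop.

    #include <stdio.h>
    #include <omp.h>

    // Scale every sample in a buffer in place. On a Xeon Phi (or any
    // many-core x86 machine) this parallelises across cores with one pragma,
    // with no device buffers, kernels or launch geometry to manage.
    static void scale(float *img, float k, long n) {
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            img[i] *= k;
    }

    int main(void) {
        enum { N = 1 << 20 };
        static float img[N];
        for (long i = 0; i < N; i++) img[i] = (float)i;
        scale(img, 0.5f, N);
        printf("img[42] = %f (threads available: %d)\n",
               img[42], omp_get_max_threads());
        return 0;
    }

Build with your compiler's OpenMP flag (e.g. -fopenmp) and the same source runs on the host or the coprocessor.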