System utilises four TSMC-made 7nm Graphcore Colossus Mk2 GC200 IPUs.
So if I read this right... this IPU undercuts Nvidia by 12x on price?
But one of the big reasons Nvidia has won in compute is support and the surrounding infrastructure. AMD has had awesome compute cards, but Nvidia got everyone coding in CUDA, so people can't easily switch. Even if the hardware cost is low, the total cost of changing everything around those compute cards is high: it includes the business growth lost while devs and users put resources into learning and optimising the new stuff rather than developing and growing, plus the unknown that is support (Nvidia seems to support things pretty well, but you'd be mad to hope for similar support from AMD, who can't even code a driver properly).
Hopefully they can nab some business from start-ups, but they'll have to prove they can support their product properly before anyone will take the risk of leaving a known quantity like Nvidia.
***The above is opinion based on idiocy and may in no way reflect reality
Maybe so... Nokia thought the phones they made would never be surpassed, then came the iPhone. In some ways Nvidia is not competitive at all, and I can assure you the big customers will go where they get the best power and value for the money spent, so they could buy 12x as many of these and get $36 million worth of compute for $3 million.
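To put numbers on that claim, a minimal sketch; the $36M/$3M figures and the 12x ratio come straight from the post above, not from any real price list:

```python
# Worked version of the post's 12x price claim (all figures hypothetical,
# taken from the comment, not from vendor pricing).
nvidia_fleet_cost = 36_000_000  # dollars for a given amount of compute on Nvidia
price_undercut = 12             # claimed factor by which the IPU is cheaper

# Same amount of compute at 1/12 the spend...
ipu_fleet_cost = nvidia_fleet_cost / price_undercut
print(ipu_fleet_cost)  # 3000000.0

# ...or, equivalently, 12x the compute for the same budget.
compute_multiple = nvidia_fleet_cost / ipu_fleet_cost
print(compute_multiple)  # 12.0
```

Either way of reading it is the same ratio; the post phrases it as "buy this 12x" for the same money.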
I still don't understand why these "sophisticated" technologies are not being exploited by the classic CPU industry.
TSMC seem to be doing a good job with very large die sizes.
While AMD arguably makes the better compute card, they've primarily backed OpenCL as their software stack, and its feature set is incredibly poor compared with CUDA, so to be fair you can't blame people for using CUDA imo.
Several of my programs can run on either OpenCL or CUDA, and the OpenCL versions lack features found in the CUDA ones... so why would you use OpenCL over CUDA?
I'm sure if AMD spent some time working with the OpenCL group to add the features "missing" relative to CUDA, it would be used more, but for some reason AMD don't seem that interested.
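The dynamic described above can be sketched in a few lines: when one backend's feature set is a strict superset of the other's, any program that needs even one feature from the gap has its choice made for it. The feature names and sets below are purely illustrative assumptions, not real CUDA or OpenCL capability lists.

```python
# Hypothetical feature tables illustrating the post's argument; the names
# are made up for the example, not actual CUDA/OpenCL feature queries.
BACKEND_FEATURES = {
    "cuda":   {"fp16", "unified_memory", "texture_objects", "cooperative_groups"},
    "opencl": {"fp16", "unified_memory"},  # assumed smaller set, as the post claims
}

def pick_backend(required_features):
    """Return the first backend that supports every required feature, else None."""
    for name, supported in BACKEND_FEATURES.items():
        if required_features <= supported:  # subset test: all requirements met
            return name
    return None

# A program needing only common features could use either backend...
print(pick_backend({"fp16"}))
# ...but one feature from the gap forces the choice.
print(pick_backend({"fp16", "texture_objects"}))
```

Once a codebase depends on anything in that gap, porting means reimplementing the missing feature yourself, which is exactly the switching cost the thread is describing.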
Last edited by LSG501; 18-07-2020 at 11:10 AM.
Because OpenCL is just that: totally open, and not owned by AMD. Sure, they could help out, but then it would just be a poor CUDA rival. OpenCL is in use on many more devices than CUDA but does appear to lack some clout in the retail market. Mind you, it's the same with Vulkan right now...
Old puter - still good enuff till I save some pennies!
Last I checked, they are an "active" part of the group developing OpenCL, as are Nvidia, Arm and Intel.
Now don't get me wrong, I'm all for "universal" code, but if you want to compete with something like CUDA you need to actively develop OpenCL to keep pace, if nothing else... I'm not even sure there have been any real updates since OpenCL was released (I'm looking primarily at my use case of 3D rendering on GPU etc.).
Part of the reason CUDA is as good as it is is that Nvidia have been putting money into developing it and adding features, and while Nvidia isn't exactly short on cash, the Khronos Group isn't exactly poor either, considering all they really do is these open acceleration projects... including Vulkan. Mind you, if you look at the OpenCL group and how convoluted it all looks, I can understand why people just pick the much easier to use CUDA.
edit> Just looked: 3.0 seems to have been released this year, and after reading about it, it basically makes me understand why people are using CUDA. The people in charge don't really seem to have a clue imo and can't seem to make their minds up on how they're developing it; at least Nvidia is consistent.
Because most people only need compute power when playing games, and most games programmers seem to struggle to get their code to run on 4 cores, let alone the 1,472 cores this monster can manage.
The 900MB of SRAM is interesting though; it would make one heck of an L3 cache on a CPU.
The thing is that OpenCL is "supposed" to be the cross-platform version of CUDA but doesn't even come close to feature parity. Like I say, if you want people to use it, you need to be at least comparable to your main competitor. I'd say CUDA is more useful for most people; it's being developed in a way that improves on the last version and adds features etc... OpenCL is just all over the place at the moment.
Even Apple, who were the biggest users of OpenCL, have shifted away from it; one of the biggest GPU-based 3D renderers (V-Ray) has stopped supporting it, and even AMD's own GPU-based renderer (not a bad renderer, in all honesty) uses an older version as its base because it's "better" than the newest one...
It's a bit like Mozilla and Firefox... it was a great product but has been, or is being, depending on your viewpoint (I'm currently a Firefox user), really poorly managed, which in turn is causing even die-hard Firefox users to get annoyed with unnecessary changes.