These pro Ampere (Quadro replacement) cards won't be available until Dec or Jan.
Yo, there's a mistake: it should be 48GB, not MB.
Monster card ;D
Regards,
DR (05-10-2020)
I'm looking at it as a 3D designer, who this is arguably aimed at, and honestly the A6000/A40 is not that impressive when you look purely at the specs and the 'performance' they list compared with a 3090... it has about 300 extra CUDA cores and double the RAM (admittedly with ECC) over the 3090, but will likely be around £5000+.
The one key benchmark they mention is the KeyShot one: a 3090 does around 80x, the A6000 is 91x... I know which way I'd be going if I used KeyShot, because it's pretty rare you'd see the need for more than 24GB of GPU memory...
I do however like the 2-slot design, although it seems a bit weird having a blocked-off end, and I bet it will be really noisy...
The one thing this does raise the possibility of is a 3090 Super, if AMD is good, because the 3090 doesn't use all the CUDA cores of the GA102 chip.
I'd say the main difference with the 3090 is the workstation drivers. In relevant applications, of course, the 3090 could be a lot slower than the A6000 without them.
That is one pretty card. Beefy goodness.
Actually, unless there are some artificial restrictions being put in place by the program (yes, there are some that do that), there is very little difference (usually down to clock speeds) between 'Quadro' and GeForce, at least in the software I use.
Workstation drivers don't mean as much these days either, because of the GeForce 'creator ready' drivers.
Kind of pointless specs, and it will probably cost more than a 3090, which has more CUDA power. Memory isn't needed for compute tasks, as CUDA will use virtual memory for pooling if it has been set to system managed; it will scale across terabytes, making 48GB of onboard RAM worthless.
The 30XX Ti should have NVLink.
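A minimal sketch of what the "system managed" claim above would look like in CUDA terms, assuming it refers to unified (managed) memory: cudaMallocManaged lets an allocation exceed physical VRAM, with the driver paging data between host RAM and the GPU on demand. The 64 GiB size is purely illustrative, and the page migration this triggers carries a real performance cost.

// Hedged sketch only: assumes the "system managed" claim refers to CUDA
// unified (managed) memory, where an allocation can exceed physical VRAM
// and is paged between host RAM and the GPU on demand.
#include <cstdio>
#include <cuda_runtime.h>

// Fill the buffer on the GPU; pages migrate to the device as they are touched.
__global__ void fill(float *data, size_t n, float value) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = value;
}

int main() {
    // Illustrative size only: 64 GiB, more than the 48 GB on an A6000.
    size_t bytes = 64ULL << 30;
    size_t n = bytes / sizeof(float);

    float *data = nullptr;
    // Managed allocation: one pointer usable from both the CPU and the GPU.
    if (cudaMallocManaged(&data, bytes) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }

    unsigned int threads = 256;
    unsigned int blocks = (unsigned int)((n + threads - 1) / threads);
    fill<<<blocks, threads>>>(data, n, 1.0f);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);   // pages migrate back on host access
    cudaFree(data);
    return 0;
}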
It has SLI, which just turns the slave card into a dedicated compute unit while the master card is a dedicated GPU, with the GPU not being sliced up to create CUDA cores.
As far as HWiNFO reports, VRAM doesn't get used while rendering, while CUDA will use your swap file instead.
You need to go do some research, because you're wrong in everything you've just said... speaking as someone who knows first-hand how it works and knows that the SLI connector was replaced by NVLink (admittedly three versions with differing bandwidth) with the RTX series.
While it might be labelled 'SLI' in the GeForce control panel (and on motherboards), you've been able to 'merge' VRAM and CUDA cores since the RTX 2000 series. It does have a small penalty in overall performance due to cross-communication and a small bit of duplication, but at its simplest it essentially doubles the VRAM available to CUDA and increases the core count too. Not all software can read the VRAM usage correctly when 'merged' via NVLink (a rough illustration follows after the link below).
And to save you some time...
https://www.chaosgroup.com/blog/prof...idia-rtx-cards
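To illustrate the pooling mechanism being described above (not the specific method any particular renderer uses), here is a rough CUDA sketch of peer-to-peer access between two NVLink-bridged cards: once peer access is enabled, a kernel on one GPU can dereference memory allocated on the other, which is how applications can present the two cards' VRAM as one larger pool. The 512 MiB buffer size is an arbitrary placeholder.

// Hedged sketch only: demonstrates CUDA peer-to-peer access, one way two
// NVLink-connected GPUs can be treated as a single larger memory pool.
// It is not the specific mechanism used by any particular renderer.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        fprintf(stderr, "this sketch needs two GPUs\n");
        return 1;
    }

    // Check whether each device can directly access the other's memory
    // (true for NVLink-bridged RTX cards and some PCIe topologies).
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        fprintf(stderr, "peer access not supported on this topology\n");
        return 1;
    }

    // Enable peer access in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // Put half of a (hypothetical) large buffer on each card; 512 MiB is an
    // arbitrary placeholder size.
    size_t half = 512ULL << 20;
    float *part0 = nullptr, *part1 = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&part0, half);
    cudaSetDevice(1);
    cudaMalloc(&part1, half);

    // A kernel launched on device 0 could now read part1 directly over the
    // link; that cross-GPU traffic is the small performance penalty mentioned
    // in the post above.
    printf("peer access enabled between GPU 0 and GPU 1\n");

    cudaSetDevice(1);
    cudaFree(part1);
    cudaSetDevice(0);
    cudaFree(part0);
    return 0;
}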