The Radeon Pro SSG is using a Fury GPU.
If it is happening directly on the GPU then I presume it is making full use of the HSA that AMD have been working on for years to treat VRAM as a cache on the SSD and page fault data in and out automatically. Tie that in with their asynchronous thread management to halt and restart threads blocked on paging faults, and it could be very nice to write for.
Or it could be a horrible kludge; I don't have $10K to go find out.
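The "VRAM as a cache on the SSD" idea above can be sketched as a toy demand-paging model. Everything here (the class name, capacities, page contents) is purely illustrative and is not AMD's actual mechanism; it just shows the cache-miss-triggers-a-fault behaviour being described:

```python
from collections import OrderedDict

class PagedVram:
    """Toy model: VRAM acts as an LRU page cache over a larger SSD store.

    Accessing a page not resident in 'vram' triggers a simulated page
    fault that evicts the least-recently-used page and loads the new one.
    """
    def __init__(self, ssd_pages, vram_capacity):
        self.ssd = ssd_pages            # backing store: page_id -> data
        self.capacity = vram_capacity   # how many pages fit in "VRAM"
        self.vram = OrderedDict()       # resident pages, LRU order
        self.faults = 0

    def read(self, page_id):
        if page_id in self.vram:
            self.vram.move_to_end(page_id)      # hit: mark recently used
            return self.vram[page_id]
        self.faults += 1                        # miss: simulated page fault
        if len(self.vram) >= self.capacity:
            self.vram.popitem(last=False)       # evict LRU page
        self.vram[page_id] = self.ssd[page_id]  # fetch from backing store
        return self.vram[page_id]

ssd = {i: f"page-{i}" for i in range(8)}
cache = PagedVram(ssd, vram_capacity=2)
for p in [0, 1, 0, 2, 0]:
    cache.read(p)
print(cache.faults)  # 3 (pages 0, 1 and 2 each miss once)
```

In the real card the interesting part would be what the post describes: halting only the threads blocked on a fault and keeping the rest of the GPU busy, which a serial model like this can't show.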
Looks like a pro duo with the second GPU replaced with an SSD.
Now I thought I had read that somewhere but can't find where; it does look like that, though. If you can read from one SSD whilst writing to the other then that might help performance a bit.
Edit to add: I read it on Ars Technica, though they say it is Polaris 10 based when others say it is Fiji. Charlie likes it: http://semiaccurate.com/2016/07/25/a...pus-calls-ssg/
Last edited by DanceswithUnix; 26-07-2016 at 02:53 PM.
It's Fiji based. Look at this comparison of the RX480 and the Fury Nano:
http://core0.staticworld.net/images/...67966-orig.jpg
Not saying they're not believable, just that something else seems to be going on beyond simply moving what's presumably an M.2 SSD from the motherboard to the GPU, even accounting for the data being routed via the SSD, RAM, and across PCIe; I'd guess that routing adds more to latency than it causes a bandwidth bottleneck.
In other words, I suspect the 5x-plus increase in fps is down to more than just moving the storage from the motherboard to the GPU.
Latency generally *is* a bandwidth bottleneck.
There are several methods generally used to improve performance in computers:
Zero copy I/O.
Simplify the data path for the most common case.
Find processing that isn't strictly necessary, and remove it.
Pre-fetch data to where it will be needed, before it is needed.
Reduce interrupts/CPU context switches.
Avoid locks on a single resource.
I suspect this helps with all those techniques.
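To make the first item concrete, here's a minimal sketch of zero-copy-style I/O using a memory-mapped file: instead of `read()` copying the data into a userspace buffer, `mmap` makes the file itself addressable and the OS pages it in on demand. The file name and sizes are made up for illustration:

```python
import mmap
import os
import tempfile

# Write a scratch file, then read it two ways: a copying read()
# versus an mmap view, which avoids the extra userspace buffer copy.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(b"ABCDEFGH" * 1024)       # 8 KiB of test data

with open(path, "rb") as f:
    copied = f.read(8)                # read(): kernel copies into our buffer

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        window = view[:8]             # mmap: slice the file directly

print(copied == window)  # True
```

The same shape of trick (map the device into the address space, touch pages to fault them in) is presumably what makes the SSG interesting to program against, though that's speculation on my part.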
That's not my understanding of latency. I've always considered latency the time delay between cause and effect; bandwidth, on the other hand, is how much data can be sent. If I sent ten 4TB HDDs via snail mail that would be high latency and high bandwidth, although maybe not very practical.
Bandwidth is data over time. Snail mail of high volume may still give a relatively high bandwidth, but you can increase the bandwidth further by reducing the latency, because latency is part of the time measure.
Put another way, reducing latency *always* increases bandwidth, all other things being equal.
Like I said, that's not my understanding. Latency is just the time taken from cause to effect; reducing it only increases bandwidth if you're making multiple requests from different sources. Whether you send 4TB of data via snail mail or a broadband connection you're still sending 4TB; it's just that the former has a latency of days and the latter a latency of milliseconds.
Besides, is there even enough latency in the example they used to account for a five-fold increase?
EDIT: What you seem to be describing there is throughput; that certainly does increase when you reduce latency.
Last edited by Corky34; 26-07-2016 at 05:53 PM.
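The disagreement above comes down to arithmetic: for a synchronous request, the effective throughput is size / (latency + size / link rate), so for a fixed request size, cutting latency does raise the achieved bandwidth even though the link rate is unchanged. The numbers below are illustrative, not measurements of the SSG:

```python
def effective_throughput(request_bytes, latency_s, link_bytes_per_s):
    """Achieved bytes/sec for one synchronous request: the fixed
    round-trip latency is paid before any payload arrives."""
    total_time = latency_s + request_bytes / link_bytes_per_s
    return request_bytes / total_time

link = 4e9            # 4 GB/s link (illustrative)
req = 64 * 1024       # 64 KiB per request (illustrative)

slow = effective_throughput(req, 100e-6, link)  # 100 us latency
fast = effective_throughput(req, 10e-6, link)   # 10 us latency

print(f"{slow/1e9:.2f} GB/s vs {fast/1e9:.2f} GB/s")
# prints something like 0.56 GB/s vs 2.48 GB/s
```

With these (made-up) numbers, a 10x latency cut yields roughly a 4.4x throughput gain on the same link, which shows how a latency reduction alone could plausibly sit behind a large fps jump for small, dependent reads. For huge transfers the latency term is amortised away, which is the snail-mail-full-of-HDDs case.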
I guess an analogy would be that the largest bandwidth (I guess that would be the right term?) available is a FedEx plane, according to QI. In theory you can transport a massive amount of data from one place to another (let's ignore the overhead of actually connecting and reading all those drives), but the latency would be hours!
With BeSang's super NAND aiming for 2c per GB, it would be interesting to fit a desktop card with a couple of hundred GBs.
http://hexus.net/tech/news/storage/94762-besang-incs-3d-super-nand-costs-just-2-per-gigabyte/