As found on the upcoming Radeon R9 390X 'Fiji' GPU.
I'm holding off getting a new graphics card as I'm eagerly anticipating AMD's new high-end cards. Here's hoping they're competitive.
In addition, there are rumours that AMD is trialling dual-link HBM: two stacks of DRAM (2GB) on a single base logic die that handles the routing between the two stacks. This would allow them to get to 8GB with Fiji, though it may arrive a month or two later than the 4GB versions.
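If the rumour pans out, the sums line up with an 8GB card. A quick sketch, assuming the commonly published 1GB-per-stack figure for first-gen HBM and four interposer sites on Fiji:

[code]
# Rough capacity sums for the dual-link HBM rumour (stack size assumed
# from published first-gen HBM figures: 1GB per standard 4-die stack)
standard_stack_gb = 1
dual_link_site_gb = 2 * standard_stack_gb   # two stacks on one base logic die
sites_on_fiji = 4                           # assumed interposer sites

print(f"Standard:  {standard_stack_gb * sites_on_fiji} GB")   # 4 GB
print(f"Dual-link: {dual_link_site_gb * sites_on_fiji} GB")   # 8 GB
[/code]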
Can't this HBM memory technology be applied to everything, i.e. system RAM too? Or do GDDR and HBM differ too much from conventional desktop RAM? Then again, you could argue that RAM is rarely the bottleneck in desktop systems, and increasing its speed wouldn't give much of a performance boost relative to the CPU or GPU for most tasks.
Possible typo: "use 2Gb slices resulting in 256MB" - is this meant to be 256Mb slices resulting in 2GB?
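For what it's worth, 2Gb (gigabits) per slice does come out to 256MB (megabytes), so it may not be a typo. A quick sanity check, assuming the usual first-gen HBM layout of four DRAM slices per stack and four stacks on the card:

[code]
# Sanity check: do "2Gb slices" really come to 256MB each?
GIGABIT = 1024**3          # bits in a gigabit (binary)
BITS_PER_BYTE = 8

slice_bits = 2 * GIGABIT   # 2Gb per DRAM slice, as stated in the article
slice_megabytes = slice_bits / BITS_PER_BYTE / 1024**2
print(f"Per slice: {slice_megabytes:.0f} MB")                 # 256 MB

# Assumed first-gen HBM layout: 4 slices per stack, 4 stacks per card
stack_gb = slice_megabytes * 4 / 1024
card_gb = stack_gb * 4
print(f"Per stack: {stack_gb:.0f} GB, per card: {card_gb:.0f} GB")  # 1 GB, 4 GB
[/code]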
Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network: Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
Laptop: Dell Precision 5510 || Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive
Yes, as alluded to by this line near the end of the article:
"AMD has apparently solved the HBM design and implementation problem that will eventually fall to all who need to use memory for processors. The implications are far-reaching, from Nvidia to Intel to ARM, but, right now, HBM may not quite be that appetising free lunch it first appears."

A memory interface is a memory interface. Where AMD are right now, they could make an entire system on a single interposer: APU, southbridge, HBM - they could probably even work out a way to stick 32GB of flash memory on there too. Whether such a device would be economically viable is another matter entirely, and of course some people will always want their system RAM upgradeable, which an HBM-based system wouldn't be. But assuming AMD actually get this to market in decent volume, the proof will be there for it to be expanded to other products. An APU with 1GB of HBM on-interposer as a graphics cache would be pretty remarkable, for instance.
I'm still not sure. We've already seen what happens to an APU when the RAM bandwidth limitation is largely removed (PS4), and the difference isn't night and day. The PS4 uses GDDR5 and has more stream processors, yet it's only slightly faster than the XB1 with its DDR3 RAM.
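For context, here's the rough gap those two consoles are working with (the commonly published peak figures, so treat them as ballpark):

[code]
# Rough peak memory bandwidth figures (published numbers, ballpark only)
ps4_gddr5 = 176     # GB/s - 8GB GDDR5 on a 256-bit bus
xb1_ddr3 = 68       # GB/s - 8GB DDR3-2133 on a 256-bit bus
xb1_esram = 204     # GB/s - 32MB on-die ESRAM, peak read+write combined

print(f"PS4 vs XB1 main RAM: {ps4_gddr5 / xb1_ddr3:.1f}x")   # ~2.6x
# The ESRAM narrows the effective gap for render targets small enough to
# fit in 32MB, which is part of why the real-world difference isn't huge.
[/code]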
My hope for this tech is probably pie in the sky: Move RAM from the GPU to the system and then enable HSA on non-APU systems. I.e. a universal system-wide memory architecture.
Remember when Nvidia had the issue with solder joints on laptop GPUs failing over time? Something in the back of my head is telling me HBM may have similar problems down the line.
Hope not, but I'm wary tbh.
Sykobbe, you're right, I read that a few months ago too. It's taken a long seven years to develop this first-gen HBM and they had many issues along the way. Let's see if the higher bandwidth it provides makes a difference while capped at 4GB. I've got a feeling it won't make much difference to initial performance, but rather to cost per watt. Putting two of these in CrossFire should be really interesting.
The big problem with moving RAM away from the GPU is that you have round trips across whatever interface the GPU is connected through to factor in. Even if PCIe 4.0 were available now and GPU manufacturers used the full permissible 32 lanes (which would require Intel and AMD to make those available on their chipsets/motherboards/CPUs), you'd be topping out at around 63GB/s of bandwidth, which is less than the Xbox One has available. And AFAIK PCIe is fairly high latency, so you'd need very good prefetch routines to avoid stalling.
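A quick back-of-the-envelope check on those numbers (PCIe 4.0 signalling rate and encoding as per the spec; the HBM per-stack figure is the commonly quoted one for first-gen stacks):

[code]
# Back-of-the-envelope: PCIe 4.0 link bandwidth vs first-gen HBM
pcie4_gt_per_lane = 16          # GT/s per lane, per direction
encoding = 128 / 130            # 128b/130b line encoding overhead
lanes = 32                      # the full permissible x32 link

pcie4_gbytes = pcie4_gt_per_lane * encoding * lanes / 8   # GB/s, one direction
print(f"PCIe 4.0 x32: ~{pcie4_gbytes:.0f} GB/s")          # ~63 GB/s

hbm_stack_gbytes = 128          # commonly quoted per-stack figure for HBM1
stacks = 4
print(f"HBM1, 4 stacks: ~{hbm_stack_gbytes * stacks} GB/s")  # ~512 GB/s
[/code]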
As for HSA on non-APU systems, AFAIK it already exists: HSA is theoretically hardware-vendor agnostic, as long as the hardware meets the HSA system architecture requirements. It doesn't actually matter how the memory is attached to your devices: the whole point of HSA is to view the aggregate resources in the entire system as one space - one pool of memory, one set of processors - and then allocate tasks to whichever processors are best suited to handle them. It doesn't require memory to be detached from particular devices, as long as those devices let the HSA software access that memory correctly.