Looking at some of the enthusiast forums with people running more than one high-end card, it seems the Core i7 5820K might be avoided, as its PCI-E bandwidth is the same as the cheaper consumer platform. You cannot run two high-end cards at PCI-E 3.0 16X, and it appears the slot assignment is 16X, 8X and 4X too.
It would not surprise me if Broadwell or Skylake increases the number of PCI-E 3.0 lanes on the consumer platform.
Edit!!
IB-E has 40 PCI-E 3.0 lanes available down to the cheapest SKU.
LOL at Intel product segmentation.
At least offer 32 lanes.
Last edited by CAT-THE-FIFTH; 26-08-2014 at 06:36 PM.
You don't need to run two cards at 16x at the moment, and 28 lanes is still way more than the consumer platform, which only has 16 along with some PCI-E 2.0.
With the 5820 you might run 2 cards at 8x (plenty), an SSD at 4x and you'd still have another 8 lanes of PCI-E 3.0 left for fast peripherals.
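For anyone counting, here's a quick back-of-envelope sketch of that lane budget in Python. The 8x/8x/4x split is just the example above, not an official slot map:

```python
# Rough PCI-E lane budget for the 5820K's 28 CPU lanes.
# The allocations below follow the forum example, not an Intel spec sheet.
TOTAL_LANES = 28

allocations = {
    "GPU 1 (x8)": 8,
    "GPU 2 (x8)": 8,
    "PCI-E SSD (x4)": 4,
}

used = sum(allocations.values())
spare = TOTAL_LANES - used
print(f"Used: {used} lanes, spare: {spare} lanes")  # Used: 20 lanes, spare: 8 lanes
```

So even after two GPUs and an SSD there are 8 lanes of PCI-E 3.0 left over, which is the poster's point.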
Yes, but the problem is that companies like AMD and Nvidia are moving towards PCI-E-mediated Crossfire and SLI, which complicates things.
Plus, when you add the cost of DDR4 into the mix, the saving over an IB-E six-core setup is probably non-existent, and for that kind of outlay I don't think 28 lanes is enough. The previous generation had 40, and this is more stupid Intel product segmentation.
So for graphics cards the extra lanes aren't that helpful, fine. What about niche users, though? I'm planning to upgrade my current NAS (just a standard Ivy Bridge board atm) with a 36-port SAS card and a 40Gbit InfiniBand card, and each of those devices eats 8 PCI-E 2.0 lanes out of the 20 I have available (I can't magic up more lanes because the CPU's lanes are 3.0 and the cards are 2.0, and 3.0 cards cost many times more). That leaves just about enough lanes to keep the on-board network and SATA ports running.
The other end isn't too crazy, just another InfiniBand card and a couple of graphics cards, but that alone will use all the PCI-E lanes of a standard CPU, leaving nothing spare in case I add a PCI-E SSD.
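The "can't magic up more lanes" point is worth spelling out: a PCI-E 2.0 card occupies physical lanes at 2.0 speeds, even on a 3.0-capable CPU. A rough comparison using approximate per-lane figures (500 MB/s for 2.0 with 8b/10b encoding, ~985 MB/s for 3.0 with 128b/130b encoding):

```python
# Approximate usable per-lane throughput after encoding overhead:
# PCI-E 2.0: 5 GT/s, 8b/10b encoding    -> ~500 MB/s per lane
# PCI-E 3.0: 8 GT/s, 128b/130b encoding -> ~985 MB/s per lane
PCIE2_MB_PER_LANE = 500
PCIE3_MB_PER_LANE = 985

# A 2.0 x8 card still physically occupies 8 CPU lanes, even though
# a 3.0 x4 link would carry roughly the same bandwidth.
print(8 * PCIE2_MB_PER_LANE)  # 4000 MB/s over a 2.0 x8 link
print(4 * PCIE3_MB_PER_LANE)  # 3940 MB/s over a 3.0 x4 link
```

So two 2.0 x8 cards burn 16 of the 20 available lanes while only moving about as much data as a single 3.0 x8 link could.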
Intel has powerful CPUs coming soon, but building a machine around them is going to be expensive, I think!
Doubt it's worth getting straight away...
DDR4-2133 is a lot faster than DDR3-1600; in fact, it's as fast as DDR3 ever officially went. I suspect a six-core chip won't notice the difference, but it may help feed an eight-core CPU.
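The gap is easy to quantify with the standard peak-bandwidth formula (transfers per second x bus width x channels), assuming the quad-channel configuration this platform uses:

```python
def peak_gb_s(mt_per_s, channels, bus_bytes=8):
    """Theoretical peak bandwidth: MT/s * 8-byte bus width * channel count."""
    return mt_per_s * bus_bytes * channels / 1000  # MB/s -> GB/s

print(peak_gb_s(1600, 4))  # DDR3-1600 quad channel: 51.2 GB/s
print(peak_gb_s(2133, 4))  # DDR4-2133 quad channel: ~68.3 GB/s
```

That's roughly a third more theoretical bandwidth, which is the headroom that might matter for keeping eight cores fed.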
In terms of PCI-E lanes, the low-end chip is just a cost-cutting measure, and the top-end chip is just the status quo.
It could start to be a bottleneck at 4K resolutions, as it'll need to shift a lot more data across the bus.
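As a rough sanity check on that worry, here's a hypothetical worst case: shipping a full uncompressed 4K frame across the bus every frame (AFR-style multi-GPU traffic), compared against approximate PCI-E 3.0 link bandwidth:

```python
# Hypothetical worst case: one uncompressed 32-bit 4K frame per rendered
# frame crossing the bus (illustrative numbers, not a measured workload).
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 60

gb_per_s = width * height * bytes_per_pixel * fps / 1e9
print(f"{gb_per_s:.1f} GB/s")  # 2.0 GB/s of frame data

# Versus approximate link bandwidth (~0.985 GB/s per 3.0 lane):
print(4 * 0.985)  # 3.0 x4: ~3.9 GB/s -- getting tight
print(8 * 0.985)  # 3.0 x8: ~7.9 GB/s -- comfortable headroom
```

On these numbers an x8 slot has plenty of room, but the x4 slot on the bottom of the stack starts to look marginal, which is roughly the concern being raised.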
tehe.
After over a decade of multi-core on the desktop, and over sixty (yes, 60) years of research into practical multi-threaded implementations of general software algorithms (i.e. those not explicitly suited to parallel implementation), we live in a world where most software cannot take full advantage of multi-core CPUs.
The benefits of going from 4 cores to 8 cores are circumstantial. The benefits of a 100% increase in instruction throughput apply to all software.
Instruction throughput is IPC * clock speed.
It's 2014; I think it is time for the next big jump in per-core throughput, for the sake of the PC market if nothing else!
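The arithmetic behind that argument can be sketched with made-up illustrative numbers (not measured figures for any real chip):

```python
def throughput(ipc, ghz):
    """Per-core instruction throughput = IPC * clock (billions of instrs/s)."""
    return ipc * ghz

base = throughput(2.0, 4.0)         # hypothetical baseline core: 8 GIPS
faster_core = throughput(4.0, 4.0)  # double the IPC: every program benefits
eight_cores = 8 * base              # 8 such cores: 64 GIPS on paper, but
                                    # only realised if the software scales

print(base, faster_core, eight_cores)  # 8.0 16.0 64.0
```

The point being made: the `faster_core` gain is unconditional, while the `eight_cores` figure is a best case that most software never reaches.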
Aside from problems that are heavily procedural, most applications that would see a tangible gain from threaded programming have already been optimised as such. And no actual consumer single-threaded application comes to mind that's seriously held back by a single core unable to do better than its turbo boost clock. A current CPU mightn't bench as well as a hypothetical 10.2-jigahurtz monster in some fringe application nobody uses, but in everything else it wipes the floor. And that's to say nothing of the added gains from SIMD and the other extensions added since the P4 room heaters. Even with every other core tied behind its back, a single Devil's Canyon core makes an utter embarrassment of a Prescott.
Which they'll carry on doing by the more efficient means of microarchitecture optimisation, not by winding clocks up to transistor-melting frequencies.
Again, I'm completely flabbergasted that this still needs to be explained to people in 2014.