Because USB 2.0 doesn't have the interference issues that USB 3.0 does with adapters for wireless keyboards and mice, for starters. Unless the port is properly shielded, plugging a wireless adapter into a USB 3.0 port can render it useless.
Charlie says that Skylake is much cheaper to make than Haswell.
Otherwise he hasn't got his Mr Happy hat on, even by Charlie's standards :D http://semiaccurate.com/2015/08/05/i...ake-stupidity/
Good catch. CCL seem to be the cheapest at £289.99 now. One of the reviews I read stated these two chips are going to be in short supply for a while - being charitable, the retailers could just be setting the pricing at a level where they won't run out of supply before their next shipment.
I do wonder how anyone can buy at these prices though. If you want similar performance buy Haswell, Z97, DDR3 and you'll save around a hundred quid. If you need the extra Z170 features such as more PCI-E lanes and DDR4 then buying X99 and a 5820 seems the smarter choice.
Broadwell was only out for 5 minutes. I wonder if we'll see Skylake-E or if they'll skip it and just release Kaby Lake-E or Cannon Lake-E even.
The price has dropped back down on ebuyer to £268 but they are showing they won't have any stock until the 31st.
I wonder how many first time builders are going to get caught out by the retail versions of Skylake chips not coming with a heatsink anymore. Or maybe that just shows how long it has been since I built my last PC.
In general, the 'blame AMD for Intel's lack of progress' line is misinformed nonsense in terms of outright performance, as I've said a few times now. Things like pricing are affected by competition and easy to change; core architectures most certainly are not.
It takes years from start to finish to design and manufacture a processor core; the manufacturer needs to plan years in advance and predict what sort of applications they'll target, and if they get it wrong it takes years to change things as the following designs will also be too far down the pipeline. See Pentium 4 and Bulldozer as examples of where this happened to varying degrees, and for different reasons.
Processor design is essentially pipelined - future designs will be at an earlier stage of development before the current uarch is released, so even if a major issue is discovered on a shipping processor, it may not be rectified until a couple of generations later. A recent example of this is the TSX instruction bug on Intel processors - it was discovered in Haswell but was also present in early Broadwell steppings, and that was a relatively minor bug which would have been fixable later in the production pipeline.
Simply put, the assumption that Intel is simply not bothering to push some performance metric because of competitive reasons makes little sense from a development standpoint and TBH it's probably a little insulting to the engineers working at the likes of Intel. As a company you'd be essentially stuck with that decision for many years to come - you can't just push out an 'oh wait we need something faster now' SKU overnight.
Even for things like core count, last-level cache size and uncore, which can be changed relatively quickly, it would still be on the order of years as the change would have to go through the manufacturing stages.
Random thought....
We hit peak clock speed in about 2004.
Maybe we've hit peak (or near-peak) IPC in 2011-2014(ish).
If so, there's only one direction left: more cores.
This page has various scaling metrics shown on a graph, though it's a bit dated now: http://www.extremetech.com/computing...re-still-stuck
Well, it's better than my P4. I will upgrade, but when?!
I'd wait for the retailer price gouging to subside first, and maybe for the non-k versions unless you want to OC.
Well, IPC and cores are somewhat interchangeable - if your code is multithreaded, you can increase aggregate throughput (instructions retired per clock across the whole chip) by adding cores. If it's not, then IPC increases through other architectural tricks aren't necessarily guaranteed either. Intel do appear to be testing the waters of an AMD-like approach of having the 'GPU' carry out vector stuff, but that could be classed as just increasing the cores for a particular type of problem, I guess.
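To make the "more cores only help multithreaded code" point concrete, here's a minimal sketch (all function names are my own illustration, not from any post above): an embarrassingly parallel job split across worker processes returns the same answer whatever the worker count, and wall-clock time shrinks roughly with core count - whereas a serial workload sees no benefit at all from the extra cores.

```python
# Sketch: aggregate throughput scales with cores only when the workload
# can be split. Names here (count_primes, primes_below) are illustrative.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division - pure CPU work."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def primes_below(limit, workers=1):
    """Split [0, limit) into one chunk per worker and sum the results."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else limit)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    # Same result regardless of worker count; on a multi-core CPU the
    # 4-worker run finishes in roughly a quarter of the time.
    print(primes_below(100_000, workers=1))
    print(primes_below(100_000, workers=4))
```

Note this only works because the chunks are independent; a serial dependency chain (each iteration needing the previous result) gains nothing from more cores, which is exactly why per-core IPC and clocks still matter for single-threaded code.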