Frankly, and no offense meant here, that's nonsense. Even an undergrad CS student can program for multiple threads - it's really quite easy in most modern languages. The issue is that many tasks either can't be parallelised, or don't benefit from it: you reach a point where the overhead of managing extra threads outweighs any gain from the added parallelism.
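Just to put the overhead point in concrete terms - this is a rough Python sketch of my own, not anything from a real engine, and the names (tiny_task etc.) are made up - handing lots of tiny bits of work to a worker pool can easily end up slower than just doing them in a plain loop, because the dispatch cost dwarfs the work itself:

[code]
# Illustrative only: when each unit of work is tiny, the cost of handing it
# to a worker pool can outweigh the work, so "more parallelism" is a net loss.
import time
from concurrent.futures import ProcessPoolExecutor

def tiny_task(x):          # made-up stand-in for a very small unit of work
    return x + 1

if __name__ == "__main__":
    data = list(range(10_000))

    start = time.perf_counter()
    serial = [tiny_task(x) for x in data]          # plain single-threaded loop
    print(f"serial:   {time.perf_counter() - start:.3f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel = list(pool.map(tiny_task, data)) # same work, plus dispatch overhead
    print(f"parallel: {time.perf_counter() - start:.3f}s  (typically slower here)")
[/code]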
Plus, some tasks are by their nature serial - you have to go through each step one at a time (usually because the next step depends on the outcome of the previous one), so there's no way to break that down and run segments at the same time. Trying to multithread those tasks would be utterly counter-productive.
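And for the serial case, a trivial sketch (again just my own Python illustration with a made-up function name, nothing from the thread): each pass through the loop needs the value produced by the previous pass, so there's simply nothing to hand to a second thread.

[code]
# Illustrative only: each iteration depends on the previous result, so the
# loop cannot be split across threads without changing the algorithm itself.
def running_state(values):
    state = 0
    for v in values:
        # Step N can't start until step N-1 has produced 'state'.
        state = (state * 31 + v) % 1_000_003
    return state

print(running_state(range(10)))
[/code]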
So it's really not as simple as "more threads is better" - it depends on the task in hand. And given that quad-core desktop processors have been available for pushing five years (and multiprocessor workstations for a lot longer), if heavily threading game engines produced a significant benefit a lot more people would have developed heavily threaded games.
You also get a small speed bump over the 2500K (100MHz), but more importantly you get Hyper-Threading, so if you're going to use programs that use the additional threads, go for the 2600K.
2500K vs E3-1230? I think those are more evenly matched... the 2600K is obviously better than the 2500K.
They are totally different items aimed at different segments of the market.
The Xeon has a lower TDP of 80W as opposed to 95W found on the 2500K.
The Xeon has ECC support whereas the 2500K doesn't, and the other major difference is that the Xeon has no Intel HD graphics built into the CPU at all.
The Xeon also has support for Intel vPro Technology, Intel Virtualization Technology for Directed I/O (VT-d) and Intel Trusted Execution Technology, whereas the 2500K doesn't.
For more information I'd point you towards Intel's ARK site, which has an easy-to-view comparison:
http://ark.intel.com/compare/52210,52271
I prefer the 2600K. It's a good price for what it is, I guess, so going with the 2600K doesn't seem a bad option.
Now that that's decided, the question is whether it's worth waiting for the Ivy Bridge processors. I heard the prices are supposed to be pretty similar to what's out there at the moment. Considering my old rig has lasted four years or more (probably more), what's a few more months?
Will the newer Ivy Bridge chips be any better for gaming? Or is this yet to be analysed?
I would wait for Ivy Bridge, because either it will be better or Sandy Bridge will get cheaper - and isn't IB coming out in a few months anyway?
Totally agree