Sources say that Samsung lost in its bid for part of the manufacturing action.
I wonder, will using 16nm instead of 14nm cause any problems?
Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
Laptop: Dell Precision 5510 || Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive
Do we know who AMD are using for their next gen GPUs yet?
Would I be guessing right that they may be going with GF 14nm? If so, it should make for some interesting comparisons.
It'll be late then, and with low yields - because TSMC have been so reliable on new processes, haven't they?
I don't remember any credible claims that Samsung would be making Nvidia's GPUs anyway, only nebulous 'Samsung making Nvidia stuff' which was far more likely to be related to their Tegra processors. But considering Qualcomm (whose mobile capacity completely dwarfs Nvidia's) may be moving a large amount of capacity to Samsung for their FinFET APs and Nvidia have decided to try suing Samsung, I'm not sure they'd be first in line for that either.
With regard to 14nm vs 16nm, really don't pay too much attention to it - they're just names for the nodes and bear less and less relationship to any actual measurements. For example, Samsung/GloFo's 20nm is far denser than Intel's 22nm, but Intel's 14nm is considerably denser than Samsung/GloFo's 14nm. And there's more to a node than just transistor density; indeed, both TSMC and Samsung/GloFo have at least two versions each of their FinFET nodes.
I hope these new chips are fully DX12 compatible.
They are? I thought some guy from AMD said no current GPU is fully DX12 compatible.
"Compatible" is very different from "fully compatible" though: the former just means able to be used with a specified piece of equipment or software without special adaptation or modification; the latter means compatible completely or entirely, to the fullest extent.
I still don't understand why DX feature levels are getting so much attention for DX12, I don't remember people worrying or complaining so much about 11_2 (which Nvidia never supported)/11_1 vs 11_0, 10_1 vs 10_0, etc. They're just an arbitrarily selected set of features which might be useful for developers in some cases, stuck into a table by MS. Consumers really needn't worry about it, at least not to the extent it seems to be getting publicised lately.
Feature level != DX version support. Cards going back a few generations from both Nvidia and AMD support DX12 (with FL <12_0); they won't all implement the more recently specified feature levels of course, but they can still use and benefit from the newer API. 'Fully support' is a bit of an ambiguous term, the meaning of which can be mangled one way or another as it has indeed been for marketing reasons on both sides of the argument. From my perspective it appears both brands 'fully support' DX12 in the same way that the 5870 and 480 'fully support' DX11, not claiming they don't because they're 'only' FL 11_0. It's really that silly.
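To make that concrete, here's a minimal sketch (untested, assumes the Windows 10 SDK, linking d3d12.lib) of how D3D12 itself treats this: device creation only demands feature level 11_0 as a floor, and the highest level the adapter actually offers is a separate query.

#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // D3D12 only requires FL 11_0 to create a device, which is why older
    // FL 11_0 cards can still run the DX12 API without being FL 12_0/12_1.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::puts("No D3D12-capable adapter found.");
        return 1;
    }

    // Ask the driver which of these levels the adapter actually supports.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels = sizeof(levels) / sizeof(levels[0]);
    info.pFeatureLevelsRequested = levels;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &info, sizeof(info))))
        std::printf("Max supported feature level: 0x%X\n",
                    info.MaxSupportedFeatureLevel);
    return 0;
}

Whether that counts as "fully supporting DX12" is exactly the marketing argument above: the API runs fine at 11_0, you just don't get the optional 12_0/12_1 features.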