With Apple supposedly moving to TSMC, what happens to its incumbent customers?
The problem is that while Apple may buy a lot of wafers, they are still not most of the market. So TSMC is playing a very dangerous game in the long run. They risk losing not only the not insubstantial custom of both AMD and Nvidia, but also that of the many other companies which design SoCs for other devices.
The other fabs are probably rubbing their hands in glee. TSMC had better hope their current process does not have the problems and delays the 40nm one did, otherwise the competition will catch up or expand their capacity (they are not the only company working on 28nm and smaller processes).
I suppose in one way AMD may not be as badly off, as they do have experience of building Llano on another process.
Last edited by CAT-THE-FIFTH; 09-09-2011 at 05:31 PM.
GF and UMC will likely both benefit - especially since UMC announced:
http://www.eetimes.com/electronics-n...venues-fab-use
a decline in fab use and revenue - they might now even have under-used or idle lines, which makes for cheaper offers to AMD and NV if they want to move.
Well we should just do away with discrete graphics cards. Bung a decent one in with the CPU and keep the development rate stable over a 5 year period so there's some level of performance parity across the market.
TSMC also stated a similar thing recently:
http://www.xbitlabs.com/news/other/d...e_in_2011.html
I seriously doubt Samsung would be bothered by any of this, as crApple sues them for anything and everything, and I doubt AMD will be upset because they could use GloFo for everything if they wanted...so that really leaves NV and the other SoC people struggling...which is what Apple want, of course.
Old puter - still good enuff till I save some pennies!
The problem with that is you would have way too many different dies for fabs to produce, or alternatively one huge 4 billion transistor die with a massive TDP and low yield - it's just not cost effective for anything other than the sort of IGP Llano has.
Nowhere near, consoles are very underpowered even compared to modern IGPs - they run with low settings/low resolution and low framerates are considered acceptable.
Going off http://www.hexus.net/content/item.php?item=30964&page=8, even though Black Ops gets 60fps at 720p, it's at low/medium settings, so it's not that much better than the console settings - and this is the best IGP around. Although thanks for making me look it up; for some reason I had it in my head that they were at most half of that.
Still, though, on the other thing he mentions - "keep the development rate stable over a 5 year period so there's some level of performance parity across the market" - games development would suffer, as developers wouldn't be able to push the envelope and implement newer tech to give a better game.
Firstly, consoles don't render all games at 720p and usually upscale the image. Secondly, most console games are not running at 60FPS; many seem to run at around 30FPS.
Many slightly older games will run at 1680x1050 at around 30FPS to 40FPS, it seems, even on an HD6550D.
Left 4 Dead (Latest Update)
Resolution: 1680x1050
Filtering: 0X AA / 0X AF
Graphic Settings: Medium
Shader Detail: Medium
Test 1: HWC Custom Timedemo
Comparison: FPS (Frames per Second)
World in Conflict v1.010
Resolution: 1680x1050
Anti-Aliasing: 0X
Anisotropic Filtering: 0X
Graphic Settings: Medium (DX10)
Test 1: Built-in Benchmark
Comparison: FPS (Frames per Second)
X3: Terran Conflict 1.2.0.0
Resolution: 1680x1050
Anti-Aliasing: 0X
Anisotropic Filtering: 0X
Graphic Settings: Medium
Test 1: Built-in Benchmark
Comparison: FPS (Frames per Second)
Last edited by CAT-THE-FIFTH; 11-09-2011 at 08:54 PM.
Different dies isn't an issue - no need for extra dies, you just standardise some level of performance. Size of die... maybe, though I think current designs are actually pretty efficient and you can produce a decent-performing chip with a very reasonable die size. Anyway, they're moving to bigger wafers.
How long is it since the X360 tech changed? Yet games revenue on that system is many, many times higher than on the PC. A stable development rate allows devs to code for a consistent performance level and eke the most out of it, with far lower QA and performance concerns.
But what would this 'standard' be? You get decent entry level gaming performance with Llano, but there's no way it's going to replace the high-end discrete market - if these hypothetical CPUs had anything other than top-end performance, there will still be a market for discrete cards. And as I said, putting a huge high-end GPU on to the CPU die is completely infeasible for now. Regardless of the size of the wafer, TDP and yield would be massive problems - CPU die size has barely changed from the P3 days, even gone down in some cases. Also a 300W TDP CPU isn't going to be terribly easy to accommodate either thermally or electrically.
Sure, you can beef up IGPs, which is where AMD is going, but it's not going to dent anything other than the low-end GPU market for the time being.
TBH, a discrete GPU release slowdown might be a good thing IMHO. Maybe it would mean PC games were better optimised.
Who knows? Why not target next gen console specs for starters.
We have 300W GPU chips already, but there's no need to put a 580 on there. There are great CPUs around the 65W mark; add in a 200W GPU portion and you've got less than even current-gen GPUs. I think a <300W CPU+GPU combination is entirely feasible with great performance, certainly in the future.
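The power budget above is just back-of-the-envelope arithmetic; a quick sanity check (the 65W CPU and 200W GPU figures are the post's illustrative assumptions, not real product specs):

```python
# Hypothetical TDP budget for a combined CPU+GPU part.
# The 65W and 200W figures are assumptions from the post, not real specs.
cpu_tdp_w = 65    # a decent desktop CPU
gpu_tdp_w = 200   # a strong, but not flagship, GPU portion

combined_tdp_w = cpu_tdp_w + gpu_tdp_w
print(combined_tdp_w)          # 265
print(combined_tdp_w < 300)    # True - fits within a 300W envelope
```

That leaves roughly 35W of headroom under a 300W envelope, which is in line with the claim that such a part undercuts a top-end discrete GPU alone.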
Possibly, but a slowdown from one side would be an opportunity for the other to gain a monopoly.
As I said, one manufacturer would see an opportunity to produce high-end GPUs without competition, and console specs wouldn't keep PC gamers satisfied for long. Powering/cooling a CPU is different to a GPU - for starters, GPUs are happy running much hotter, and with the trend towards lower and lower power CPUs, a beefy GPU wouldn't be a welcome addition to the mainstream. I can see it being feasible in the future, but not the near future; rather, I see CPUs and GPUs eventually becoming one and the same thing (despite AMD's APU terminology, there's still a discrete GPU and a CPU on the die, with separate programming languages for each). Intel has some serious catch-up work to do on the GPU side though!