WARP10, a software rasterizer found in Windows 7, demonstrates the rendering ability of powerful CPUs. But what does it tell us?
This could turn out to be useful when products like AMD's Fusion start to ship.
It does mean they could use simple DX10 effects on the desktop without requiring a DX10-capable graphics card in the machine...
And presumably it could be easily adapted to boost/augment the GPU in new chips which integrate one.
"If a CPU with 8 virtual cores can achieve DirectX 10 framerates of around 7fps, what might a many-core GPU with texture sampling units be able to throw out?" This is drawing a parallel between a Penryn/i7 core and a Larrabee core that won't be there.
Run it on an Atom 330 (2 cores) and see how it fares against a Core 2 Duo (2 cores) when both are clocked at 1.6GHz. What? Not so well? But they both have 2 cores! This is so confusing!
I just see it as a failsafe.
Now, it doesn't matter what video card you have, nor whether you have installed (the correct) drivers... Microsoft can use DirectX features wherever they want.
I foresee the desktop getting very 'active'.
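In practice that fallback is exposed pretty cleanly on the application side. A rough sketch, assuming the Direct3D 11 headers that ship alongside Windows 7 (the API calls are real; the wrapper function itself is just illustrative and untested):

// Try the real GPU first, then fall back to WARP, the software rasteriser.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

HRESULT CreateDeviceWithWarpFallback(ID3D11Device** device,
                                     ID3D11DeviceContext** context)
{
    D3D_FEATURE_LEVEL level;

    // Hardware device first: uses whatever GPU and driver are installed.
    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                   NULL, 0, D3D11_SDK_VERSION,
                                   device, &level, context);
    if (FAILED(hr))
    {
        // No usable GPU or driver: WARP runs the same D3D calls on the CPU.
        hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
                               NULL, 0, D3D11_SDK_VERSION,
                               device, &level, context);
    }
    return hr;
}

If the hardware path fails, the same shaders and draw calls simply run on WARP's CPU rasteriser, which is presumably why Microsoft can count on DirectX features being available everywhere.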
But if we could run graphics on Fusion's CPU cores, then all we would need is a kernel that can run on its GPU cores!
I must sell this idea to Intel, they will LOVE it.
You joke, but in 5 years or so the GPU will be to the CPU what the FPU is to the ALU... Once it didn't exist, then it was a separate processor, then it was on the same chip, then people forgot the two were ever separate.
Spot on.
This development has very little to do with high-end gaming, and everything to do with the basic desktop experience.
With Vista, there were two levels of branding - one for systems with a GPU, and one for those without. The much-touted and heavily promoted "Vista experience", however, was only available on the fancier, GPU-equipped systems. This was because the enhanced desktop features ran via DirectX.
Now, it seems, Microsoft has realised this problem, and also noted that if a CPU rasteriser can hit 7 frames a second in games, it would be able to cope with advanced desktop rendering without any real trouble.
More importantly, Windows 7 sees the launch of two new DirectX APIs, Direct2D and DirectWrite, which aim to supersede the old GDI drawing system and provide enhanced text rendering. Both use Direct3D behind the scenes, yet offer a substantial increase in quality, far greater than what the Aero desktop offered.
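For anyone who hasn't looked at it yet, here is a rough sketch of what the Direct2D side looks like (assuming the Windows 7 SDK headers; error checking is omitted and the window handle and colours are just placeholders):

#include <windows.h>
#include <d2d1.h>
#pragma comment(lib, "d2d1.lib")

void DrawDemo(HWND hwnd)
{
    // The factory is the entry point for all Direct2D resources.
    ID2D1Factory* factory = NULL;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory);

    // A render target bound to an ordinary window.
    RECT rc;
    GetClientRect(hwnd, &rc);
    ID2D1HwndRenderTarget* rt = NULL;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(
            hwnd, D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top)),
        &rt);

    // Brushes replace GDI pens and brushes; output is anti-aliased,
    // with Direct3D doing the work behind the scenes.
    ID2D1SolidColorBrush* brush = NULL;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue), &brush);

    rt->BeginDraw();
    rt->Clear(D2D1::ColorF(D2D1::ColorF::White));
    rt->FillRectangle(D2D1::RectF(20.0f, 20.0f, 220.0f, 120.0f), brush);
    rt->EndDraw();

    brush->Release();
    rt->Release();
    factory->Release();
}

Compared with the equivalent GDI code, you get anti-aliased, Direct3D-backed output from roughly the same amount of plumbing.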
As a developer, I am not surprised by these moves. The latest DirectX APIs are extremely easy to use, and there are many benefits to be had in switching developers over to these new systems. Not only do they make software easier to produce, but they result in higher-quality products, further strengthening the Microsoft Windows platform. Now, it seems, Microsoft are planning to make life easier for both consumers and developers by attempting to eliminate much of the confusion over whether a software package will run on a given system. Games will likely still have minimum specifications, but general software should hopefully be free to dip into more advanced features as needed.
Hello,
I thought the idea with Vista was to use the GPU to offload some of the demands,
so the CPU has less to do?
Now they are creating technologies that use the CPU more?
Maybe at some point we will see a processor with both GPU and CPU on one chip.
Absolutely.
And just in the way the SX chips had software to emulate the DX's FPU - this is just that. MS don't have the luxury the Mac has of demanding expensive hardware. They have to please their industry partners and fundamentally provide a low price.
Technology like this will really help the lowest common denominator.
It's been mooted for a while, but I'm not sure your timescales ring true - it may take a little longer. I think discrete graphics have a fair old life in them yet, particularly at the high end (gamers). How much is a CPU going to cost if it has the same capabilities as a high-end GPU? Will you be happy about swallowing a much larger pill when upgrade time comes around (in one go, anyway)? How long until system memory has the same kind of bandwidth that high-end GPUs feature (on board) today?
How big's the die going to be?
The truth is, it's pretty hard to buy a non-DirectX 10 card incapable of the (relatively) simple acceleration that Vista requires (not to mention that 9.0c is the actual requirement). And they're dirt cheap. I like the idea of an integrated CPU/GPU for low-end or specific requirements (versus onboard graphics), such as a media centre needing low-power, GPU-assisted HD decoding, but not so much for the main rig. So yes, sure, for your bog-standard stuff - but for mid to high-end gaming?
The only manufacturer making a non-Aero GPU now is Matrox, isn't it?
what is " Intel DX10 Integrated "? did they forget what they were using?
And charging an absolute fortune for it.
But given that Dell and HP ship boards with 2 PCIe slots as pretty much standard across the Xeon range (a staple of the 4-TFT-user industries - finance, medical etc.), I wouldn't expect their sales to keep going at £400 when you can get 2 better cards at £50 each....
I disagree. GPUs are generally a process node behind CPUs, and yes, high-end GPUs are gigantic in terms of transistors, but so are six-core processors.
Now I do agree that there'll still be an add-in card to be bought for high-end use. But in 5 years it won't be a GPU. Already the 'GPU' does more than just graphics, so in 5 years who knows what they'll market it as.
Care to take a bet on that? I would. I'll warrant we'll still have mid to high-end GPU cards being sold as such in five years. Look at the size of the GTX - now think about coupling that with a multi-core CPU (presuming you're not suggesting GPUs will be generalised enough to run out-of-order tasks in five years) and then solving the inherent bandwidth problem. Will GDDR sit on the motherboard? What bus will take care of that? Will it be separate from the FSB/system memory (which is far too slow even in DDR3 guise)? OK, scale it back - how about the 260 instead? That's mid-range. Is the transistor count going to decrease, or increase? Why haven't AMD produced a 4870-level GPU/CPU?
Why is Intel taking quite so long to get to market with its fusion product? What's the transistor count for a six-core, by the way? I've not seen mention of it as yet - greater than 1.4 billion?
So what can I do with my GPU today? Well, in-game physics is (frankly) nowhere - the amusing thing is we need a heck of a lot more GPU power to render those detailed physics simulations, even discounting the need to split the GPU's job between physics and graphics. Realistically, dual-GPU solutions don't make sense unless they're multi-core, and far better resolved than SLI or CrossFire - and we've still to see even that. That leaves me with what? Folding? A Photoshop filter? Neither is relevant to me. Then again, we have HD decoding - ideal when you've got low-cost, entry-level systems with low power/cost envelopes. And that's why I said, originally, it made a lot of sense at the low end. Heck, I'd definitely think about it for my next media centre build.
So let's look at it from the programmer's point of view - what have I got? Well, competing standards for a start. So what does that mean? Yep - we need physics rolled into DirectX. When's that scheduled for, exactly? How about other, more generalised processing? We're in the early stages of seeing even multi-core CPUs being used in worthy fashion. I quite agree that, given the willingness, it could all happen in five years, but that's ignoring the commercial factor and the roadmaps of those involved. Yes, things move fast in semicon, but these are still commercial entities with much self-interest. Visit a few and it becomes rather apparent - especially when some of them are fighting for survival right now.
Remember, I question your given timescale - not the inherent transition. And, as I also said, I'd stick money on it.
[written whilst at home with flu, so excuse me, I'm so poorly! aww!]