I have spare CPU cycles (in spades!) and no free GPU cycles.
You can only get so much out of SMP in games (3 of the 4 games I run are SMP aware). The main grunt they require is gfx processing.
Pushing the resolution is the natural progression; everyone wants higher resolutions and more "omgzwtflolz settings". So where are you going to find spare cycles on the GPU when the graphics card is going to be pushed harder and harder with every new game?
The cycles will be found on the CPU, whose power is outstripping the rate at which developers can actually use it.
I didn't disagree with any of that - what I did say is that rendering all those wonderful effects means more GPU load, which means less time for physics. What I also said was that the resource with the more obvious headroom in 2008 for doing it is the CPU. It would be far better to have a physics engine that can use SMP or ANY GPU than a closed system tied to one GPU vendor.
No, I don't. Pretty sure I didn't say that. How many years has Havok been around, for example? Loads.
You're replacing the PPU with an extra GPU in 2008 - and therein lies the problem. Especially when you've got another processor (or several) lying around that could do the job (as they're doing bugger all else).
I think we're talking at cross purposes here - I agree with you almost all the way - but I'm not likely to be impressed by clever physics if my GPU is rendering it at 5fps versus simple physics at 25fps. The GPU simply isn't an underused resource in gaming right now; the CPU is. Maybe the GTX 280 will be so powerful as to have all my eye candy on at 1600x1200 AND have tons of units left over for physics too (and be designed in silicon to do the latter) - but I kinda doubt it. I rather hope I'm wrong - I'd love Crysis to be patched for nPhysics and run at 100fps. Then again, if the AMD card is faster for graphics I'm screwed, right?
lol, at least two things I mentioned are incontestable!
Or are you going to tell me my resource monitor is wrong and my CPU is constantly maxed out? And that my graphics card is not being pushed to its limits, even though the only way to increase performance is to overclock it? Or perhaps you know of millions of people who want stagnation in graphics, rather than advancement and better, more realistic effects?
If you want to be taken seriously, at least explain yourself rather than sounding like a know-it-all-but-can't-be-bothered-to-explain-it forum troll.
Sorry, from your tone you seemed to be of that impression, but of course you didn't explicitly say that.
I agree that CPU cycles are underutilised at the moment, but those spare ticks could rapidly be filled by significantly better AI. SP may actually become fun again if games developers stop thinking MP is the be-all and end-all of gaming.
I completely agree that anything under 25-30fps is a bit nasty, but when you have people going out and buying quad CrossFire and whatnot to get 100fps, it's ridiculous; it's wasted GPU cycles like these which could be better used processing physics. The main problem is that as soon as any GPU performance advancement comes along, it's immediately swallowed by bigger, more bloated textures and hardly noticeable visual effects. I mean seriously, the massively parallel architecture of modern GPUs puts Sony/IBM's Cell architecture to shame, yet games developers can't run multiple discrete jobs in parallel, or even dynamically alter the graphical workload to maintain a certain framerate?..
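To put the "dynamically alter the graphical workload" idea into concrete terms, here's a rough, purely illustrative sketch in C of a frame-time driven quality control loop. The render_frame() and timer here are stand-in stubs (not any real engine or API); a real engine would drop resolution, LOD or effects rather than burn CPU time:

```c
/* A minimal, self-contained sketch of framerate-driven quality scaling.
 * render_frame() here only simulates per-frame cost; in a real engine it
 * would draw the scene at the requested detail level. */
#include <stdio.h>
#include <time.h>

static double get_time_ms(void)
{
    return (double)clock() * 1000.0 / CLOCKS_PER_SEC;
}

/* Hypothetical stand-in for the renderer: burns time in proportion to the
 * requested quality so the control loop has something to react to. */
static void render_frame(float quality)
{
    volatile double sink = 0.0;
    long iterations = (long)(quality * 2000000L);
    for (long i = 0; i < iterations; ++i)
        sink += (double)i * 0.5;
}

int main(void)
{
    const double target_ms = 1000.0 / 30.0;  /* aim for roughly 30 fps */
    float quality = 1.0f;

    for (int frame = 0; frame < 200; ++frame) {
        double start = get_time_ms();
        render_frame(quality);
        double frame_ms = get_time_ms() - start;

        /* Shed detail quickly when over budget, add it back slowly. */
        if (frame_ms > target_ms && quality > 0.1f)
            quality -= 0.05f;
        else if (frame_ms < target_ms * 0.8 && quality < 1.0f)
            quality += 0.01f;

        printf("frame %3d: %6.2f ms, quality %.2f\n", frame, frame_ms, quality);
    }
    return 0;
}
```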
One of the big ironies of the computer market is that we have reached a point where we are limited mainly by the average installed hardware, rather than by the technical ability to produce fancy effects. When discussing this, we must remember that the average gamer is not a money-laden enthusiast, but rather someone using a system that is likely two years old. The average hardware lags behind the enthusiast's by a fair margin.
With respect to graphics, this means that the effort needed to really use all of a 9800X2's power is, on the whole, a wasted investment. Few people have such a fancy graphics card, and the key sales period for a game is typically its first year, within which a significant enough increase in the average kit is unlikely to occur. As such, the core audience will never see the additional work put in to maximise graphical fidelity on top-end kit. There are some good reasons for such an investment, but the majority of games will avoid this route. The players themselves do not help, either, since many insist on playing their games at "ultra high" settings and blame the game if this is not possible, rather than, more rightly, their system. We all want to feel like we have a powerful and competent system, even if it isn't.
The field of 3D graphics rendering is well understood now, and it is very much the hardware that holds games back. One need only look at games like Crysis, modded Oblivion, or FSX to see how easy it is for developers to make an engine that will simply eat up all the graphics processing power you offer it.
The calculations needed for 3D scene rendering, whatever technique you choose, are heavily based upon floating point vector mathematics. The great thing about this field of calculation is how parallel the majority of the maths is, and how few basic operations it needs to perform all the fancy work. This is a key reason why GPU throughput has risen far more dramatically than CPU throughput. CPU designs must be able to deal with any series of inputs, whereas GPUs in general know that they will be fed a great deal of relatively simple floating point calculations (and a few bits of housekeeping, of course).
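As a purely illustrative sketch (plain C, nothing vendor-specific), this is the shape of the work being described: the same handful of multiplies and adds applied independently to thousands of vertices, which is exactly what lets a GPU throw hundreds of stream processors at it in parallel:

```c
/* Illustration only: transforming many vertices by one 4x4 matrix
 * (column-major).  Each output depends only on its own input, so every
 * iteration could run on a separate GPU stream processor at the same time. */
#include <stddef.h>
#include <stdio.h>

typedef struct { float x, y, z, w; } vec4;

static vec4 mat4_mul_vec4(const float m[16], vec4 v)
{
    vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

static void transform_vertices(const float m[16], const vec4 *in, vec4 *out, size_t count)
{
    for (size_t i = 0; i < count; ++i)   /* no iteration depends on another */
        out[i] = mat4_mul_vec4(m, in[i]);
}

int main(void)
{
    /* Identity matrix and a single vertex as a smoke test. */
    float identity[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    vec4 v_in[1]  = { { 1.0f, 2.0f, 3.0f, 1.0f } };
    vec4 v_out[1];

    transform_vertices(identity, v_in, v_out, 1);
    printf("(%.1f, %.1f, %.1f, %.1f)\n", v_out[0].x, v_out[0].y, v_out[0].z, v_out[0].w);
    return 0;
}
```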
Because CPUs may be fed all kinds of instructions to run any number of different programs, they have had to stay as generic as possible, focusing first and foremost on the demands of the most commonly used types of programs. These are, as a rule, not computer games, but the regular operating system, internet browsers, and office-style software. None of these have typically needed much floating point processing power, and they generally see performance improvements from optimising the processing of regular integer operations. This is why Core 2 can execute three integer operations per clock, but at most two floating point operations. Furthermore, after a scan (not exhaustive, so I could be wrong), it seems that although the various SIMD additions to x86 are impressive, none of them offer in a single instruction some of the essential vector mathematics operations that graphics cards excel at. Your CPU can only act like a single one of the stream processors within the graphics card, so even if it could properly communicate, access the needed memory, and share effort with the graphics card (which I shall not even get into here, but needless to say, that is a major issue with all of this), it would only offer a small percentage of extra performance on top of what the GPU already offers.
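For illustration, this is roughly what the SSE additions to x86 buy you: one intrinsic operating on four floats at once. Treat it as a sketch only; the point is that four lanes on the CPU is a far cry from the hundreds of stream processors a graphics card brings to the same sums:

```c
/* Rough sketch of x86 SIMD (SSE): one instruction works on 4 floats at a
 * time.  A 2008-era GPU applies the equivalent across hundreds of stream
 * processors per clock, which is the gap described above. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
    float out[4];

    __m128 va = _mm_loadu_ps(a);                      /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(_mm_mul_ps(va, vb), va);   /* 4 multiply-adds in two instructions */
    _mm_storeu_ps(out, vr);

    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```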
If you are finding that your GPU is running at full tilt and your CPU is not, the only practical solution (without turning down the settings) is to get a better graphics card. Your CPU has no need to run any faster - at its current speed, the GPU is only just able to process all the requested actions. Considering how much more powerful the GPU is at its job than the CPU, the poor CPU has in fact done its absolute best; if the GPU cannot cope, the CPU simply could not manage either.
However, in other forms of processing, the CPU still has a lot of life left in it. Contrary to the beliefs of some, computer games are often highly parallel. It is not the games that cause problems with using all available cores and SMT on a processor, but once again the installed user base - and, in fact, the capabilities of even the best processors currently on the market are too low. I talked about this in more depth in another thread, and I would recommend you read that post for more on the challenges with SMT.
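As a rough, purely illustrative sketch of that parallelism (plain C with POSIX threads; the subsystem functions are empty stand-ins, not real engine code), the non-rendering work of a frame can be handed to separate cores while the main thread feeds the GPU:

```c
/* Sketch of task-level parallelism in a game frame: independent subsystems
 * (stubbed out here) run on separate cores while the main thread would be
 * submitting draw calls.  Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static void *run_ai(void *arg)      { (void)arg; /* path-finding, decisions */ return NULL; }
static void *run_physics(void *arg) { (void)arg; /* collision, integration  */ return NULL; }
static void *run_audio(void *arg)   { (void)arg; /* mixing, effects         */ return NULL; }

int main(void)
{
    pthread_t ai, physics, audio;

    /* One simulated frame: kick off the work that doesn't need the GPU... */
    pthread_create(&ai, NULL, run_ai, NULL);
    pthread_create(&physics, NULL, run_physics, NULL);
    pthread_create(&audio, NULL, run_audio, NULL);

    /* ...while this thread would normally feed the GPU the previous frame. */

    pthread_join(ai, NULL);
    pthread_join(physics, NULL);
    pthread_join(audio, NULL);

    printf("frame complete\n");
    return 0;
}
```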
Similar to what I discussed in that thread regarding SMT, a problem exists for advanced physics within games. This is not because it is too hard, or because there are no applications for such detailed and complex things, but because it is simply pointless putting them in at the moment. If you look at some of the games slated for release in the coming year, and at the PhysX demo levels for UT3, it is quite apparent that there are real applications for advanced physics within games that can offer new and interesting gameplay. However, just as with graphics cards and processor cores, the current installed market sucks. Physics calculations, it must be remembered, are very similar indeed to those a graphics card exists to perform. As such, although they can and will often make use of available CPU cores (but see my comments on the current state of SMT development), they would be much more effectively run on hardware similar to a graphics card.
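To illustrate that similarity (again a purely illustrative C sketch, not how any particular engine does it): a basic particle integration step has exactly the same shape as the vertex transform earlier - the same few float operations applied independently to thousands of items, which is why it maps so naturally onto graphics-style hardware:

```c
/* Illustration of why physics maps so well onto graphics-style hardware:
 * a simple Euler integration step applied independently to every particle. */
#include <stddef.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 pos, vel; } particle;

static void integrate(particle *p, size_t count, vec3 gravity, float dt)
{
    for (size_t i = 0; i < count; ++i) {          /* every particle is independent */
        p[i].vel.x += gravity.x * dt;
        p[i].vel.y += gravity.y * dt;
        p[i].vel.z += gravity.z * dt;
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }
}

int main(void)
{
    particle cloud[1000] = { { { 0.0f, 10.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } } };
    vec3 gravity = { 0.0f, -9.81f, 0.0f };

    integrate(cloud, 1000, gravity, 1.0f / 60.0f);
    printf("particle 0 at (%.3f, %.3f, %.3f)\n",
           cloud[0].pos.x, cloud[0].pos.y, cloud[0].pos.z);
    return 0;
}
```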
In terms of the current state of the market, physics systems really are stuck in a very hard place. As discussed, only the enthusiast market has any spare graphics card power, and it forms a small minority of the total gamer market. Without that spare power, games simply will not attempt physics systems as complex as is theoretically possible. And without games featuring complex physics, gamers will not see any reason to spend more on graphics cards that can truly do amazing physics simulations. But unless the player base as a whole has that spare capacity, developers will never risk using it.
There is, however, a solution to several of these issues, though it will take a couple of years for its effects to change the face of the majority of computer games. As some game developers are happy to pander to the enthusiast gamer market, GPU manufacturers know that there is at least some small demand for better physics processing, and that the enthusiast market really isn't too fussy about cost or, frankly, about being sensible (£500 graphics cards, anyone?).
The GPU manufacturers, wanting to take advantage of that enthusiast market and to prepare for the market a couple of years down the line (remember, old high-end cards or designs often become the new mid-range offerings), always want to increase the performance of their hardware. As already stated, they have had great success so far by embracing superparallelism. However, there comes a point where, due to distances, support features, and testing needs, it becomes increasingly hard to add more parallel units to a design. We have already seen this with CPUs, which moved to dual core and beyond by no longer developing ever more complex cores and instead placing several simpler cores together (rather than having to integrate more powerful sub-parts fully into an existing design, they only need a little glue to join two complete components).

The GPU market has attempted this move to some degree over the years with SLI, Crossfire and then X2 cards, but there is a penalty for going off-chip and back onto another. Despite the rumours that nVidia's next offering will be a "dual core" GPU, such a design will still suffer from problems similar to those found with SLI setups - the current rendering systems work amazingly well within a single discrete graphics core, but things get a little strange when you try to add more cores processing the same data. So it is unlikely to be easy or immediately productive to add additional graphics cores to a GPU; however, there is another, very similar component they could drop in instead. Because the systems needed for physics are so very similar to those for graphics processing, existing graphics core designs could be modified and optimised for physics processing. As this extra physics core would have its own means of access, separate from the rendering system (though it may be able to affect rendering data, depending upon ambition), it would not suffer from the hiccups that SLI-type additions do.
However, and you'd be right to point this out, a physics core would be basically useless for most existing games. This is where nVidia's purchase of PhysX comes in handy for them, as they would have a number of big games already able to take advantage of the power. nVidia has also shown itself willing in the past to work closely with developers to help them use nVidia features, and is happy pandering to the high-end enthusiast market with some of its product lines. With the passage of time, they will be hoping that the physics core can filter down the product lines into cheaper and cheaper cards, and that games will have shipped that make use of the enhanced physics on the old high-end cards (creating demand for the feature to be retained in cheaper models).
The big downside to all of this, as has already been pointed out, is for the consumer. Sadly, hardware manufacturers are prone to designing in vendor lock-in - and who could blame them? Every company wants to maximise its trade, and open standards are very scary indeed and a major risk. But without an open standard for all hardware manufacturers (not just nVidia and AMD) to implement, games developers will not be assured that the technology they want to use will be present. Whilst this will probably never worry some enthusiast-friendly developers, it will make encouraging the use of advanced physics (and hence the need for physics cores on GPUs) very difficult, since at least half the market will not be able to accelerate the processing at all.
It's interesting to consider what might be regarded as the precursor to all of this: the 3D graphics libraries. After a long period of discord and vendor lock-in, SGI released OpenGL, and later 3D gaming on home computers really took off not because of vendor libraries, but because of a vendor-independent push with DirectX. It is for this reason that I hope Microsoft adds advanced physics features straight into the next version of DirectX, standardising the physics interface once and for all and giving developers an assurance that a minimum feature set will always be available.