I would have thought they'd stick with Kepler, given the proximity to its release and because they used Nvidia last time around?
Kalniel: "Nice review Tarinder - would it be possible to get a picture of the case when the components are installed (with the side off obviously)?"
CAT-THE-FIFTH: "The Antec 300 is a case which has an understated and clean appearance which many people like. Not everyone is into e-peen looking computers which look like a cross between the imagination of a hyperactive 10 year old and a Frog."
TKPeters: "Off to AVForum better Deal - £20+Vat for Free Shipping @ Scan"
For all intents and purposes it seems to be the same card minus some guy's name on it, plus a shielded cover, with OEM added to it? - GoNz0.
Except that deferred rendering buffers contain other information, such as geometry, that could be spread across cards. There's also the option of split-frame rendering (assuming it was available to the team), which would reduce resources on the non-primary cards / allow for an element of spreading. This all also assumes that Epic didn't break from the traditional pipeline and use elements of CUDA/OpenCL for a bespoke implementation.
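To put a rough number on why those buffers matter in a multi-GPU setup, here's a minimal C++ sketch. The G-buffer layout is an assumption for illustration only (normals, albedo, specular and depth per pixel), not Epic's actual format:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical G-buffer layout for a deferred renderer -- an assumption for
// illustration, not Epic's actual format. Typical contents: normal, albedo,
// specular and depth stored per pixel across several render targets.
struct GBufferPixel {
    float        normal[3];   // world-space normal (real engines often pack this tighter)
    std::uint8_t albedo[4];   // diffuse colour + alpha
    std::uint8_t specular[4]; // specular colour + roughness/power
    float        depth;       // linear depth, used to reconstruct position for lighting
};

int main() {
    const int width = 1920, height = 1080;
    const std::size_t perPixel = sizeof(GBufferPixel);
    const std::size_t total = perPixel * static_cast<std::size_t>(width) * height;

    // With alternate-frame rendering each GPU rebuilds all of this for its own
    // frame; with split-frame rendering each GPU only fills its share of it.
    std::printf("Per-pixel G-buffer size: %zu bytes\n", perPixel);
    std::printf("Full 1080p G-buffer: ~%.1f MB\n",
                static_cast<double>(total) / (1024.0 * 1024.0));
    return 0;
}
```

Nothing here is tied to a particular API; the point is only that the per-frame data set is large enough that how it's divided between GPUs genuinely matters.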
Don't take anything I'm saying to mean I think Kepler will not be a good GPU; I'm hoping and expecting it to outperform Fermi by a fair margin, re-establishing competition and progressing in general. But companies can and do omit some information or compare apples to oranges to show new products in a better light, whether it's cherry-picked benchmarks or optimal setups, it's best to wait for thorough reviews before you decide on anything.
There is only one way I can think of that a "single card" early Kepler will be anywhere near GTX 580 tri-SLI, and that is if it's the dual-GPU single-card GK104 rumoured to be in the planning.
In which case, what we are talking about here is a dual-GPU card vs. last-gen tri-SLI, which would be a totally predictable level of advancement, not the "oh my god wow, a single-card, single-GPU part is now as fast as tri-SLI" that the wording of this article suggests.
Thank goodness I'm not the only person to think this.
I currently work with the Unreal 3 Engine on a daily basis and have worked for a company that was a licensee in the past (as well as with other engines). I'm not a GPU programmer, but I have a fairly solid grasp of most of the engine's pipelines. From experience I've usually found the following:
- Unless you're targeting a specific platform (like the iPhone), putting manufacturer-level optimisations into the engine renderer (or any other aspect, to be honest) is generally not done (see the sketch after this list). The more manufacturer code paths you have at the engine level, the bigger the opportunity for bugs to occur (and it's hell to debug). No programmer wants to maintain lots of code paths like that, doubly so when you consider these 'optimisations' are likely to change with driver and GPU revisions. It's just a nightmare.
- Most engine renderers will stick to the standard APIs for what they are using (D3D, OpenGL ES...and so on), that way if anything goes wrong, they can shout at the relevant party to fix it and keep their codebase clean.
- The optimisations people are talking about are generally done at the driver level. How many times have we seen incidents of "cheating" in benchmarks when the drivers have been "optimised" for a certain executable? It's just the driver kicking in and loading custom profiles / code for that game, although you can normally define these yourself these days in the NV/ATI control panel.
- Nvidia are very aggressive in terms of getting their tech used by developers. A few of the guys at our place were doing GPU programming on an NV card and dropped them a mail with some fairly high-end and complex questions a couple of years back. The response? The questions were answered, code samples were provided, and new GPUs were shipped (for free) because they said the project was 'interesting'.
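A quick, hedged sketch of what the "manufacturer code paths" problem from the first bullet looks like in practice; the vendor enum and switch are made up for illustration, not taken from UE3 or any real engine:

```cpp
#include <cstdio>

// Illustrative only: a renderer that branches per GPU vendor. Every branch is
// extra code to test, debug and keep in sync whenever a driver or GPU revision
// changes behaviour, which is why engine teams generally avoid this.
enum class GpuVendor { Nvidia, Amd, Intel, Unknown };

void configureRenderer(GpuVendor vendor) {
    switch (vendor) {
        case GpuVendor::Nvidia:
            std::printf("Vendor-specific path A (breaks when the driver changes)\n");
            break;
        case GpuVendor::Amd:
            std::printf("Vendor-specific path B (ditto)\n");
            break;
        default:
            // The path engine programmers actually want to maintain: stick to the
            // standard API and let the driver apply any vendor-specific tricks.
            std::printf("Standard D3D/OpenGL path\n");
            break;
    }
}

int main() {
    configureRenderer(GpuVendor::Unknown);
    return 0;
}
```

The point isn't the code itself but the maintenance cost: each vendor branch multiplies the testing matrix, which is why the optimisation usually lives in the driver instead.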
The upshot is that they're now writing PhysX code for a project that will probably generate significant revenue. Nvidia getting in there so quickly means it's unlikely the devs will move away from them at the moment, which in turn means that if nVidia wanted early code from those devs to optimise things at their end, they could get it.
I know the angle you're going for here, Alistair, but you really need to be careful with how you spin the relationship between these entities.
Using a certain GPU for a tech demo of a new engine doesn't give away any information (never mind a strong indicator) in terms of what can run it better, ATI or nVidia. You must be careful with these things, as it's dangerously close to extrapolating data which could simply be false. The claim in question: "What Epic did confirm was that the demonstration of UE4 shown behind closed doors was indeed powered by NVIDIA Kepler technology, strongly indicating that, at this stage, Epic is able to achieve the greatest performance from the upcoming architecture as opposed to AMD's Radeon HD 7xxx series, an indication which bodes well for NVIDIA."
Traditionally, UE3 runs slightly better on ATI cards due to the raw ALU performance they currently have. With nVidia looking like they might go down the route of more, simpler ALUs, though, this may of course change.