
Thread: AMD - Piledriver chitchat

  1. #3537
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD - Piledriver chitchat

    Yeah, at least with HEDT they're addressing the system builder market, where AFAIK AMD has generally done fairly well, since they don't have to get the likes of Dell, HP, etc. to stand up to Intel the way they do in the mainstream/mobile markets.

  2. #3538
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by scaryjim View Post
    FX 7600P + R7 260M laptops are already available, are relatively thin, run pretty cool, and are very capable gaming machines - at least on the low-end panels that get used in mainstream laptops

    AMD have been targeting the mainstream for a couple of generations - that's why Steamroller and Excavator have made it into APUs but not CPUs. The Kaveri mobile APUs run cool and can easily handle 720p gaming; more if they're coupled with a discrete card. Producing good mainstream processors has never been AMD's problem: getting them into OEM machines has. The success of Zen is going to be as much about the number of design wins as the performance of the cores.
    Carrizo actually looks OK:

    http://www.notebookcheck.com/Test-HP....157955.0.html

    In Cinebench the A12-8800B is around 20% slower than a Skylake Core i3-6100U, but the difference is much smaller in x264. It's also around 30% faster in IGP performance, putting it about level with a Kepler-based GT920M discrete card, which has 384 shaders.

  3. #3539
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by CAT-THE-FIFTH View Post
    In Cinebench the A12-8800B is around 20% slower than a Skylake Core i3-6100U, but the difference is much smaller in x264.
    2 cores with SMT vs 4 real integer cores - x264 really doesn't care for Hyper-Threading. Which is one of the reasons I don't get the craving for an i7 in so many systems: besides gaming (where i5 and i7 are near-identical in performance), one of the few CPU-intensive things many people do is video transcoding, and the i7's SMT really doesn't help much there at all, with most of the difference coming from the slight clock speed advantage. Even then, I personally don't think encoding taking a few percent longer is worth nearly double the price.

    I mean SMT is nice to have at times to get more throughput per core under certain workloads, but I think the premium Intel charge to leave it enabled is pretty crazy, especially considering the benefit for client workloads is... limited...
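    (A rough way to put a number on that for your own chip: time a fixed amount of CPU-bound work with the thread count equal to physical cores, then equal to logical cores. A minimal C++ sketch, assuming a 4C/8T part for the thread counts - a toy loop like this won't reproduce x264's behaviour exactly, it just shows how to measure the delta:)

    Code:
    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<std::uint64_t> sink{0};  // defeats dead-code elimination

    // CPU-bound integer work: iterate a 64-bit LCG.
    void spin(std::uint64_t iters) {
        std::uint64_t acc = 1;
        for (std::uint64_t i = 0; i < iters; ++i)
            acc = acc * 6364136223846793005ULL + 1442695040888963407ULL;
        sink += acc;
    }

    // Run a fixed total amount of work across n threads, return seconds.
    double run(unsigned n, std::uint64_t total) {
        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i) pool.emplace_back(spin, total / n);
        for (auto& t : pool) t.join();
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        const std::uint64_t work = 1ULL << 31;  // ~2e9 LCG steps in total
        std::cout << "4 threads: " << run(4, work) << " s\n";  // physical cores (assumed)
        std::cout << "8 threads: " << run(8, work) << " s\n";  // logical cores (assumed)
    }

    (Workloads with long dependency chains can show much bigger SMT gains, so x264's modest scaling is a property of that workload, not of SMT in general.)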

    The other main difference between the desktop i5 and i7 is cache size, which doesn't seem to make much difference either, as can be seen when comparing the 3MB and 4MB i3 variants.

    Food for thought - I wonder whether the i7 would be as popular against the much cheaper i5 were it not for the i5/i7 marketing? E.g. if Intel handled it like the different-cache-size i3 variants (e.g. 4130 vs 4330), with just a slightly different model number - say i7 6600K vs i7 6650K. Don't get me wrong, there are workloads where it helps; those cases just seem to be fairly limited from what I can see, and for the most part probably not worth the extra cost.

    It's even worse on the mobile side: from what I can tell, the U versions of the i5 and i7 (which seem to be the most commonly used bins now) are identical dual-core processors aside from slightly different CPU and GPU clock speeds, and some of the i5s have 3MB instead of 4MB cache, but not all of them. The i3's 'big' change is to drop Turbo and TSX. Which brings me on to another point I'm far from the first to make - I don't get Intel turning off its new extensions on its lower-priced CPUs. They're not going to gain widespread use until they're widely available in mainstream processors, and they're not exactly a selling point for the higher-end parts while there's no software taking advantage of them!
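
    (For what it's worth, whether an extension survived the fusing is directly visible in the CPUID feature bits. A minimal sketch, assuming GCC or Clang on x86 for the <cpuid.h> helpers; the bit positions are the documented ones for AES-NI, AVX, AVX2 and TSX/RTM:)

    Code:
    // Minimal sketch (GCC/Clang, x86): read CPUID feature bits to see
    // which ISA extensions this CPU reports. On parts where the vendor
    // has fused an extension off, the corresponding bit simply reads 0.
    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            std::printf("AES-NI:  %s\n", (ecx >> 25) & 1 ? "yes" : "no");
            std::printf("AVX:     %s\n", (ecx >> 28) & 1 ? "yes" : "no");
            // NB: the HTT flag only means >1 logical processor may be
            // reported; it is not a reliable SMT indicator on its own.
            std::printf("HTT:     %s\n", (edx >> 28) & 1 ? "yes" : "no");
        }
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            std::printf("AVX2:    %s\n", (ebx >> 5)  & 1 ? "yes" : "no");
            std::printf("TSX/RTM: %s\n", (ebx >> 11) & 1 ? "yes" : "no");
        }
    }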

  4. #3540
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by watercooled View Post
    ... I don't get Intel turning off its new extensions on its lower-priced CPUs. They're not going to gain widespread use until they're widely available in mainstream processors, and they're not exactly a selling point for the higher-end parts while there's no software taking advantage of them!
    I suspect that for the first couple of generations, new extensions are far more likely to be relevant to developers, HPC and workstation users - who will generally be using either Xeons or i7s (maybe i5s, I guess). I know Intel use a lot more distinct silicon than AMD (who usually just roll one die off the line and then disable bits of it to make the lower-end parts), so it's possible that dropping the new instruction support makes the silicon cheaper/easier to produce, or perhaps affects the binning of the final silicon so they can qualify more dies.

  5. #3541
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD - Piledriver chitchat

    Good points - that's a possibility, but some of the things they disable can't really be broken in isolation (IIRC AES-NI and SMT fall into that category). For others like AVX, though, considering how Intel had to increase core voltage for those workloads on the early implementations, I can see that having an impact on binning, regardless of whether the feature can truly be broken on its own. I'll do some digging and see if I can find out more about which extensions this might affect.

    With AMD, though, they never really disable features on their processors - it's always whole cores, from what I can think of. Then again, Intel don't seem to do much core fusing apart from on their socket 2011 processors, so I guess where something might be broken within an extension and Intel can disable just that extension, AMD would disable the whole core.

  6. #3542
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by watercooled View Post
    ... With AMD though, they never really disable features on their processors, it's always whole cores from what I can think of. But then again, Intel don't seem to do much core fusing apart from on their 2011 processors, so I guess where something might be broken on an extension and Intel can disable that extension, AMD would just disable the whole core.
    As I said, I think a big part of that is that AMD rarely seem to do different silicon for their lines: AFAIK the 1-module/2-core APUs are actually the same silicon as the 2-module/4-core APUs, presumably because their margins and market share over the last few years couldn't justify the expense of fabbing multiple variants. And if you're using the same silicon for all your processors, it's going to have the option to keep all the same features, like the ECC support on my Sempron 140.

    OTOH I'm pretty sure that Intel have actually fabbed different silicon for various configurations in the last few generations (a quick google suggests there were at least 4 different Ivy Bridge dies). The question is: how easy is it to fab silicon that doesn't have the ability to handle a particular instruction, yet is otherwise architecturally identical? If you're doing a new die for consumer i3s and leaving an instruction out makes it either cheaper or easier to fab, why wouldn't you?

  7. #3543
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD - Piledriver chitchat

    Yeah, Intel do have a few different dies for the client segment, e.g. 2C/4C versions and now versions with different-sized GPUs. However, the cores stay the same between dies; changing core features or extensions would take a lot of work, as you'd essentially be designing a separate core/microarchitecture. Features like this aren't modular blocks you can just pull out, even if you could justify the cost of fabbing yet another die variant. I don't think there's any reason to do that - even if a feature is breakable based on yield, unless it's huge it makes a lot more sense to just fuse it off.

    E.g. the desktop i5 and i7 (talking only about 115x sockets) share an identical die - that's fairly well known. The i5 has some cache disabled and lower clock speeds, both of which are attributable to binning, but SMT isn't, at least at an architectural level. Although, considering SMT increases power consumption under common workloads, it may still play into binning.

    Further, the desktop i3 and mobile i7 have shared dies. There are varying GPU configurations now, but I think it's still the case for some parts - they're just binned for higher clocks or lower power depending on the market.

    Some of the dies are listed in a table here: http://www.anandtech.com/show/9505/s...ckage-analysis
    Last edited by watercooled; 21-01-2016 at 12:45 AM.

  8. #3544
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Piledriver chitchat


  9. #3545
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    13,008
    Thanks
    781
    Thanked
    1,568 times in 1,325 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by scaryjim View Post
    If you're doing a new die for consumer i3s and leaving an instruction out makes it either cheaper or easier to fab, why wouldn't you?
    Because it would be a new CPU variant, which means you start again from scratch with your validation efforts. That design validation takes months - not worth it.

    I think Intel's upgradeable CPU system gives a hint as to how they just disable bits.

    http://arstechnica.com/gadgets/2010/...r-catastrophe/

    Kind of odd that it died off. Perhaps the market for upgrading a really cheap CPU into something better wasn't big enough; perhaps Intel made more money by forcing people to buy an i3 up front, or by making them buy two processors to upgrade; or perhaps Intel were just greedy with the upgrade cost.

  10. Received thanks from:

    scaryjim (21-01-2016)

  11. #3546
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by DanceswithUnix View Post
    Because it would be a new CPU variant, which means you start again from scratch with your validation efforts. ...
    Fair enough - it's a long way from my area of expertise, so I wasn't sure whether it was a simple change or would need full revalidation.

    The upgradeable processor concept was a very odd one, IMNSHO; it doesn't really fit the consumer market: enthusiasts would balk at the idea of paying extra to unlock features that are already present on the processor, whilst ordinary consumers tend to treat PCs more like consumer electronics and don't want to upgrade, preferring to buy the machine with the highest numbers in the spec list (hence low-end GPUs with 4GB DDR3).

    It DOES make sense as a scheme for OEMs, OTOH: invest in a single build process putting together black boxes with a single processor, then unlock additional features via a microcode update when consumers order higher-specced machines. Or perhaps as a scheme for Intel's BGA processors - low-cost laptops that can be software-upgraded? Could see that being popular!

  12. #3547
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    13,008
    Thanks
    781
    Thanked
    1,568 times in 1,325 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AMD - Piledriver chitchat

    I expect a lot of Pentium chips go into office PCs, and the idea of buying cheap will sit well there; an upgrade may well come out of a different budget. There's also the fact that office PCs are often bought on some kind of lease deal with an extended warranty, both of which mean you can't go opening the box up and changing components - but paying an upgrade license fee and running a utility to install it is quite possible. It could even allow a 'hardware' upgrade without the sysadmin leaving their desk, using remote desktop.

    IBM have been selling extra unlockable capacity in their big-iron systems for years. I think in some cases you can pay for extra cores by the hour, despite them already being there in the machine, just normally locked.

    So yeah, it probably rubs consumers up the wrong way, but for business it kind of made sense. Never saw how much the upgrade license was charged out at, though.

  13. #3548
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by DanceswithUnix View Post
    ... Never saw how much the upgrade license was charged out at, though.
    I did a quick google earlier and saw the fee quoted at ~$50. A quick look at Clarkdale on ARK tells me that the G6951 (price not listed) had identical specs to the G6950, which had a recommended box price of $96. The cheapest i3 (530) - which was identical to the unlocked Pentium barring an extra 133MHz on the CPU clock - was recommended at $117, so $21 more expensive. That puts the upgrade route at roughly $96 + $50 = $146, against $117 for the faster i3 outright - which makes the G6951 + upgrade look ... pricey...
    Last edited by scaryjim; 21-01-2016 at 03:57 PM. Reason: adding ars link for upgrade pricing

  14. #3549
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Piledriver chitchat

    Looks like the latest TR game is sponsored by Nvidia and apparently does not use async shaders like the Xbox One version does:

    http://www.eurogamer.net/articles/di...er-pc-face-off


    Console-equivalent settings take a long time for us to hammer down, and there may be some small differences, but generally speaking, you're getting a very, very close facsimile of how the game looks running on Xbox One. Identifying these settings is important - it shows us how the developer of the game has prioritised visual features when running on a box with a set level of GPU power. In the PC space, it's a good start in setting a base level of graphical quality, then customising to suit your hardware.

    Resolution: 1920x1080 (though cut-scenes render at 1440x1080)
    Texture quality: high
    Anisotropic filtering: looks like 2x
    Shadow quality: high (in terms of resolution but missing details)
    Sun soft shadows: off
    Ambient occlusion: BTAO (not available on PC)
    Depth of field: very high
    Level of detail: high
    Tessellation: off (but adaptive tessellation is used for snow deformation)
    Screen-space reflections: on
    Dynamic foliage: medium
    Bloom: on
    Vignette blur: on
    Motion blur: on
    Purehair: on
    Lens flares: on
    Screen effects: on
    First and foremost, the venerable DF budget PC with an i3 processor and GTX 750 Ti finally met its match with Rise of the Tomb Raider. Running with settings similar to Xbox One, we found that testing areas saw the game turn in frame-rates between 13 and 25fps. In comparison, the AMD test featuring an R7 360 fared even worse with performance numbers dropping all the way into the single digits at these settings.
    So basically, console users get far more optimisation due to petty infighting between companies in the PC arena, it appears.

    At least AMD Purehair seems to have a much smaller performance hit now:

    http://www.pcgameshardware.de/Rise-o...marks-1184288/

  15. #3550
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,038
    Thanks
    1,878
    Thanked
    3,379 times in 2,716 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by CAT-THE-FIFTH View Post
    Looks like the latest TR game is sponsored by Nvidia and apparently does not use async shaders like the Xbox One version does:
    They say it does in the same article:

    Quote Originally Posted by eurogamer
    The contest is even more interesting in that Rise of the Tomb Raider is known to use asynchronous compute - a hardware-level feature absent on Nvidia hardware and fully implemented on the R9 390.

  16. #3551
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Piledriver chitchat

    Quote Originally Posted by kalniel View Post
    They say it does in the same article:
    It's been covered in the computerbase.de article that the PC version does not use async shaders; it also covered the game's use of tessellation, which matches the settings the console version appears to use.

    Crystal Dynamics uses DirectX 11 tessellation in Rise of the Tomb Raider for the snow rendering and on some objects, such as trees and certain surfaces. The visual effect of the tessellation is small. With snow, the difference, if any, can only be seen with a magnifying glass. Trees and a few surfaces do gain a little more depth and more realistic detail, but the visual benefit is low.
    So, another idiotic PC optimisation adding pointless levels of tessellation to shift higher-end cards - awesome!

    So it seems we have another ****-poor PC port: even on a GTX 750 Ti, which is more powerful and has better tessellation support than the R7 260X GDDR3-class GPU in the Xbox One, with tessellation off and a technically faster CPU, we're seeing much worse performance on the PC.

    But this is a telling statement from the article. Crystal Dynamics, who made the game, said async shaders helped improve performance quite a bit on the Xbox One, but are now saying that DX12 does not improve performance much on PC cards - so I wonder whether those are AMD or Nvidia ones?

    I expect Nvidia ones, as they apparently don't do that well when async shaders are used.

    Even though Rise of the Tomb Raider already has very good graphics, it is quite possible, according to Crystal Dynamics, that this will be improved further. The studio is currently examining whether there will be a patch for the new DirectX 12 API. Internally it is already experimenting with the new interface, but has been unable to achieve any improvement so far, so there are still no definitive statements in this regard.
    So, I get the impression there'll be no DX12 version of the game for a while.

    No DX12 = no async shaders.
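
    (For context, 'async shaders'/async compute in DX12 terms just means feeding the GPU through a second, compute-only queue that it is allowed to overlap with the graphics queue - something DX11 has no way to express. A minimal C++ sketch of setting up the two queues; the queue creation is standard D3D12 API, while how much overlap you actually get is hardware/driver-dependent:)

    Code:
    // Minimal D3D12 sketch: one direct (graphics) queue plus a separate
    // compute queue. "Async compute" is work submitted on the compute
    // queue that the GPU may overlap with graphics work.
    // Build: Windows 10 SDK, link d3d12.lib.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr,  // default adapter
                                     D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return 1;

        // The direct queue accepts graphics, compute and copy commands.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // The compute queue accepts compute/copy commands only; work
        // submitted here may run concurrently with the direct queue.
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        // Cross-queue ordering is handled explicitly with ID3D12Fence
        // (Signal on one queue, Wait on the other) - omitted here.
        return 0;
    }

    (Whether the GPU actually executes the two queues concurrently is down to the hardware scheduler, which is why GCN and Maxwell behave so differently here.)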

    It's why AMD really need to start getting in bed with developers more - ATM Nvidia is putting tons of money into GameWorks, and IMHO that means technology like async shaders will probably be held back.

    The sad thing is the last TR game ran reasonably well on all hardware and was AMD-sponsored. Now Nvidia gets involved and we have yet another GameWorks title which runs piss-poor on all hardware.

    Yeah, eventually performance does improve - later.

    Even look at PureHair/TressFX 2.0 - you're now only looking at a less-than-10% drop in performance.

    Compare that to AMD titles like Star Wars: Battlefront, which seems to run very well on most hardware and still looks very pretty.

    Edit!!

    It's starting to become a trend now - ARK was meant to get a performance-enhancing DX12 patch which is now AWOL, and that's Nvidia-sponsored and has terribad performance too.

    It's starting to make me wonder whether Nvidia is having issues with DX12 performance.

    But with AMD apparently not even trying to engage with devs, Nvidia is probably going to get away with it until Pascal drops.

    They should be pushing to get stuff like async shaders used in more games, because highlighting weaknesses in their competitors makes them look better, at least on a PR level.

    Nvidia has no problem doing this - they'll even tessellate the bloody air in a game if it makes them look better.

    AMD seems to have the talent, but not a clue how to actually market or sell their products.
    Last edited by CAT-THE-FIFTH; 28-01-2016 at 02:07 PM.

  17. #3552
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Piledriver chitchat

    Forgot to post the links to some of those statements:

    Quote Originally Posted by Dygaza View Post
    Were there actually any rumours this should have been a DX12 title in the first place? And where did you hear the dev saying DX12 didn't give any boost?
    http://www.pcgameshardware.de/Rise-o...marks-1184288/

    https://translate.google.co.uk/trans...%2F&edit-text=

    In an interview about the lighting used in Rise of the Tomb Raider, Gary Snethen also says: "On the Xbox One and under DirectX 12, async compute is used; under DirectX 11, on the other hand, the computation runs synchronously." Support for the low-level API may possibly come later via a patch.
    http://www.computerbase.de/2016-01/r...-benchmarks/2/

    https://translate.google.co.uk/trans...-text=&act=url

    Even though Rise of the Tomb Raider already has very good graphics, it is quite possible, according to Crystal Dynamics, that this will be improved further. The studio is currently examining whether there will be a patch for the new DirectX 12 API. Internally it is already experimenting with the new interface, but has been unable to achieve any improvement so far, so there are still no definitive statements in this regard.
