There is a rumour that Intel have some backwards compatibility too:
https://www.tweaktown.com/news/60034...ard/index.html
I am happy to read that Ryzen 2 will be supported on AM4 boards. I was actually concerned about the amount of money I would need to cough up to get a beneficial upgrade without needing to do a complete new build. Forcing a new board with every new CPU release undermines the whole 'upgrade' process. I stuck with AMD a number of years ago because the wide array of Intel options from 2008 or '09 to '13 confused me, really. AMD was just plain simple as well as cost effective, even if not the fastest or best on the block. I never had the disposable income to afford top-tier Intel anyway - not even one top-tier CPU or GPU, forget about both at the same time.
I will be getting an AM4 board this week, along with RAM, which has doubled in price since my last build. SSDs came down and RAM went up. I just build for friends, family and myself when I can afford to, and was getting bored with the dearth of new and better AMD options till recently.
Excellent thread. I also see where rainman is coming from in a way (though he's not actually correct - sorry boss), because if you buy high end you are unlikely to want to replace your CPU even in three years, dependent on your use case.
But! My brother bought an R7 1200 and will now be able to upgrade to a Zen 2 eight-core at a minimum in a couple of years for not a lot of dosh (OK, that's subjective, but from my point of view, if he ever actually USES the darn (is darn allowed?) PC, it's a worthwhile splurge).
So those are the facts of the matter there. Good news from AMD.
Break out your crystal balls, people: what frequency range and Instructions Per Clock increase are you predicting for Zen 2? I can (activating mental power) foresee 4.5-4.8 GHz and 6-12%.
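Those two guesses compound, since single-thread performance scales roughly as frequency times IPC. A minimal sketch of the combined uplift, assuming a 4.0 GHz Zen 1 boost clock as the baseline (that baseline is my assumption, roughly a Ryzen 7 1800X, not a figure from this thread):

```python
# Ballpark the combined single-thread uplift from a clock bump plus an IPC gain.
# Performance scales roughly as frequency * IPC, so the two factors multiply.

ZEN1_BOOST_GHZ = 4.0  # assumed Zen 1 baseline (roughly a Ryzen 7 1800X boost)

for freq_ghz in (4.5, 4.8):            # predicted Zen 2 frequency range
    for ipc_gain in (0.06, 0.12):      # predicted IPC increase range
        uplift = (freq_ghz / ZEN1_BOOST_GHZ) * (1 + ipc_gain) - 1
        print(f"{freq_ghz} GHz, +{ipc_gain:.0%} IPC -> ~{uplift:.0%} faster")
```

On those numbers the prediction works out to somewhere between roughly 19% and 34% faster per thread.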
Not surprising after this too:
https://forums.hexus.net/cpus/380639-really-intel.html
Consider that's for the Z270; it's not like the Z170 is massively different either!
TBF, when it comes to GPUs they are all refreshes to some degree at a uarch level, since it is easier to add features one bit at a time, and with AMD it's bad enough that their drivers seem to lag their hardware even then!!
I had some interesting insights from talking to a dev at a LAN a few years ago - apparently (at that time), Nvidia had a more walled-garden approach to what devs could do with their hardware, whereas AMD were a tad more lax in that way, so you could do some interesting things with it, but you needed to know what you were doing in the first place, otherwise you could hit problems.
I think I've taken pops at every rep that has come into our office about something they have done. AMD always get it the worst from me because at heart I'm actually a massive fanboy. When Vega launched and they came to the office, I made a comment that mentioned the 1080 that wasn't exactly in a "professional manner"; they were a little hacked off at me, but IMO they deserved it. Then recently I actually mentioned their GPU drivers and brought up an issue that has been present for years, and they had no knowledge of it. Like lol wot.
They have made much bigger strides more recently, as in some newer games it's Nvidia who needs to push out a Game Ready driver to redress the balance (IIRC, Destiny 2 was one of them). But the issue, I think, is down to resources, and to RTG admitting that AMD thought discrete cards were a dead end a few years ago and put all their effort into CPUs (which, TBH, is probably the bigger market).
It makes me wonder whether AMD had cut the number of dev teams to a bare minimum to save money, so had no backup in case HBM2 had issues.
Part of the problem, sadly for AMD, is that Nvidia reorganised their lineup starting with Maxwell, splitting the FP32-optimised cards into one line and everything else into another. Games ATM tend to favour FP32 performance.
For instance, when the GM204 was launched, Nvidia tweaked big Kepler in the form of the GK210, and that was their best commercial GPU until big Pascal (the GP100) was released, even though there was a big Maxwell. Big Pascal looks like the GP102 (GTX 1080 Ti, etc.) in terms of shader counts, but uses HBM2 and has significantly higher FP16 performance; it also has at least 3 billion more transistors and is over 600mm² in size (much bigger than the GP102).
AMD, OTOH, is trying to make GPUs which are good at FP32, solid at FP16 and OK at FP64 type operations, which means a kitchen-sink approach, since it obviously costs more money to run separate lines. Sadly this means they can't really win at any of them, and it's partially why their high-end cards have tended to lose on performance/watt too, since they essentially just pre-overclock the GPUs to match Nvidia. I mean, even Polaris has a built-in SSD controller, which does nothing for gaming but adds to die area and power consumption just sitting there.
They also have the double whammy, due to the WSA, of using GF to fab their GPUs, and it has been shown that the Samsung process GF's is derived from is worse in performance/watt than the equivalent 16nm TSMC process. Even the Nvidia GPUs made at GF are less efficient than the TSMC ones (it was seen with the Apple chips too), and GF is probably behind Samsung as well, as they needed to license the process much later on.
If you look back to when Terascale was winning against Fermi, it was ATI/AMD who had very gaming-optimised GPUs against the Fermi ones, which tended to be better at non-gaming stuff, and Nvidia has learnt from that.
I fully agree that they are trying very hard and have made big progress. I just find it funny to put reps on the spot when they're expecting a nice casual chat. If they fix the issue I mentioned I'll be over the moon, and I might even consider them for my next purchase.
You keep saying that, Cat, but it just isn't that simple (life seldom is). The GTX 460 had 3 blocks of shaders, only one of which was FP64-capable, so that was the point at which Nvidia tried (with success) splitting the line into CUDA/non-CUDA cards, and yet the 460 was still, for its price point, a big lump of silicon. The embarrassment of the GTX 480 leg-heater, and the push for Tegra to take over the mobile world while unifying with the desktop products, threw up a lot of changes.
I see Nvidia have announced a new Titan V card at $3000. I have to admire their cheek charging that much, but it shows that FP64 just makes a card expensive, not slow. https://wccftech.com/nvidia-titan-v-...-announcement/
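The throughput arithmetic supports that: peak FLOPS is just 2 ops per FMA x shader count x clock, and the GV100 in the Titan V runs FP64 at half its FP32 rate. A minimal sketch using the commonly quoted Titan V specs (the 1455 MHz boost clock is the published figure; treat it as approximate):

```python
# Peak throughput = 2 ops per FMA * shader count * clock speed.
# GV100 executes FP64 at half the FP32 rate, so the FP64 hardware adds
# die area and cost without slowing the FP32 path down.

cores = 5120          # Titan V CUDA core count
boost_ghz = 1.455     # commonly quoted boost clock, approximate

fp32_tflops = 2 * cores * boost_ghz / 1000
fp64_tflops = fp32_tflops / 2   # GV100 FP64:FP32 rate is 1:2

print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")   # ~14.9
print(f"FP64: ~{fp64_tflops:.1f} TFLOPS")   # ~7.5
```

That ~7.5 TFLOPS of FP64 is what the $3000 buys; the FP32 path is as fast as ever.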
Interesting that it's clocked lower than the Titan Xp.
Not the same in any way - Nvidia has two different high-end lineups now.
Don't believe me??
1.) 8800/9800 series - G80 was the same GPU in both high-end consumer and commercial
2.) GTX 200 series - GT200 was the same GPU in both high-end consumer and commercial
3.) GTX 480 - GF100 was the same GPU in both high-end consumer and commercial
4.) GTX 580 - GF110 was the same GPU in both high-end consumer and commercial
5.) Kepler - GK110 was the same GPU in both high-end consumer and commercial. Big move by Nvidia in shifting scheduling to mostly software.
Then:
6.) Maxwell - GM200 was the high-end consumer GPU, GK210 was the high-end commercial GPU. Incorporates tiled rendering in the consumer line.
7.) Pascal - GP102 was the high-end consumer GPU, GP100 was the high-end commercial GPU
Look at the shader counts of the GP102 and GP100 - they are the same.
Yet the GP100 is 610mm², ie, 30% bigger, with 3 billion more transistors, and you are purposely ignoring the fact that HBM2 is used, which means you save on having to dedicate transistors on the GPU itself to the memory interface, so you can pack in more shaders, etc.
Do you honestly think that, if adding 3 billion more transistors did nothing for gaming performance, Nvidia wouldn't have kept to just one line and sold that GPU instead of the GP102??
Nvidia spent serious money making TWO lines, since that GP100 would probably do worse in gaming than the GP102, as it probably won't boost as much.
And the GV100 is over 800mm² - do you honestly think it would be any quicker in games with all those additional FP64 transistors than without them??
What's the likelihood that when the consumer cards come along, which are FP32-optimised, they end up having better performance/mm² and better performance/watt??
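A quick sketch of those numbers; the die areas and transistor counts below are the commonly published figures (from public spec sheets, not from this thread), so treat them as approximate:

```python
# Compare the FP32-optimised GP102 against the compute-optimised GP100,
# using commonly published (approximate) die specs.

gp102 = {"area_mm2": 471, "transistors_bn": 11.8, "shaders": 3840}
gp100 = {"area_mm2": 610, "transistors_bn": 15.3, "shaders": 3840}

area_delta = gp100["area_mm2"] / gp102["area_mm2"] - 1
transistor_delta = gp100["transistors_bn"] - gp102["transistors_bn"]

print(f"Shader counts: {gp102['shaders']} vs {gp100['shaders']}")    # identical
print(f"GP100 die is ~{area_delta:.0%} bigger")                      # ~30%
print(f"GP100 spends ~{transistor_delta:.1f}bn extra transistors")   # ~3.5bn
```

Same shader count in roughly 30% more silicon: the extra transistors buy FP64/FP16 rate and the HBM2 interface, not gaming throughput, so gaming performance/mm² lands firmly with the GP102.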
It's called Nvidia Ampere (apparently), so it looks like Volta is being skipped for consumer cards, but what is the likelihood they exist together?
I expect that on the interwebs people will be amazed at the improvements in performance/watt over Volta, etc., because it will be a gaming card being compared against a non-gaming card.
It's been pointed out too, by me and Scaryjim, that AMD has dedicated resources towards things like FP16 and SSD functionality, which take up die space and consume more power than is needed for gaming stuff.
Plus, as I have told you multiple times, AMD also simply does not dedicate enough die space.
When Vega came out, it was fighting:
1.) the 471mm² FP32-optimised GP102, which is utterly dire at FP16 and does FP64 at a worse rate
2.) the 610mm² FP16-optimised GP100, which is faster at FP16 and FP64
3.) the 815mm² FP16/FP64-optimised GV100
So honestly, how can AMD outperform an FP32-optimised GP102 with a similar die size, when they are trying to use the same GPU to target the 610mm² GP100?? See the sketch below for how the die budgets stack up.
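A rough comparison of those die budgets; Vega 10's ~486mm² figure is the commonly published one (not from this thread), so treat it as approximate:

```python
# One Vega die has to cover ground Nvidia splits across three dies.
# Areas are commonly published (approximate) figures in mm^2.

vega10 = 486
rivals = {
    "GP102 (FP32/gaming)":        471,
    "GP100 (FP16/FP64 compute)":  610,
    "GV100 (FP16/FP64 compute)":  815,
}

for name, area_mm2 in rivals.items():
    print(f"Vega 10 vs {name}: {vega10 / area_mm2:.2f}x the die area")
```

Vega 10 has about 1.03x the GP102's area but only ~0.8x the GP100's and ~0.6x the GV100's, yet it is asked to compete with all three at once.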
Nvidia has split their high-end lines now, and I don't even understand why, when it is so black and white, it is not obvious to you.
Have you not heard of Nvidia Ampere?? It seems to me that Nvidia is again going to keep Ampere for gaming and Volta for compute:
https://www.tweaktown.com/news/59816...018/index.html
AMD is trying to use one GPU to target areas Nvidia has THREE GPUs in.
So how can anybody expect AMD to be able to match what Nvidia is doing in things like gaming?
It's called being a jack of all trades, master of none. This is why the AMD cards are clocked so highly in the first place.
It worked out OK until Maxwell, but at the same time as AMD decided to cut back on GPUs, Nvidia had been spending serious money doing the opposite.
FFS, Nvidia had two 600mm² GPUs in production during the Maxwell era, ie, the GM200 and GK210. The GK210 never ended up in a gaming card, ever.
Plus, where are all the GP100-based gaming cards?? Oh wait, it seems Nvidia must have thought there wouldn't have been any improvement.