Quote:
First out of the door will be Zen SR7 HEDT chips with an 8C/16T configuration.
An 8-core Zen and a Radeon 490 could be my next gaming / video editing rig, let's just hope the prices are good!
As with all of Zen, I hope it will be good, but till we see reviews and actual hardware it's just hope.
I want Zen to impress me and show me that AMD is still pushing forward. I've been ready to finally move to Intel, but having held off this long I'm thoroughly looking forward to the reviews, benchmarks and, hopefully, a reasonably clean launch. It's been in development long enough!
I really do hope that AMD's top-of-the-line Zen is comparable to Intel's socket 2011 offerings while being at those low prices mentioned. That would really shake up the prices for new PCs again... especially for those of us who can make use of all those cores :)
Fingers crossed for some competition. Looking forward to actually having a choice. I'm in need of a dev and VM workstation with a high core count.
If the 8-core can overclock to 4.0GHz, which it should, and do it for less than Intel, then AMD has a winner!
Don't think there is much need for more than 4.0GHz these days, especially if you are on resolutions higher than 1080p where the GPU will do most of the work.
I currently run a 5960X OC'd to 4.0GHz and have not had any need to run higher in any game at 1440p, with 2x 980 Ti GPUs in SLI.
I wish AMD the very best. I'd really love to see a return to the good old days: plenty of CPU competition to bring prices down for everyone.
I hope Zen and Vega can pull AMD back into competition with Intel and nVidia. They've both been getting more and more naughty of late.
Well AMD are already very competitive in the graphics card market and the APU market seems to be growing very well for AMD. Competing with Intel in the CPU market is probably not possible. AMD could take a bigger slice, but even if AMD's CPU market share was ten times what it is now, Intel would still dwarf AMD.
In terms of market share, sure, especially if you count console APUs in that. But in terms of technology they're far behind Pascal and their discrete market share has suffered as a result. Right now nVidia literally owns the high-end market space.
Right, they mightn't catch up with Intel's market share any time soon. But the important thing is being technologically competitive. If they can do that, and stay affordable, they'll eat up market share at a nice pace.
In both cases we really need AMD to come up with the goods to keep those two in check.
Technology-wise, AMD are ahead of Nvidia. The question is release timing and the move to GloFo. Polaris seems to have been a 14nm pipe-cleaner for AMD and is still very competitive with Pascal, offering a lot more in many areas that Nvidia should have addressed.
It would be great if AMD could compete with Intel, but AMD simply don't have the production volume to do that - nor do they need to.
What you need to keep in mind is production cycles. Nvidia stayed at TSMC and, while there, were already releasing their cards ahead of AMD.
Bummer, I was hoping the chips were strong enough to warrant pricing DIRECTLY in line with Intel's $400-1700 range for the 8-core. I really hope they are not pissing away profits in order to be "cheap". Screw the affordability talk; people pay what a product is WORTH, and if you can't, get a better job. If you can beat a 6850K in many things, you should price it near Intel pricing; if you beat them in more things than you lose, price it equal. Intel will respond at some point, and once they do, the profits will be over again. Make your cash to pay off your debt while you can. Take a good look at both Intel's and Nvidia's pricing and how well their products are selling. It is not their job to be our friend; their job is to MAKE MONEY. I like great pricing as much as the next guy, but I will pay what a chip is worth, especially for a severely weakened AMD. If the chip is great, it will sell without a discount. See Titan X/1080/1070 sales (or Intel HEDT sales). Look at NV's quarterly report this month (and their stock price over the last year!): margins, profits and revenue all massively up. If you make a great product, sell it at the appropriate price and collect your just rewards.
Intel doesn't have a current HEDT chip under $430 - prices are $1723, $1089, $617 and $434. IF the one chip is as good as a 6850K, it had better be priced near $600, and if it's better, price it at $615+! That is still a DEAL compared to Intel if it's FASTER in most stuff. The deal is good enough already if you're better for the same money. Sucks that it's 8 cores vs 6 cores though; it seems it will be slower in many of the things the gaming/home market does (which STILL doesn't use more than 4 cores very effectively today). I hope I'm wrong, the dies are HUGE (at or above Intel's 246mm^2, preferably at or near the massive Xbox One/PS4 sizes of 363/348mm^2!), and it's faster across the board, not just in Blender. I hope the $200-300 range is the junk end. AMD is bleeding to death because they have no current CPU above $150 (rightly so, they suck). They need to fix this. I hope Vega rocks, as currently they're getting their clock cleaned on GPUs too. Note: die costs don't matter if it performs. Remember, the PS4/Xbox One APUs are made for $90-100 and sold for $100-115 per console, so you can easily make money on a 363mm^2 die (even at $115). I hope they went HUGE! No point in "competing"; they needed to BEAT Intel while they can.
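For what it's worth, that die-cost claim can be sanity-checked with the standard dies-per-wafer approximation. A rough sketch, where the wafer cost and yield figures are assumptions for illustration, not published numbers:

    import math

    # Dies-per-wafer approximation for a round wafer:
    # dies = pi*(d/2)^2/area - pi*d/sqrt(2*area)
    # WAFER_COST_USD and YIELD are assumed, illustrative figures.
    WAFER_DIAMETER_MM = 300
    WAFER_COST_USD = 6000   # assumed 14/16nm-class wafer price
    YIELD = 0.70            # assumed fraction of good dies

    def dies_per_wafer(die_area_mm2):
        d = WAFER_DIAMETER_MM
        return math.floor(math.pi * (d / 2) ** 2 / die_area_mm2
                          - math.pi * d / math.sqrt(2 * die_area_mm2))

    for area in (246, 363):  # Intel HEDT-class vs console-APU-class die sizes
        good = dies_per_wafer(area) * YIELD
        print("%dmm^2: ~%d good dies/wafer, ~$%d each"
              % (area, good, WAFER_COST_USD / good))

Even with those assumptions, a 363mm^2 die comes out somewhere around $50-60 to manufacture, which squares with the console numbers above.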
AMD haven't released the higher end of their next generation yet. We know how strong the last generation was and how well those cards are holding up. Polaris is giving Pascal a tough enough time on the immature 14nm process while giving away a lot of clock speed. AMD are clearly ahead of Nvidia technologically: Nvidia haven't even got asynchronous compute capability on their chips yet, and still need to use an expensive off-chip solution to implement VRR.
Yeah, that's kinda the problem.
Not very. Even in the last generation, AMD's best offerings lagged behind nVidia's; all AMD had was pitching their offerings on the cheap. Pascal has since put nVidia even further ahead.
No, it isn't. Polaris doesn't scale at all; it ends where Pascal practically begins.
And? It's not like there's much of any software taking advantage of async compute yet. And it hasn't really done anything to keep AMD on par with nVidia even on synthetic benching, has it? All the supercomputers are gobbling up nVidia cards, despite the lack of async compute.
Who cares? It works, and works well. nVidia is killing it in the VR space, such as it is. Sometimes an ASIC solution is the best solution, especially for a technology that hasn't been established as being able to outlive the fad phase.
So...
I think we've firmly established that this statement is false.
Not until Nvidia fix those problems and lower prices. Nvidia are simply getting murdered on the technology front. Granted, AMD's move to GloFo hasn't been great, but we are seeing improvements in the process.
How do you figure that? From all that I've seen, Nvidia's 16nm parts are more power efficient than AMD's 14nm parts and generally out-perform them dollar for dollar in the mid-range. Sure, the nVidia parts are expensive at the high end, but only because they are currently unchallenged.
Really looking forward to AMD Zen. It has been way too long a wait. It will probably be my next upgrade, provided the Zen chips compete with their Intel equivalents.
Likewise. Especially if Vega can come up with the goods at 1440p. I'd really like to have an adaptive-sync display, but I really don't want to give nVidia £100 just for a scaler that only stands out by one single feature, even if it's a particularly good adaptive-sync implementation.
Because AMD are ahead of Nvidia in just about everything apart from clock speed. Scale an RX480 to a GTX1080's clock speed and you'd have a £230 card giving everything Nvidia have a very hard time.
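A rough back-of-envelope on that scaling claim. The shader counts and stock clocks are from the spec sheets; the assumption that performance scales linearly with clock is mine, and real games won't behave that neatly:

    # Peak FP32 throughput: shaders * 2 ops/clock (FMA) * clock (GHz) / 1000 = TFLOPS.
    # Linear scaling with clock is an illustrative assumption only.
    def tflops(shaders, clock_ghz):
        return shaders * 2 * clock_ghz / 1000.0

    print(tflops(2304, 1.266))  # RX480 at stock boost:   ~5.8 TFLOPS
    print(tflops(2560, 1.733))  # GTX1080 at stock boost: ~8.9 TFLOPS
    print(tflops(2304, 1.733))  # RX480 at 1080 clocks:   ~8.0 TFLOPS

So on paper, Polaris at Pascal clocks would land within ~10% of a GTX1080's peak throughput.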
AMD have built in FreeSync, TrueAudio and asynchronous compute engines, while Nvidia lack those. How power-efficient and expensive would Nvidia be if the G-Sync module was added to the graphics cards? Plus 50 watts and another £200?
Sorry, but Nvidia are behind AMD. Intel and AMD APUs have killed off most of Nvidia's graphics card ranges in a couple of years. If Nvidia had the better technology, that simply wouldn't have happened. AMD have 8-core, ~200 watt APUs with 6-7 TFLOPS of graphics power in production.
Your opinion runs contrary to basically every benchmark and published opinion out there.
I recently purchased two video cards: an RX470 for me and a 1060 for my son. Prices were almost identical, and I can tell you the 1060 kicks the 470's butt in everything we've tried to date. On top of that, the 1060 draws much less power than the 470. nVidia have their own version of FreeSync, called G-Sync - a competing standard, but pretty much the same thing.
Intel's APUs have rubbish video performance compared to both AMD and nVidia. The Iris has improved things a little, but they still don't rate. I'm baffled as to where you get your information from.
DISCLAIMER: I'm not arguing for AMD having better technology than nvidia here. That's a longer conversation than this, as both have wins depending on what you're looking at. But there are a few things in your reply that simply don't stack up:
I assume that's a 3GB GTX 1060? If not, you were ripped off for the RX 470 ;) And on that basis I also assume you play no Vulkan or DX12 games - in those the RX 470 and GTX 1060 3GB are about even, with wins for both sides depending on game. In DX11 and OpenGL nvidia are faster, but those are older technologies - AMD appear to currently have the edge in the new APIs. Hexus' reviews suggest the power difference is all of ~10W at load between those cards - less than 10% of the total PC power draw. Hardly "much less", IMNSHO...
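To put that gap in perspective, a quick sketch - the whole-system load figure is an assumed round number, not a measurement:

    # Share of total system draw represented by the ~10W gap Hexus' numbers suggest.
    gap_w = 10        # approximate load difference between the two cards
    system_w = 300    # assumed whole-system draw under gaming load
    print("%.1f%%" % (100.0 * gap_w / system_w))  # ~3.3% of total draw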
"standard" is a tricky word here. AMD's Freesync uses a standard - VESA Adaptive Sync - that is therefore royalty free and available openly for everyone to use if they want to. It also adds no extra cost to the monitor - you can get freesync monitors for £100. G-Sync, on the other hand, is not a standard, nor based on one. It's a proprietary nvidia technology, so the only way to use it is to pay nvidia. It also requires additional hardware; between those it adds a significant cost - usually around £100 for equivalent monitors from the same manufacturer. So I'd hardly call it "pretty much the same thing": it's free open standards vs expensive proprietary technology...
Iris is as fast as AMD's top end APUs, which are as fast as entry/mainstream graphics cards (the A10 7850k graphics were faster than a DDR3 R7 250). And even Intel's ordinary HD graphics, which are indeed well behind Iris and A10 in terms of high-end gaming performance, are now more than adequate for the vast majority of people. Since around Sandy Bridge Intel HD has been capable of low-quality gaming and casual gaming (e.g. XCOM is very playable on a Sandy Bridge mobile i5 at low quality). There used to be a huge volume market for low end GPUs, but that's vanished over the last 5 years - mostly down to Intel's huge improvement in IGPs. Just because you wouldn't use one, doesn't mean it's rubbish. If it was genuinely rubbish AMD and nvidia would still be selling huge volumes of £50 graphics cards. They're not.
As I say, I'm not trying to argue that AMD are technologically ahead of nvidia; there are gives and takes depending on which technology you're looking at. nvidia are currently more power efficient, and their designs plus the process they're working on scale much better with clock speed. They also have a significant advantage in raw gaming performance. But part of the way they've done that is to move a lot of functions into software, whereas AMD have kept them in hardware. That means that for many compute workloads AMD cards are now better, and is part of the reason they also show much better Vulkan and DX12 performance. If you're a huge Doom fan, for instance, buy AMD. No question. If you're investing for the long term - and DX12/Vulkan is becoming more prevalent - AMD is probably also a better bet; not only are their current generation cards supporting Vulkan/DX12 better, but they've got a much better track record for maintaining performance optimisation of drivers in the long term (ask CAT ;) ).
And there is one area where AMD are unquestionably better than nvidia - openness. As I've already mentioned, freesync is a royalty-free implementation of an open standard, whilst g-sync is proprietary (and therefore expensive). Compute is another one: AMD has great OpenCL support and is making lots of tools freely available, whilst nvidia is still focussing on its proprietary CUDA.
Are you talking about the GTX1060 3GB??
I will add these old posts here then.
If the chap bought an RX470 for the same price as a GTX1060 6GB, he overpaid for the former, or got a good deal on the latter.
GTX1060 6GB>>>>RX470 4GB for me.
If it's a GTX1060 3GB, then meh - most people I know use cards for at least two years.
I have not even bothered looking more recently, but last time I checked there were quite a few newer games where the RX470, and indeed the GTX970, are doing better than the GTX1060 3GB, especially when it comes to minimums and frametimes. These are some of the most intensive games out there. A GTX970 can even beat a GTX1060 3GB, so meh.
The 8800GT 256MB defenders were the same, saying it was 10x better than the HD3870 512MB and that the 8800GT 512MB was not worth the extra dosh. Fast forward a few months and it was getting thrashed by the 9600GT 512MB (which was the same generation in terms of uarch and had nearly half the CUDA cores) and the HD3870 512MB, and they all were suddenly quiet. The 8800GT 512MB was still top of the pile.
Like I said, Digital Foundry said to get the 6GB model. Literally half the review sites out there have concerns about the 3GB model.
I would quite happily get a GTX1060 6GB over my GTX960, but not a GTX1060 3GB. If you don't want an AMD card, I still recommend you get the GTX1060 6GB. It will be worth more secondhand in two years and will give you a more consistent user experience. Save up for a month extra! ;)
Anyway, this is a Zen article; how did we get onto graphics cards?? I think we need to stay on topic here.