It assures that there will be enough "to meet demand from gamers".
300W for a 7nm card... something ain't right.
It has HBM. I could have told you (and have told you repeatedly) that it would be in short supply and push the card's price up until it's useless vs. the competition. Oh look, it's the same price as a card that has MANY more features: RT + DLSS + VRS + G-Sync etc. See the point? If this card had shipped with, say, 8-12GB of GDDR5X or GDDR6, it could have been sold for $350-550. Instead they put 16GB on it (devs are just migrating to 11GB, so there's no point for some time to come), and it lands at $699 with a feature disadvantage to show for it. Which do you think will be used FIRST: 16GB vs. 11 or less, or RT, DLSS or VRS? I'm thinking the last three will be used FAR sooner than 16GB of overpriced video memory with bandwidth no gamer needs today either.
Build what people want and they will come. This card seems like a PR move more than anything gamers would run to the store for. Not that I mind; I just think gamers WOULD run to the store for 12GB of GDDR5X at $350-450, as opposed to $699 with missing features vs. NV cards with GDDR6 maybe landing at $450-550 (depending on capacity, 8-16GB?). I guess they think people will buy the past for $699 instead of the future for $699 (I doubt it).

I would not be surprised if they built 5K of them, as I don't think more would sell well without a price cut, and I'm guessing these aren't much better than break-even margin-wise (the 16GB alone is probably $240+ here; 8GB was $180 in 2017, and I doubt it's dropped in half with nobody using it). This card could have sold in decent quantities, but not at $699 while missing many features of the future that devs want.

I'd expect some form of DLSS/RT/VRS on all future consoles, as it really helps weaker units (handhelds, consoles, tablets/phones). Anything aimed at making the masses perform like rich people will be a big hit, and that is what RT + DLSS and VRS help with. Better-looking graphics on weaker hardware is a winner.

AMD keeps using HBM as if it will sell a card, but it doesn't. You can't point to something on screen and say, "see, that's what HBM does," whereas you can do that for RT + DLSS + VRS. You get speed or looks, or a combo of both. 16GB should be reserved for work/pro cards where it's actually usable.
https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it
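The memory-cost argument above is back-of-envelope arithmetic; here it is as a short sketch. All dollar figures are the poster's own guesses (and the hypothetical GDDR6 per-GB price is my placeholder), not confirmed BOM prices:

```python
# Rough memory-cost comparison behind the pricing argument.
# hbm2_per_gb comes from the poster's "$240+ for 16GB" guess;
# gddr6_per_gb is a purely illustrative placeholder.
hbm2_per_gb = 240 / 16      # ~$15/GB per the poster's estimate
gddr6_per_gb = 11.0         # hypothetical ballpark, not a quoted price

print(f"16 GB HBM2:  ~${16 * hbm2_per_gb:.0f}")
print(f"12 GB GDDR6: ~${12 * gddr6_per_gb:.0f}")
```

Note that HBM also adds interposer and packaging costs on top of the chips themselves, which the GamersNexus piece above goes into; the gap is wider than the raw per-GB figures suggest.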
Stop doing this to your cards. Perhaps they were forced into it to lower power, as they couldn't build it with other memory (would the watts be too high?). But 7nm with watt issues vs. 12nm competition that has far more features? You built it wrong.
5000 cards is 4500 more cards than they expect to sell.
The opposite of Ferrari, where for every limited edition they make exactly one fewer than they expect to sell.
You never cease to amaze me with your tripe, nobodyspecial xD
Radeon VII has roughly the same teraflops as the 2080 Ti.
And while AMD reportedly reduced the FP64 capability, it's still twice as much as previous Vega GPUs.
This GPU is a monster compute card with higher clocks.
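The FP32 throughput claim checks out on paper. A minimal sketch, assuming the publicly listed shader counts and boost clocks (3840 SPs at ~1.75 GHz for Radeon VII, 4352 CUDA cores at ~1.545 GHz for the 2080 Ti) and the usual 2 FLOPs per clock per shader (FMA):

```python
# Peak FP32 throughput = shaders * 2 FLOPs/clock (FMA) * clock (GHz) / 1000.
# Clock figures are approximate boost clocks from public spec sheets.
def tflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_ghz / 1000.0

radeon_vii = tflops(3840, 1.75)   # ~13.4 TFLOPS
rtx_2080ti = tflops(4352, 1.545)  # ~13.4 TFLOPS
print(f"Radeon VII: {radeon_vii:.1f} TFLOPS, RTX 2080 Ti: {rtx_2080ti:.1f} TFLOPS")
```

Of course peak TFLOPS says nothing about delivered gaming performance, which is exactly why the two cards can match on paper and still land in different tiers.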
Though, we don't know if the TDP will be finalized at 300W.
And even if it is, remember that AMD doesn't usually optimise voltages on their GPUs, which leaves room for personal tinkering: drop the voltage and you can vastly improve efficiency.
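The undervolting point follows from first principles: dynamic power scales roughly with voltage squared times frequency. A quick illustration with made-up voltages (these are not measured Radeon VII values):

```python
# Dynamic power ~ C * V^2 * f, so at a fixed clock, relative power
# scales with (V_new / V_stock)^2. Voltages below are illustrative only.
def rel_power(v_new, v_stock, f_ratio=1.0):
    return (v_new / v_stock) ** 2 * f_ratio

saving = 1 - rel_power(1.00, 1.10)  # drop 1.10 V to 1.00 V, same clock
print(f"~{saving:.0%} less dynamic power at the same clock")
```

This is why even a modest undervolt can pull a card like this well under its rated TDP without losing performance, if the silicon has the headroom.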
Also, AMD has plans to use that extra compute to do exactly what DLSS from NV does.
Plus, it could be that AMD is saving the 7nm process for mobile when it comes to Vega (although why on Earth they would opt for this when the same design on GloFo's process was similarly limiting is unclear).
Just wait and see for final reviews.
To be fair, I too wondered whether it would be more attractive with less memory and, with it, a lower price point. Otherwise the 2080 seems like the sounder choice for gamers, and I'm not really happy about the prices.
I honestly have to agree that the Radeon VII is very expensive and competes with an already overpriced product, and I do in fact agree with a lot of the things you said.

Though there definitely seems to be a lot of untapped compute power available, looking at the +62% OpenCL performance, and AMD is already working on competing technologies for DLSS and, to a lesser degree, ray tracing.

In many ways this GPU could be the better long-term buy than an RTX 2080, since the bonus RT feature you get is not in a good enough state in my opinion; I'm quite sure Nvidia will release a 7nm RTX card that instantly makes those GPUs obsolete, since the 20 series was their first consumer attempt at ray tracing.

So in conclusion, I do agree that the Radeon VII seems somewhat out of place, especially with the expensive 16GB of HBM2 memory, but I also understand why AMD did it; if you watch AdoredTV (Jim) and Actually Hardware Overclocking's (Buildzoid) content, then things do make more sense.

I'm eagerly awaiting Navi and hoping for the best, but the Radeon VII does seem like a better buy than a 2080 if they are priced similarly, though its efficiency, at least in gaming tasks, is definitely behind Nvidia. Luckily I'm not really fazed by that.
I think what people like nobodyspecial seem to forget is that GPUs are used for much more than just gaming these days.
Didn't someone say that this card is aimed at prosumer devs, e.g. a lot cheaper than the MI50 on which it's based, but also cut down in compute so it doesn't cannibalize that card? Sorry, no: aimed at the datacentre for virtual gaming, the 16GB makes sense. Not really a gaming card unless you're a son of Croesus.