Noob question, people:
But is this MSI Gaming board better than a 990 then, because it has all the bells and whistles of a 990 board??
The only real thing that separates 970 from 990/X/FX boards is the number of PCI-E lanes on the chipset. Apart from that it is practically the same chipset.
Although MSI are kind of late to the show, their Gaming range really has decent features, but they should have brought this out a while back.
"If at first you don't succeed; call it version 1.0" ||| "I'm not interrupting you, I'm putting our conversation in full-duplex mode" ||| "The problem with UDP joke: I don't get half of them"
"I’d tell you the one about the CIDR block, but you’re too classy" ||| "There’s no place like 127.0.0.1" ||| "I made an NTP joke once. The timing was perfect."
"In high society, TCP is more welcome than UDP. At least it knows a proper handshake."
Looks like TR really is mucking around with their test benches now to make the G3258 look better:
http://techreport.com/review/26735/o...on-processor/3
Not only did they magically get their G3258 to 4.8GHz (multiple people on OcUK are lucky to hit even 4.6GHz with decent motherboards), it seems their magical G3258 is now Core i5 level for video encoding.
Going from their A8-7600 review, TR is using the x264 r2334 encoder:
http://techreport.com/review/25908/a...sor-reviewed/4
A lot of the x264 benchmarks have not been updated with the newer AVX2-supporting encoders, so TR updated to a newer one.
The only problem is that Xbitlabs also tested with newer encoders.
They also seem to deliberately avoid Welcome to the Jungle in their Crysis 3 testing, which is the level that really pushes any dual-core CPU, and they dropped the settings to medium, which probably lowers the CPU load too. They also ignore their own 16.7ms frame-time results!!
FFS, Crysis 3 is the reason I got a Xeon E3, since my Core i3 started to struggle on high settings, especially in MP matches.
Someone else has commented on that as well, and the reviewer has ignored it.
It really does seem TR is desperately trying to push their single-threaded mantra, in a time of x86 consoles, Mantle and DX12. It really makes me wonder about them now, and whether their new funding model is leading to issues.
PS:
Time to start a new video encoding thread methinks.
Might have to look at the x264 encoder and see if magical dual cores are suddenly excellent in it.
Edit!!
I posted a comment asking about the x264 results and the Crysis 3 results. Going from the last time I commented, I expect deflections and negative ratings from the sycophants.
Second Edit!!
It looks like they say they are excited to recommend more budget builds based around the Pentium.
Kerching.
Last edited by CAT-THE-FIFTH; 09-07-2014 at 05:18 AM.
I'm all for a new x264 thread! When we decide on the rules I have access to a few systems to help get the ball rolling.
Encode speeds have changed a fair bit from our original thread as the codec has been updated.
We'd also have to find out if the software we use would try to use any hardware acceleration as it can affect quality and speed, skewing results.
It seems quite a few people have commented on their settings, and he is already in deflection mode. He still has not answered me about his weird video encoding results, which are Bit-tech level weird, or even the findings from the other websites I have linked to.

For all Scott Wasson's knowledge on the technical side of benchmarking, and the great momentum he started, he really does not seem that well clued up about the simple basics of reviewing, or about realising when some of his results look a tad off. I am still LOLing at his board-power-rating = card-power-consumption confusion, especially when his own results said different, and other simply weird things where he ignores results from his own reviews. Plus, when it is brought up, he quietly sidesteps it, just like the Bit-tech reviewers have done several times.
The adage "garbage in, garbage out" comes to mind, unless of course it's another case of a review site pre-judging something before the review is published.
The state of English-language reviewing in the US is terrible TBH. It is fast heading towards hi-fi and game reviewing levels.
Plus, I am reading the G3258 thread over on overclock.net: a lot of people are looking at the G3258 and getting the wrong impression! They are misleading people with their choice of settings and test location. It grates on me, as I have played it in both SP and MP mode and it is an engine that can easily push 4 to 8 threads.
It makes me wonder whether he actually plays half the games he benchmarks, and this is an issue in its own right: reviewers need to understand the games they are benchmarking. He also seems entirely clueless about what other sites have tested with the game, which is just as bad.
Right, watercooled, what we need to do is either use the TechARP x264 Benchmark or x264 FHD Benchmark 1.0.1, but we need to figure out how people have managed to update the internal x264 encoder to a later one.
I'd honestly like to see how an A10-7850K, which barely gets beyond its 3.7GHz Turbo, is less than 20% slower than an FX-8350 running at 4GHz, with twice the number of cores and L3 cache too, in a benchmark which shows decent thread scaling.
Xbitlabs puts it at around 90% in favour of the FX-8350, which is far more believable.
Last edited by CAT-THE-FIFTH; 09-07-2014 at 06:11 PM.
We could write our own TechARP-like script for a recent version of x264, although the stock one may be sufficient with just a swapped-out exe. Generally, the default settings are sensible with x264 (Handbrake high profile is pretty much default), but IIRC the compiled Win32 binaries want raw/YUV input, and I can't remember how the TechARP benchmark handles decoding; I'll download it in a sec and have a look.
We could just use Handbrake again, but I don't know off the top of my head which x264 version it's using, and some people may not have it installed, and then there was the issue of having to look through the logs to find the FPS.
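On the log-scraping point, a minimal sketch of what pulling the FPS out of a HandBrake log could look like. The exact wording of the summary line varies between HandBrake versions, so the pattern and the sample log below are assumptions for illustration, not the definitive format:

```python
import re

# HandBrakeCLI prints a summary line at the end of an encode; the exact
# wording varies by version, so this pattern is an assumption based on a
# line like: "work: average encoding speed for job is 45.200000 fps"
FPS_RE = re.compile(r"average encoding speed for job is ([\d.]+) fps")

def extract_fps(log_text: str):
    """Return the average encode FPS from a HandBrake log, or None."""
    match = FPS_RE.search(log_text)
    return float(match.group(1)) if match else None

sample_log = (
    "[10:15:01] starting job\n"
    "[10:18:44] work: average encoding speed for job is 45.200000 fps\n"
)
print(extract_fps(sample_log))  # 45.2
```

Something like this would at least save everyone digging through the logs by hand, assuming we agree on which log line to trust.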
Edit: Just downloaded the benchmark to check, and it relies on avisynth to decode the source and pass the decoded stream to x264. So even with that, there's still the need to install something. I mean we could test using a raw YUV file, but I don't think requiring everyone to download a 10GB source file is too practical, or that representative of general usage.
Last edited by watercooled; 09-07-2014 at 06:56 PM.
We need something that can take advantage of AVX2 and FMA.
We could always break it up into two sets of benchmarks though - one for HandBrake and one for x264 if we can get the latter to support AVX2 and FMA.
AFAIK x264 doesn't use FMA, nor can I see it being too useful - as I understand it, float instructions are generally avoided in codecs (at least in certain parts) as there's a possibility for rounding errors causing issues with data integrity, and especially for porting code to other architectures which may process FP differently and end up with different results. The heavy reliance on integer is likely why Bulldozer still performs excellently vs even far more expensive CPUs from Intel.
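A trivial illustration of the rounding point (nothing to do with x264's actual code): float addition is not associative, so any re-ordering of operations by a different compiler or architecture can change the result, whereas integer arithmetic stays bit-exact.

```python
# Float addition is not associative, so re-ordering operations (as a
# different compiler or architecture might) can change the result:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: 0.6000000000000001 vs 0.6

# Integer arithmetic has no such problem, which is one reason codecs keep
# the bit-exact parts of the pipeline in integer/fixed-point maths:
x = (1 + 2) + 3
y = 1 + (2 + 3)
print(x == y)  # True
```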
http://forum.doom9.org/showthread.ph...50#post1588150
https://mailman.videolan.org/piperma...hment-0001.pdf
AVX2 can offer speed-ups in some areas, it seems, but because a lot of the code isn't applicable to AVX2 optimisations, the overall speedup may not be huge. Reading between the lines, I think that pdf was written before the release of Haswell, but I'll try to see if I can find any recent comparisons.
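One way to check what a given build is actually using: x264 logs a "using cpu capabilities:" line at the start of an encode, which we could grep out of each submitted run. The sample line below is an assumption for illustration; the exact capability names depend on the build and CPU:

```python
# x264 prints a line like "x264 [info]: using cpu capabilities: ..." at
# the start of an encode. The sample below is an assumed example; the
# actual capability list depends on the build and the CPU it runs on.
def detected_caps(log_text: str) -> set:
    """Return the set of CPU capability names x264 reported, if any."""
    for line in log_text.splitlines():
        if "using cpu capabilities:" in line:
            return set(line.split("using cpu capabilities:")[1].split())
    return set()

sample = "x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2"
caps = detected_caps(sample)
print("AVX2" in caps)  # True
```

That would settle whether a submitted result actually ran the AVX2 paths or fell back to older SIMD.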
Last edited by watercooled; 10-07-2014 at 12:22 PM.
Well, we need at least something that can use AVX or AVX2, since that is what the modded x264 benchmarks from Xbitlabs and TR are using, while others use HB. Even if the HB thread is the bigger one, I still want to test the TechARP x264 Benchmark or x264 FHD Benchmark 1.0.1 in some way, or at least something comparable, as quite a few sites use them. Plus, TBH, I am just intrigued by the TR results, as much as by the Bit-tech HB results.
If it means I need to download the 10GB file I will do it, but I only have a Xeon E3 1220 to test on. I might be getting an IB CPU at some point (hopefully a Core i7), and I am sure I can harangue one of my mates with an FX6300 and X4 760K, and another mate might be getting a Haswell Core i5 soon (I will try to harangue him too).
Going by the TR results, an FX6300 should be quite close to an X4 760K in the x264 test they are running.
Last edited by CAT-THE-FIFTH; 10-07-2014 at 12:31 PM.
Well we could literally just use the TechARP benchmark with a swapped out x264 binary, I assumed you just wanted to use the scripted benchmark for simplicity i.e. without having to install anything, which is why I pointed out the avisynth dependency.
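For a home-grown TechARP-style driver, the timing side is simple enough; a sketch under the assumption that we time the whole encoder process and know the clip's frame count. The x264 command line, paths and frame count in the comment are placeholders, not a tested invocation:

```python
import subprocess
import time

def encode_fps(cmd, total_frames: int) -> float:
    """Time a single encoder run and return frames encoded per second."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return total_frames / (time.perf_counter() - start)

# Hypothetical invocation -- the paths, options and frame count below are
# placeholders; a swapped-in x264.exe would still need AviSynth or a raw
# YUV source to feed it, as discussed above:
# fps = encode_fps(["x264.exe", "--preset", "medium",
#                   "-o", "out.mkv", "source.avs"], 1440)
```

Timing the whole process slightly undercounts FPS versus x264's own reported figure (it includes startup and muxing), but it is at least consistent across systems.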
I'm not too sure what to use as another benchmark - video encoding is something a lot of people can relate to so the numbers are directly meaningful. y-cruncher is a project I've been following for years now and it's a really impressive piece of software, but while it's a really stressful benchmark which uses lots of modern instructions where applicable, it's not really an everyday task to calculate constants to billions of decimal places.
Well, we could do both: HB and the TechARP benchmark. There should not be much difference, but it will be interesting if there is any.
Also, PCPer managed to get both the A10 IGP and a GTX 580 running under LuxMark:
http://www.pcper.com/news/Processors...Standalone-GPU
It managed to boost the score a decent amount.
Well, one conclusion we could draw from the last HB benchmark run was that the results of all three clips we used seem to agree, even the slightly lower-res clip.
There are a few outliers, but even then the difference is pretty small; in fact I'd kind of expect a bit *more* randomness if anything, considering all the variables across testing systems, different software installed, different memory configurations, etc.
My point being, several tests at the same resolution may be somewhat redundant, so we could fall back to just the one clip for simplicity, or perhaps use different resolution/framerate clips, or different encode settings, to see what sort of effect that has?
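To make the "do the clips agree?" check concrete, a quick sketch: normalise each clip's FPS against a reference clip per system, and if the ratios line up across systems, the extra same-resolution clips are redundant. The system names and numbers below are invented placeholders, not results from our thread:

```python
# Per-clip average FPS for a couple of hypothetical systems -- these
# numbers are invented placeholders purely to illustrate the check:
results = {
    "system_a": {"clip_720p": 60.1, "clip_1080p_1": 24.3, "clip_1080p_2": 23.8},
    "system_b": {"clip_720p": 91.0, "clip_1080p_1": 36.9, "clip_1080p_2": 36.2},
}

def relative_spread(per_clip: dict, ref_clip: str) -> dict:
    """Normalise each clip's FPS against a reference clip for one system."""
    ref = per_clip[ref_clip]
    return {clip: fps / ref for clip, fps in per_clip.items()}

# If these normalised ratios are similar from system to system, the clips
# rank CPUs the same way and one clip per resolution would do:
for name, per_clip in results.items():
    print(name, relative_spread(per_clip, "clip_1080p_1"))
```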
It seems Anandtech is actually starting to use HB, and bothers to say which files they are using too:
http://www.anandtech.com/show/8227/d...and-i5-4690k/3
I can start to see what TR might have done, i.e. pulled a Bit-tech. They have probably taken a low-resolution file and converted it to something even lower resolution.
Also, it seems the FX-8350 does very well for x265 conversion too.
Yeah I've noticed Anandtech seem to be improving on a lot of their 'real world' benchmarks and being more open about settings used.
Another example is their games benchmarks which they now run at more sensible settings.
After all, we've moved on from the days of playing AAA PC games at 640x480 and transcoding videos to similar or lower resolutions. One might claim it stresses a CPU differently, but in reality that makes it more of a synthetic benchmark and not representative of the real-world performance they're trying to assess. I mean, really, who cares if CPU x can transcode a single macroblock video twice as fast as CPU y; it's not going to be used for that!
Last edited by watercooled; 11-07-2014 at 07:38 PM.
AMD Publishes Open-Source Linux HSA Kernel Driver:
http://www.phoronix.com/scan.php?pag...tem&px=MTczOTY
A10 7800 spotted in Japan:
http://akiba-pc.watch.impress.co.jp/...03_656244.html
I've just spotted something interesting on this article: http://www.extremetech.com/computing...come-to-market
The bit about Atom using x86 instructions internally - I wasn't aware of that (well I hadn't given it much thought TBH)! I can't seem to find any more details on it though, I wonder if it applies to old or new Atom, or both?