I think it's quite interesting... maybe they win over Nintendo as well? Who knows.
"it is reasonable to assume that AMD is at least pondering over moving its Ryzen processors to a hybrid core architecture – like Intel's Alder Lake – in the coming months / years."
- If the patent was filed nearly two years ago, it is safe to assume the design work is already well advanced. I think AMD will have a big/little-core CPU in the next generation of chips, and I suspect laptop or APU parts are a good bet for it.
Same goes for the earlier comment (not yours) about swapping workloads from a CPU to a GPU. I presume they meant something like migrating graphics from the IGP to the discrete GPU and back as required, not moving an instruction stream from a CPU to a GPU with a completely different programming model, let alone instruction set?
It was your comment; sorry, that was lazy of me not to check!
All things are possible, but can you translate the work well enough to make use of the GPU's facilities that it actually runs faster, without using so much silicon that you would be better off putting that silicon to use elsewhere?
If the GPU were basically a Larrabee setup using AVX for rendering then it would be easier, but probably still not actually that useful, and we know how well Larrabee ended.
I do agree that they are alien architectures, and without the code paths being optimised it could end up trying to shovel a single-threaded task through a parallelised architecture, for example.
But then again, we are looking at this traditionally. Intel is developing oneAPI, which allows hardware-agnostic development: if you write some software that does a lot of maths, it will (read: should) run on a CPU, GPU, dedicated ASIC, etc., just at different speeds. Granted, oneAPI sits way above this kind of hardware thread management. Maybe AMD is trying to take it a step lower and do high-level observation of thread behaviour.
That sounds well into the realm of dynamic code translation, which is complex enough that it has always been done in software.
BTW, I think you are mixing up ASIC (application-specific integrated circuit, which covers network switch chips, modems, all sorts of stuff) and custom/specialised CPU instruction extensions.
... another thought on this: I wonder what is in this patent that isn't already done by Nvidia in some of their old Tegra parts, which ISTR had transparent thread migration as part of their clock scaling.
As I understand it (correct me if I'm wrong), AVX2 etc. are instruction sets, but on Intel they are colloquially said to run on silicon specialised for those specific instructions. From what I have learnt, ASIC is a general term for specialised silicon designed to perform one function, or something within a highly repeatable scope. E.g. AES-NI is an instruction set that is normally coupled with extra hardware on the CPU designed to accelerate part or all of it.
I admit I am being loose with the ASIC term, but I don't believe I am mixing up the two; I am aware that specialised instructions and specialised silicon occupy very different areas (though, as mentioned above, the two often seem to be co-developed).
We've seen ridiculous advancements in code compilation and execution in the past 40 years; maybe this is the next one? Pure conjecture. I don't think AMD have cracked that walnut, but it is an interesting thought experiment.
I think that is being pretty liberal with the ASIC vs CPU distinction. Whether it is tightly integrated or not doesn't make it not an application-specific integrated circuit; it just adds an extra "integrated" in front of the ASIC abbreviation.
But then again, what counts as the "minimum expectation" of a CPU has grown over time; maybe ASIC-like AVX accelerators do become part of the minimum expected CPU makeup.