Why do most people seem to refer to the entire Core 2 Duo range as Conroe?
Conroe is the internal codename for Core 2 Duo, and as such it's the only name we knew it by for several months. The name Core 2 Duo is both very new and, in my opinion, quite a mouthful ("2 Duo"??), not to mention very similar to Core Duo. It's simpler to keep referring to it as Conroe.
(You might also hear about internal names for other processors such as San Diego, Venice etc.)
Another reason is that companies change their chip cores but don't change the market name - there are lots of different cores that make up Pentium 4 and Athlon 64 chips, and calling them by their internal name helps us identify which core we're actually talking about, as there can be some quite large differences.
edit: or do you mean why we say Conroe rather than Allendale for the 2MB-cache one? Dunno. We're misinformed/lazy?
Last edited by kalniel; 13-07-2006 at 10:52 AM.
(in answer to kalniel) Yep. That's what I was getting at! Of the 7 products currently planned, 5 are Allendale and only 2 are Conroe.
I think the name has just kinda stuck.
Just call Core 2 Duo "2D" from now on, yeah?
As Intel have already included the number of cores in the name of the product, I wonder if they'll have the sense to keep doing this as more cores are added (e.g. core 4, core 16). Or are we more likely to be subjected to bizarre naming (e.g. QuadraCore, HexCore).
their budget line will be single core, and will be core2solo
then the dual cores are core2duo
so Kentsfield should really be called core2quattro, which will mean a big corporate battle with Audi
Ah yes, my mistake. Of course the '2' in Core 2 Duo refers to the core version, not to the number of cores.
I assume there will be an optimum number of cores beyond which there will be relatively little gain for 99% of users as it will be too much effort to use any additional cores efficiently. I'd guess this number would be quite low (4 or 8?). What then? It seems we're already approaching something of a clock speed limit. Will some quantum leap in technology be required to keep Moore's Law going?
No more than there's an optimum speed in Hz beyond which there's relatively little gain - once we've cracked parallelisation, that is.
I can see us heading for microcores - so instead of having a range of computers going from 1-3GHz, you'd choose between, say, 2 and 16 cores. More cores would give more performance (which isn't the case today, though).
Once the scaling's cracked, it's far easier to scale up just by adding more cores than it is to keep running the same core at ever-increasing speeds. So I think Moore's law is going to be safe for quite a while.
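The "once we've cracked parallelisation" caveat can be made concrete with Amdahl's law, which caps the speedup from n cores by the serial fraction of the work. A minimal sketch of that formula (not anything posted in the thread):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work parallelises
    perfectly and the remaining (1 - p) stays serial (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel program tops out well short of 100x on 100 cores:
print(round(amdahl_speedup(0.95, 100), 1))  # ~16.8
```

The serial fraction dominates quickly, which is why driving p towards 1 matters more than piling on cores.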
Intel's new Core/Core 2 naming system is rather bizarre to say the least; within the range we will have Merom and Conroe, so when configuring a laptop you'd have to make sure you weren't being shipped the desktop equivalent processor (as desktop replacements often do). I definitely wish Intel would step away from marketing names such as vPro/Core/Viiv and go back to naming the processors they make, not the kind of PCs they'll be in...
(in answer to kalniel) So you're saying the clever bit we're missing at the moment is the embedded controller and/or software toolchain that takes full advantage of the n cores available, so that any specific application runs 100 (ish) times faster on a 100-core system than on a single-core system?
They are always claiming amazing new PCB and transistor technology, like having a transistor made of a mix of air and special detergent molecules... I think there's quite a way to go yet, and then there's fibre optics and quantum switches.
Do multi-core chips count in terms of Moore's law anyway? There have always been supercomputers which push ordinary chips to extreme functions and get ignored in Moore's law, so why is using more than one core any different to that?
(in answer to DerbyJon) Yeah - we already parallelise extremely well at the low level - graphics cards achieve their speed more through parallelising tasks than through raw speed (think of the number of pipelines), and there is some of that in the depths of CPUs as well, but not on the same scale. If we can achieve a much higher level of parallelisation through controllers/software then yep, Moore's law is safe.
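As a toy illustration of the higher-level, software-driven parallelisation being discussed (the workload here is made up; only Python's standard library is used): an embarrassingly parallel job gives the same answer whether run serially or spread over worker processes, but the parallel version can use all the cores.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Naive prime count below limit - a stand-in CPU-bound task."""
    return sum(
        all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(2, limit)
    )

if __name__ == "__main__":
    chunks = [5_000] * 4  # four independent pieces of work
    serial = sum(count_primes(c) for c in chunks)
    # Same chunks, spread over worker processes (roughly one per core):
    with ProcessPoolExecutor() as pool:
        parallel = sum(pool.map(count_primes, chunks))
    assert serial == parallel  # same answer, potentially ~4x faster
```

The hard part in general software is that most tasks aren't this cleanly independent, which is exactly the gap the thread is pointing at.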
Once we have a large number of powerful cores, will the component count of a PC reduce dramatically? Given enough spare grunt, all the graphics, sound, networking etc could be replaced by software. Maybe just need CPU, non-volatile and volatile storage, connectors and power?
Actually I think volatile storage may be a thing of the past in the not-too-distant future - I can only see it remaining in very small localised caches. But otherwise no, I don't think it's going to go that far regarding software replacing hardware. It's simply too cheap, quick and energy efficient to have a well-defined function on its own hardware. The software approach would be most useful where you have a less well-defined or changing function, but it would be too wasteful to dedicate expensive flexible chip estate to menial tasks.
I reckon the external devices will be ditched once sufficient spare cycles are available. I can't think of a single argument for them to stay. Each one consists of specialised hardware for implementing a particular version of a very narrow task. Too inflexible and very inefficient compared with concentrating all resources in one place.
Inflexible, yes. But inefficient? Remember the hardware designs, masks etc. are already done. They just have to stamp out as many as needed. The only cost is the material cost, which as they are implementing a narrow task is usually quite efficient.
Using a centralised resource to provide the same function is only possible if there are spare cycles, as you say, but what about tasks which are more than just very temporary? You don't want your network to cut out just because you've thrown a hard sum at your CPU. So you'd have to reserve resources for these tasks all the time. It doesn't matter which bit actually gets reserved; the point is you're still reserving something that was designed to do more than you're actually using, therefore it's inefficient. If you're going to reserve something, you might as well make it the smallest, most efficient thing possible for the task.