Page 1 of 2
Results 1 to 16 of 17

Thread: Conroe/Allendale pedantry

  1. #1
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts

    Conroe/Allendale pedantry

    Why do most people seem to refer to the entire Core 2 Duo range as Conroe?

  2. #2
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    28,997
    Thanks
    1,473
    Thanked
    2,904 times in 2,353 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte X58A UD3R rev 2
      • CPU:
      • Intel Xeon X5680
      • Memory:
      • 12GB DDR3 2000
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell U2311H
      • Internet:
      • O2 8mbps
    Conroe is the internal codename for Core 2 Duo, and as such it's the only name we knew it by for several months. The name Core 2 Duo is both very new and, in my opinion, quite a mouthful (2 Duo??), not to mention very similar to Core Duo. Simpler to keep referring to it as Conroe.

    (You might also hear about internal names for other processors such as San Diego, Venice etc.)

    Another reason is that companies change their chip cores but don't change the market name - there are lots of different cores that make up Pentium 4 and Athlon 64 chips, and calling them by their internal name helps us identify which core we're actually talking about, as there can be some quite large differences.

    edit: or do you mean why do we say Conroe rather than Allendale for the 2MB cache one? Dunno. We're misinformed/lazy?
    Last edited by kalniel; 13-07-2006 at 10:52 AM.

  3. #3
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by kalniel
    edit: or do you mean why do we say Conroe rather than Allendale for the 2MB cache one? Dunno. We're misinformed/lazy?
    Yep. That's what I was getting at! Of the 7 products currently planned, 5 are Allendale, and only 2 are Conroe.

  4. #4
    lazy student nvening's Avatar
    Join Date
    Jan 2005
    Location
    London
    Posts
    4,656
    Thanks
    196
    Thanked
    31 times in 30 posts
    I think the name has just kinda stuck.

    Just call Core 2 Duo "2D" from now on, yeah?

  5. #5
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    As Intel have already included the number of cores in the name of the product, I wonder if they'll have the sense to keep doing this as more cores are added (e.g. Core 4, Core 16), or whether we're more likely to be subjected to bizarre naming (e.g. QuadraCore, HexCore).

  6. #6
    Senior Member
    Join Date
    May 2006
    Posts
    305
    Thanks
    0
    Thanked
    0 times in 0 posts
    Their budget line will be single-core, and will be Core 2 Solo,

    then the dual cores are Core 2 Duo,

    so Kentsfield should really be called Core 2 Quattro, which would mean a big corporate battle with Audi.

  7. #7
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    Ah yes, my mistake. Of course the '2' in Core 2 Duo refers to the core version, not to the number of cores.

    I assume there will be an optimum number of cores beyond which there will be relatively little gain for 99% of users as it will be too much effort to use any additional cores efficiently. I'd guess this number would be quite low (4 or 8?). What then? It seems we're already approaching something of a clock speed limit. Will some quantum leap in technology be required to keep Moore's Law going?

  8. #8
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    28,997
    Thanks
    1,473
    Thanked
    2,904 times in 2,353 posts
    No more than there's an optimum speed in Hz beyond which there's relatively little gain - once we've cracked parallelisation, that is.

    I can see us heading for microcores - so instead of having a range of computers going from 1-3GHz, you'd choose between, say, 2 and 16 cores. More cores would give more performance (that isn't the case today, though).

    Once the scaling's cracked, it's far easier to scale up just by adding more cores than it is to run the same core at ever-increasing speeds. So I think Moore's law is going to be safe for quite a while.

  9. #9
    Xcelsion... In Disguise. Xaneden's Avatar
    Join Date
    Nov 2004
    Location
    United Kingdom
    Posts
    1,699
    Thanks
    0
    Thanked
    0 times in 0 posts
    Intel's new Core/Core 2 naming system is rather bizarre, to say the least; within the range we will have Merom and Conroe, so when configuring a laptop you'd have to make sure you weren't being shipped the desktop-equivalent processor (as desktop replacements often are). I definitely wish Intel would step away from marketing names such as vPro/Core/Viiv and go back to naming the processors they make, not the kind of PCs they'll be in...

  10. #10
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    (in answer to kalniel) So you're saying the clever bit we're missing at the moment is the embedded controller, and/or software toolchain, that takes full advantage of the n cores available so that any specific application runs 100 (ish) times faster on a 100 core system than it does on a single core system?

  11. #11
    Senior Member
    Join Date
    May 2006
    Posts
    305
    Thanks
    0
    Thanked
    0 times in 0 posts
    They are always claiming amazing new PCB and transistor technology, like having a transistor made of a mix of air and special detergent molecules... I think there's quite a way to go yet, and then there's fibre optics and quantum switches.

    Do multi-core chips count in terms of Moore's law anyway? There have always been supercomputers which push ordinary chips to extreme functions and get ignored in Moore's law, so why is using more than one core any different to that?
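    For what it's worth, Moore's law as originally stated counts transistors per chip rather than clock speed or chip count, so a multi-core die does count: the extra cores are simply more transistors on the same piece of silicon. A rough sketch of the projection (the ~291 million starting figure is roughly a 2006 dual-core die, and the two-year doubling period is the commonly quoted one - both are illustrative assumptions, not exact data):

    ```python
    # Rough Moore's-law projection: transistor count doubling every ~2 years.
    # Both the starting count and the doubling period are assumptions for
    # illustration, not measured data.

    def projected_transistors(start_count: int, years: float,
                              doubling_years: float = 2.0) -> int:
        """Return the projected transistor count after `years` years."""
        return int(start_count * 2 ** (years / doubling_years))

    base = 291_000_000  # roughly a 2006 dual-core die
    for years in (0, 2, 4, 8):
        print(years, projected_transistors(base, years))
    ```

    On that reading, doubling the core count on one die keeps the curve going just as well as doubling the transistor budget of a single core would.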

  12. #12
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    28,997
    Thanks
    1,473
    Thanked
    2,904 times in 2,353 posts
    Quote Originally Posted by DerbyJon
    (in answer to kalniel) So you're saying the clever bit we're missing at the moment is the embedded controller, and/or software toolchain, that takes full advantage of the n cores available so that any specific application runs 100 (ish) times faster on a 100 core system than it does on a single core system?
    Yeah - we already parallelise extremely well at the low level - graphics cards achieve their speed more through parallelising tasks than through raw speed (think of the number of pipelines), and there is some of that in the depths of CPUs as well, but it's not on the same scale. If we can achieve a much higher level of parallelisation through controllers/software then yep, Moore's law is safe.
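    The limit behind this exchange can be put in numbers with Amdahl's law: if a fraction p of a program parallelises and the rest stays serial, the speedup on n cores is 1 / ((1 - p) + p/n). Even a 95%-parallel program falls well short of 100x on 100 cores. A minimal sketch (the 0.95 figure is an arbitrary illustration):

    ```python
    # Amdahl's law: speedup on n cores when a fraction p of the work parallelises.

    def amdahl_speedup(p: float, n: int) -> float:
        """Speedup = 1 / ((1 - p) + p / n)."""
        return 1.0 / ((1.0 - p) + p / n)

    # A program that is 95% parallel gets nowhere near 100x on 100 cores:
    print(round(amdahl_speedup(0.95, 100), 1))  # ~16.8x

    # Even with infinitely many cores, the serial 5% caps the speedup at 20x:
    print(round(1 / (1 - 0.95), 1))  # 20.0
    ```

    Which is exactly why cracking "much higher level parallelisation" - driving the serial fraction towards zero - matters more than the raw core count.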

  13. #13
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    Once we have a large number of powerful cores, will the component count of a PC reduce dramatically? Given enough spare grunt, all the graphics, sound, networking etc could be replaced by software. Maybe just need CPU, non-volatile and volatile storage, connectors and power?

  14. #14
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    28,997
    Thanks
    1,473
    Thanked
    2,904 times in 2,353 posts
    Actually, I think volatile storage may be a thing of the past in the not-too-distant future - I can only see it remaining in very small localised caches. But otherwise no, I don't think it's going to go that far regarding software replacing hardware. It's simply too cheap, quick and energy-efficient to have a well-defined function on its own hardware. The software approach would be most useful where you have a less well-defined or changing function, but it would be too wasteful to dedicate expensive flexible chip estate to menial tasks.

  15. #15
    Member
    Join Date
    Jul 2006
    Posts
    135
    Thanks
    0
    Thanked
    0 times in 0 posts
    I reckon the external devices will be ditched once sufficient spare cycles are available. I can't think of a single argument for them to stay. Each one consists of specialised hardware for implementing a particular version of a very narrow task. Too inflexible and very inefficient compared with concentrating all resources in one place.

  16. #16
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    28,997
    Thanks
    1,473
    Thanked
    2,904 times in 2,353 posts
    Inflexible, yes. But inefficient? Remember, the hardware designs, masks etc. are already done. They just have to stamp out as many as needed. The only cost is the material cost, which, as they are implementing a narrow task, is usually quite low.

    Using a centralised resource to provide the same function is only possible if there are spare cycles, as you say, but what about tasks which are more than just very temporary? You don't want your network to cut out just because you've thrown a hard sum at your CPU. So you'd have to reserve resources for these tasks all the time. It doesn't matter which bit actually gets reserved; the point is you are still reserving something that was designed to do more than you're actually using, so it's inefficient. If you're going to reserve something, you might as well have it be the smallest, most efficient thing possible for the task.
