
Thread: What's the difference between a CPU & GPU?

  1. #1
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts

    What's the difference between a CPU & GPU?

    I know what a CPU and GPU is, but I'm just trying to get a better understanding of the differences in the processors...

    For instance, why is a graphics card better than a second CPU that is delegated by the OS to perform all graphics-related functions?

    Is there a difference in the instructions like 3DNow! and stuff? Do they somehow have a better relationship with DirectX and OpenGL?

    I realize that modern graphics cards use onboard GDDR3 RAM, rather than the shared system DDR RAM, but I'm more interested in the actual Processing Unit, not how much or what kind of RAM it has access to.

    Thanks

  2. #2
YUKIKAZE arthurleung
    Join Date
    Feb 2005
    Location
    Aberdeen
    Posts
    3,280
    Thanks
    8
    Thanked
    88 times in 83 posts
    • arthurleung's system
      • Motherboard:
      • Asus P5E (Rampage Formula 0902)
      • CPU:
      • Intel Core2Quad Q9550 3.6Ghz 1.2V
      • Memory:
      • A-Data DDR2-800 2x2GB CL4
      • Storage:
      • 4x1TB WD1000FYPS @ RAID5 3Ware 9500S-8 / 3x 1TB Samsung Ecogreen F2
      • Graphics card(s):
      • GeCube HD4870 512MB
      • PSU:
      • Corsair VX450
      • Case:
      • Antec P180
      • Operating System:
      • Windows Server 2008 Standard
      • Monitor(s):
      • Dell Ultrasharp 2709W + 2001FP
      • Internet:
      • Be*Unlimited 20Mbps
A CPU is general purpose (i.e. it can run any instructions).
A GPU is not; it's only for graphics, although it's getting closer to general purpose now, e.g. H.264 hardware decoding and physics calculation(?)

And a GPU has a lot more transistors than a CPU. But of course those GPU transistors run a lot slower than CPU transistors and are geared toward parallel processing (e.g. 24 pixel shaders).

Given it's all about business, I doubt AMD or Intel will step into the graphics field to compete with Nvidia/ATi, and vice versa.

It's not that integrating the CPU and GPU would be less efficient or whatever, but you need one company that's good at both to get it integrated.
    Workstation 1: Intel i7 950 @ 3.8Ghz / X58 / 12GB DDR3-1600 / HD4870 512MB / Antec P180
    Workstation 2: Intel C2Q Q9550 @ 3.6Ghz / X38 / 4GB DDR2-800 / 8400GS 512MB / Open Air
    Workstation 3: Intel Xeon X3350 @ 3.2Ghz / P35 / 4GB DDR2-800 / HD4770 512MB / Shuttle SP35P2
    HTPC: AMD Athlon X4 620 @ 2.6Ghz / 780G / 4GB DDR2-1000 / Antec Mini P180 White
    Mobile Workstation: Intel C2D T8300 @ 2.4Ghz / GM965 / 3GB DDR2-667 / DELL Inspiron 1525 / 6+6+9 Cell Battery

    Display (Monitor): DELL Ultrasharp 2709W + DELL Ultrasharp 2001FP
    Display (Projector): Epson TW-3500 1080p
    Speakers: Creative Megaworks THX550 5.1
    Headphones: Etymotic hf2 / Ultimate Ears Triple.fi 10 Pro

    Storage: 8x2TB Hitachi @ DELL PERC 6/i RAID6 / 13TB Non-RAID Across 12 HDDs
    Consoles: PS3 Slim 120GB / Xbox 360 Arcade 20GB / PS2

  3. #3
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
So, business practices aside, is it possible that in the future I could have a multi-core CPU or a multi-CPU computer, and just assign one or more of the cores/CPUs to handle all graphics while the rest do general-purpose work?

The benefit of this would be that if I'm using desktop applications, I could have most cores assigned to be multi-purpose, but if I want to play a game, I could assign several cores to be "GPUs" for the time being.

For instance, I would go into Task Manager and set the "affinity" of the game process(es) to the various cores... hopefully by that time, games will be split into different threads for graphics rendering, physics, netcode, engine, etc., so I'll be able to assign the graphics threads to the cores that I want to handle graphics. That would be awesome, wouldn't it?
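As an aside, the core-pinning half of this already exists: on Windows a process can be restricted to particular cores with the Win32 SetProcessAffinityMask call rather than through Task Manager. A minimal sketch - the choice of cores 0 and 1 is just illustrative:

```cpp
#include <windows.h>
#include <iostream>

int main() {
    // Restrict the current process to logical processors 0 and 1.
    // Bit i of the mask corresponds to logical processor i.
    DWORD_PTR mask = (1u << 0) | (1u << 1);
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::cerr << "SetProcessAffinityMask failed: " << GetLastError() << '\n';
        return 1;
    }
    std::cout << "Process pinned to cores 0 and 1\n";
    return 0;
}
```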
    Last edited by latrosicarius; 05-05-2006 at 02:58 AM.

  4. #4
Treasure Hunter extraordinaire herulach
    Join Date
    Apr 2005
    Location
    Bolton
    Posts
    5,618
    Thanks
    18
    Thanked
    172 times in 159 posts
    • herulach's system
      • Motherboard:
      • MSI Z97 MPower
      • CPU:
      • i7 4790K
      • Memory:
      • 8GB Vengeance LP
      • Storage:
      • 1TB WD Blue + 250GB 840 EVo
      • Graphics card(s):
      • 2* Palit GTX 970 Jetstream
      • PSU:
      • EVGA Supernova G2 850W
      • Case:
      • CM HAF Stacker 935, 2*360 Rad WC Loop w/EK blocks.
      • Operating System:
      • Windows 8.1
      • Monitor(s):
      • Crossover 290HD & LG L1980Q
      • Internet:
      • 120mb Virgin Media
Pretty much no. GPUs are designed to get a lot of vector operations done very, very fast indeed, and a lot of them done at once too; doing that on a general-purpose processor would be tremendously inefficient.

  5. #5
root Member DanceswithUnix
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)
The 3DNow! instructions were originally intended to do the matrix arithmetic needed for things like rotating a 3D model to place it in a scene. Again, a dedicated transform-and-lighting unit in the GPU has now taken over many of those uses, though I expect a program can still use 3DNow!/SSE to work out which surfaces are not worth sending to the GPU to transform and render because they cannot possibly be in view (e.g. surfaces that are pointing away from you).
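For illustration, that back-face test boils down to one dot product per surface; a minimal scalar sketch (the names, and the convention that viewDir points from the camera towards the surface, are just assumptions for the example - SSE/3DNow! would batch several of these at once):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Dot product of two 3-component vectors.
float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// A surface whose normal points away from the viewer can never be visible,
// so it can be skipped before being sent to the GPU to transform and render.
bool isBackFacing(const Vec3& faceNormal, const Vec3& viewDir) {
    return dot(faceNormal, viewDir) >= 0.0f;
}
```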

    The instructions you need to run in order to shade pixels are a very different mix to the instructions required to run a "normal" computer program. As processors are very much tuned to the workloads they handle, the two will always be bad at doing the work of the other.

  6. #6
root Member DanceswithUnix
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)
    Quote Originally Posted by arthurleung
But of course those GPU transistors run a lot slower than CPU transistors and are geared toward parallel processing (e.g. 24 pixel shaders)
Twenty transistors in a line will, overall, take twice as long to produce a result as ten transistors in a line. That gives you a pipeline stage running at half the clock speed, even though the individual transistors switch at exactly the same speed. The lower GPU clock speed just shows that the pipeline stages have a lot of transistors in them.
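A rough back-of-envelope illustration of that point (the gate delay and stage depths are made up; only the ratio matters):

```cpp
#include <iostream>

int main() {
    // The clock period of a pipeline stage is set by how many gate delays
    // sit between its latches, so a deeper stage forces a slower clock.
    const double gateDelayNs = 0.05; // assumed delay of one logic level

    const int shallowStage = 10;     // few gates per stage
    const int deepStage    = 20;     // twice as many gates per stage

    std::cout << "Shallow stage: " << 1.0 / (shallowStage * gateDelayNs) << " GHz\n"  // 2 GHz
              << "Deep stage:    " << 1.0 / (deepStage * gateDelayNs)    << " GHz\n"; // 1 GHz
    return 0;
}
```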

  7. #7
    Senior Member
    Join Date
    Jul 2004
    Location
    Probably Poole
    Posts
    386
    Thanks
    0
    Thanked
    5 times in 5 posts
    • Hottentot's system
      • Motherboard:
      • Asus P5Q Pro
      • CPU:
      • Q9550 at 3.8 GHz
      • Memory:
      • 8 GB
      • Storage:
      • SSD + HDD
      • Graphics card(s):
      • ATI 7950
      • PSU:
      • Corsair 650TX
      • Case:
      • CM HAF 932 (watercooled)
      • Operating System:
      • Windows 7 (x64)
      • Monitor(s):
      • NEC 2690WUXi
      • Internet:
      • Virgin 10Mb
The CPUs most people use at present are x86 compatible, i.e. designed to run x86 code, whether it's the 16-, 32- or 64-bit version.

GPUs are designed to run OpenGL or DirectX code. This means there are specific hardwired features in the GPU that can be called upon through the DirectX/OpenGL APIs. These features execute fast, and although you could get a CPU to simulate them in software, it would be much, much slower.
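To see why simulating those features in software is so slow, consider something as simple as filling a block of pixels: dedicated hardware touches many pixels per clock, while a CPU has to walk every pixel in a loop. A minimal sketch (the framebuffer layout and names are just for illustration):

```cpp
#include <cstdint>
#include <vector>

// Naive software fill of a screen-aligned rectangle into a 32-bit framebuffer.
// A GPU does this kind of fill with fixed-function hardware; a CPU must visit
// each pixel one at a time.
void fillRect(std::vector<uint32_t>& framebuffer, int screenWidth,
              int x0, int y0, int x1, int y1, uint32_t colour) {
    for (int y = y0; y < y1; ++y) {
        for (int x = x0; x < x1; ++x) {
            framebuffer[y * screenWidth + x] = colour;
        }
    }
}
```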

That said, if you had a lot of CPU cores, maybe they could run the graphics code as fast as a GPU. I suspect it would still be cheaper to use a single GPU.

  8. #8
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    I see, thanks for the info, guys. I was just curious as to whether there might be a better way to do things than currently, but I guess there's a reason for doing them this way.

    ty

  9. #9
    Senior Member
    Join Date
    May 2006
    Posts
    305
    Thanks
    0
    Thanked
    0 times in 0 posts
Anyone who was around when the 3dfx cards came out will appreciate what a piece of **** the CPU is for handling graphics.

3D cards used to be chipset based (no main processor), which was alright; then Nvidia brought out the GeForce, which had a proper GPU on it.

So basically the whole point was to get the graphics away from the CPU in the first place, and I can't see them going back.

  10. #10
    Member
    Join Date
    Aug 2005
    Location
    London
    Posts
    151
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by DanceswithUnix
    Twenty transistors in a line will, overall, take twice as long to generate a result as ten transistors in a line. That would give you a pipeline stage that runs at a 2x clock speed difference, even though the individual transistors work at exactly the same speed. The lower GPU clock speed just shows that the pipeline stages have a lot of transistors in them.
Um, longer pipelines allow faster clocks on the whole - Prescott has a 32-stage pipeline and hits 3.6GHz, while the Athlon FX has about 14 stages and hits 2.6GHz. The whole point of NetBurst as an architecture was to increase the number of pipeline stages to allow the whole thing to run faster.

    The reasons why GPUs run at lower clock speeds are primarily to do with the design parameters and processes used - it's too complicated to go into here!

In answer to the original question, CPUs have to be general-purpose processors by definition. If you were to optimise parts of a CPU to be particularly good at any one area, others would suffer. Certain areas of computing - graphics processing, sound processing and physics processing, to name the main ones - can be done much faster by building specialised hardware. If you try to use that hardware for anything other than its intended purpose, though, it'll suck.

  11. #11
    Senior Member
    Join Date
    Mar 2005
    Posts
    1,117
    Thanks
    8
    Thanked
    10 times in 9 posts
Wouldn't comparing the normal and CPU tests in 3DMark give an idea of just how well optimised the GPU is? I think it's pretty well established that, while dedicated processors can waste potential processing power, they are overall the best solution.

For example, when not playing games I'm sure it'd be possible to get the GPU to help with general calculations, but it'd be so bad at them you might as well leave it dormant doing its general 2D duties. The same goes for the audio processor and so on.

This thread has inspired me to read up on Cell processors, which I never got round to doing.

  12. #12
    Senior Member
    Join Date
    Jan 2013
    Location
    West Sussex
    Posts
    530
    Thanks
    50
    Thanked
    44 times in 33 posts
    • Chadders87's system
      • Motherboard:
      • Asus P8Z77-I Deluxe
      • CPU:
      • Intel i5 3570k
      • Memory:
      • Corsair Vengeance 8GB (2x4GB) 1600mhz
      • Storage:
      • Western Digital Caviar Black 1TB (Sata3)
      • Graphics card(s):
      • Sapphire AMD Radeon 7870 2GB
      • PSU:
      • BeQuiet 450w (140mm)
      • Case:
      • BitFenix Prodigy
      • Operating System:
      • Windows 7 Home Premium
      • Monitor(s):
      • Samsung 21.5'
      • Internet:
      • Sky Unlimited

    Re: What's the difference between a CPU & GPU?

    Quote Originally Posted by latrosicarius View Post
    So business practices aside, it's possible that in the future, I could have a multi-core CPU or a multi-CPU computer, and just assign one or more of the cores/CPUs to handle all graphics, while the rest do general purpose?
Was this a prophecy? We now have CPUs with additional cores designated purely to process graphics - Intel's HD 3000 / HD 4000 chips.

    Sorry to raise a dead thread, but it highlights how the future of some technologies can change direction when you don't expect it!

  13. #13
Not a good person scaryjim
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,231
    Thanked
    2,291 times in 1,874 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: What's the difference between a CPU & GPU?

    Quote Originally Posted by Chadders87 View Post
Was this a prophecy? We now have CPUs with additional cores designated purely to process graphics - Intel's HD 3000 / HD 4000 chips.
Well, not really - the HD 3000/HD 4000 are still GPUs, they're just stuck on the same piece of silicon. The architecture is very different from the CPU cores. In fact, the opposite thing has happened really - we've taken the GPU and started giving it more of the "normal" work that used to be run on a CPU. And let's not give Intel all the love - AMD have done the same thing, and with much more capable graphics components.

This thread was started 6 months before the 8800GTX - the first unified-shader, DX10 card - was launched. DX9 graphics cards weren't great at performing general computation, because the hardware was specifically designed for shading either pixels or vertices. When DX10 launched with unified shaders, those differences disappeared and you had a lot of pipelines, each of which could process floating-point calculations. That made it much easier to run normal "programs" on GPUs. AMD are now moving towards an architecture and programming paradigm where the GPU just becomes another set of computation units, accessing the same memory space as the CPU.
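As a minimal sketch of the kind of uniform floating-point work those unified pipelines handle well - the same small operation applied independently to every element - here is a plain CPU loop; a GPU would spread the iterations across hundreds of pipelines:

```cpp
#include <cstddef>
#include <vector>

// y = a*x + y, element by element: every iteration is independent,
// which is exactly the shape of work a unified-shader GPU parallelises.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size() && i < y.size(); ++i) {
        y[i] = a * x[i] + y[i];
    }
}
```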

    Here's the genuinely interesting comment in this thread (emphasis added):

    Quote Originally Posted by Scarlet Infidel View Post
... when not playing games I'm sure it'd be possible to get the GPU to help with general calculations, but it'd be so bad at them you might as well leave it dormant doing its general 2D duties. ...
    That's less than 7 years ago, and GPUs were considered so bad at general calculation that it wouldn't be worth using them. Now we expect a lot of software to be accelerated by GPUs. Makes you wonder what the next big paradigm shift in computing will be...
