
Thread: PhysX on graphics card

  1. #17
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts
    • aidanjt's system
      • Motherboard:
      • Asus Strix Z370-G
      • CPU:
      • Intel i7-8700K
      • Memory:
      • 2x8GB Corsair LPX 3000C15
      • Storage:
      • 500GB Samsung 960 EVO
      • Graphics card(s):
      • EVGA GTX 970 SC ACX 2.0
      • PSU:
      • EVGA G3 750W
      • Case:
      • Fractal Design Define C Mini
      • Operating System:
      • Windows 10 Pro
      • Monitor(s):
      • Asus MG279Q
      • Internet:
      • 240mbps Virgin Cable

    Re: PhysX on graphics card

    Quote Originally Posted by shaithis View Post
    Then why is my CPU at 30%-ish while gaming? Overclocking my CPU further has no effect on framerate, but pushing my gfx card further shows immediate benefits.
    Because (a) the game engines you run don't understand SMP, and (b) your resolution is too high with every omgzwtflolz setting on top.
    Quote Originally Posted by Agent View Post
    ...every time Creative bring out a new card range their advertising makes it sound like they have discovered a way to insert a thousand Chuck Norris super dwarfs in your ears...

  2. #18
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    10,857
    Thanks
    645
    Thanked
    872 times in 736 posts
    • shaithis's system
      • Motherboard:
      • Asus P8Z77 WS
      • CPU:
      • i7 3770k @ 4.5GHz
      • Memory:
      • 32GB HyperX 1866
      • Storage:
      • Lots!
      • Graphics card(s):
      • Sapphire Fury X
      • PSU:
      • Corsair HX850
      • Case:
      • Corsair 600T (White)
      • Operating System:
      • Windows 10 x64
      • Monitor(s):
      • 2 x Dell 3007
      • Internet:
      • Zen 80Mb Fibre

    Re: PhysX on graphics card

    Quote Originally Posted by aidanjt View Post
    Because (a) the game engines you run don't understand SMP, and (b) your resolution is too high with every omgzwtflolz setting on top.
    I have spare CPU cycles (in spades!) and no free GPU cycles.

    You can only get so much out of SMP in games (3 of the 4 games I run are SMP aware). The main grunt they require is gfx processing.

    Pushing the resolution is a natural progression: everyone wants higher resolutions and more "omgzwtflolz settings", so where are you going to find spare cycles on the GPU when the gfx card is going to be pushed harder and harder with every new game?

    The cycles will be found on the CPU, which is outstripping the rate at which developers can use it.
    Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
    HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
    HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
    Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
    NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
    Laptop: Dell Precision 5510 Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive

  3. #19
    Senior Member Hicks12's Avatar
    Join Date
    Jan 2008
    Location
    Plymouth-SouthWest
    Posts
    6,586
    Thanks
    1,070
    Thanked
    340 times in 293 posts
    • Hicks12's system
      • Motherboard:
      • Asus P8Z68-V
      • CPU:
      • Intel i5 2500k@4ghz, cooled by EK Supreme HF
      • Memory:
      • 8GB Kingston hyperX ddr3 PC3-12800 1600mhz
      • Storage:
      • 64GB M4/128GB M4 / WD 640GB AAKS / 1TB Samsung F3
      • Graphics card(s):
      • Palit GTX460 @ 900Mhz Core
      • PSU:
      • 675W Thermaltake Toughpower XT
      • Case:
      • Lian Li PC-A70 with modded top for 360mm rad
      • Operating System:
      • Windows 7 Professional 64bit
      • Monitor(s):
      • Dell U2311H IPS
      • Internet:
      • 10mb/s cable from virgin media

    Re: PhysX on graphics card

    As far as I remember, the 8800 series already has PhysX capabilities; nVidia simply needs to activate them (the 8800 GTS/GT etc., not the older cards).
    Quote Originally Posted by snootyjim View Post
    Trust me, go into any local club and shout "I've got dual Nehalem Xeons" and all of the girls will practically collapse on the spot at the thought of your e-penis

  4. #20
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: PhysX on graphics card

    Quote Originally Posted by shaithis View Post
    I have spare CPU cycles (in spades!) and no free GPU cycles.

    You can only get so much out of SMP in games (3 of the 4 games I run are SMP aware). The main grunt they require is gfx processing.

    Pushing the resolution is a natural progression: everyone wants higher resolutions and more "omgzwtflolz settings", so where are you going to find spare cycles on the GPU when the gfx card is going to be pushed harder and harder with every new game?

    The cycles will be found on the CPU, which is outstripping the rate at which developers can use it.
    Not one thing you said there had an ounce of correctness to it.

  5. #21
    Lovely chap dangel's Avatar
    Join Date
    Aug 2005
    Location
    Cambridge, UK
    Posts
    8,398
    Thanks
    412
    Thanked
    459 times in 334 posts
    • dangel's system
      • Motherboard:
      • See My Sig
      • CPU:
      • See My Sig
      • Memory:
      • See My Sig
      • Storage:
      • See My Sig
      • Graphics card(s):
      • See My Sig
      • PSU:
      • See My Sig
      • Case:
      • See My Sig
      • Operating System:
      • Windows 10
      • Monitor(s):
      • See My Sig
      • Internet:
      • 60mbit Sky LLU

    Re: PhysX on graphics card

    Quote Originally Posted by aidanjt View Post
    Sorry, I just plain can't disagree with that more. The calculations that figure out how to perform accurate raytracing, collision detection, tessellation, and shadow texturing are all physics; so too are the fluid dynamics that make water appear more realistic. Complex physics is an intrinsic part of 3D gaming, always has been, and it will only evolve further in the coming years.
    I didn't disagree with any of that - what I did say is that rendering all those wonderful effects means more GPU load, which means less time for physics. What I also said was that the resource that's more obviously spare in 2008 for doing it is the CPU. It would be far better to have a physics engine that can use SMP or ANY GPU than a closed system for one GPU vendor.

    Quote Originally Posted by aidanjt View Post
    You think physics is a new fad in gaming, it really isn't, it's always been
    No, I don't. Pretty sure I didn't say that. How many years has Havok been around, for example? Loads.

    Quote Originally Posted by aidanjt View Post
    there, lurking behind the scenes; now it's just jumping out and yelling 'rowr' because of the more obvious gravitational and 'bounce' effects in modern games. GPUs can handle it, it's the proper place for it, and there's no need for a PPU co-processor (as we saw with Ageia, that was a flop) or to waste CPU cycles that could be better spent on things like AI and complex audio algorithms.
    You're replacing the PPU with an extra GPU in 2008 - and therein lies the problem. Especially when you've got other processor(s) lying around that could do the job (as they're doing bugger all else).
    I think we're talking at cross purposes here - I agree with you almost all the way - but I'm not likely to be impressed by clever physics if my GPU is rendering it at 5fps versus simple physics at 25fps. The GPU simply isn't an underused resource in gaming right now; the CPU is. Maybe the GTX280 will be so powerful as to have all my eye candy on at 1600x1200 AND have tons of units left over for physics too (and be designed in silicon to do the latter), but I kinda doubt it. I rather hope I'm wrong - I'd love Crysis to be patched for nPhysics and run at 100fps. Then again, if the AMD card is faster for graphics I'm screwed, right?
    Crosshair VIII Hero (WIFI), 3900x, 32GB DDR4, Many SSDs, EVGA FTW3 3090, Ethoo 719


  6. #22
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    10,857
    Thanks
    645
    Thanked
    872 times in 736 posts

    Re: PhysX on graphics card

    Quote Originally Posted by aidanjt View Post
    Not one thing you said there had an ounce of correctness to it.
    lol, at least two of the things I mentioned are uncontestable!

    Or are you going to tell me my resource monitor is wrong and my CPU is constantly maxed out? And that my graphics card is not being pushed to its limits, even though the only way to increase performance is to overclock it? Or perhaps you know of millions of people who want stagnation in graphics, rather than advancement and better, more realistic effects?


    If you want to be taken seriously, at least explain yourself rather than sounding like a know-it-all-but-can't-be-bothered-to-explain-it forum troll.
    Last edited by shaithis; 22-05-2008 at 05:54 PM.

  7. #23
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: PhysX on graphics card

    Quote Originally Posted by dangel View Post
    No, I don't. Pretty sure I didn't say that. How many years has Havok been around, for example? Loads.
    Sorry, by the tone you seemed to be of that impression, but of course you didn't explicitly say that.

    Quote Originally Posted by dangel View Post
    You're replacing the PPU with an extra GPU in 2008 - and therein lies the problem. Especially when you've got other processor(s) lying around that could do the job (as they're doing bugger all else).
    I agree that at the moment CPU cycles are underutilised, but those ticks will rapidly be filled with significantly better AI. SP may actually become fun again if games developers stop thinking MP is the be-all and end-all of gaming.

    Quote Originally Posted by dangel View Post
    I think we're talking at cross purposes here - I agree with you almost all the way - but I'm not likely to be impressed by clever physics if my GPU is rendering it at 5fps versus simple physics at 25fps. The GPU simply isn't an underused resource in gaming right now; the CPU is. Maybe the GTX280 will be so powerful as to have all my eye candy on at 1600x1200 AND have tons of units left over for physics too (and be designed in silicon to do the latter), but I kinda doubt it. I rather hope I'm wrong - I'd love Crysis to be patched for nPhysics and run at 100fps. Then again, if the AMD card is faster for graphics I'm screwed, right?
    I completely agree that anything under 25-30fps is a bit nasty, but people buying quad CrossFire and whatnot to get 100fps is ridiculous; it's wasted GPU cycles like those which could be better used processing physics-related things. The main problem is that as soon as any GPU performance advancement comes along, it's immediately wasted on bigger, more bloated textures and hardly noticeable visual effects. I mean seriously, the massively parallel architecture of modern GPUs puts Sony/IBM's Cell architecture to shame, yet games developers can't run multiple discrete jobs in parallel, or even dynamically alter the graphical workload to maintain a certain framerate?
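    As a rough sketch of the sort of dynamic workload adjustment being described (illustrative only - renderScale, the thresholds and the function name are made up, not taken from any real engine), an engine could trade render resolution against frame time each frame rather than letting the framerate collapse:

    Code:
    // Target 60fps; nudge the render resolution up or down to hold it.
    const double targetMs = 1000.0 / 60.0;
    double renderScale = 1.0;                    // fraction of native resolution

    void endOfFrame(double lastFrameMs)
    {
        if (lastFrameMs > targetMs * 1.1 && renderScale > 0.5)
            renderScale -= 0.05;                 // GPU struggling: draw fewer pixels
        else if (lastFrameMs < targetMs * 0.9 && renderScale < 1.0)
            renderScale += 0.05;                 // headroom: claw some quality back
        // next frame the renderer would use renderScale * native width/height
    }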

  8. #24
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: PhysX on graphics card

    Quote Originally Posted by shaithis View Post
    If you want to be taken seriously, at least explain yourself rather than sounding like a know-it-all-but-can't-be-bothered-to-explain-it forum troll.
    Me not explaining, and you not understanding, are two entirely separate things.

  9. #25
    Senior Member
    Join Date
    Jul 2007
    Posts
    524
    Thanks
    79
    Thanked
    67 times in 47 posts

    Re: PhysX on graphics card

    One of the big ironies of the computer market is that we have reached a point where we are limited mainly by the average installed hardware, rather than by the technical ability to produce fancy effects. When discussing this, we must remember that the average gamer is not a money-laden enthusiast, but rather someone using a system that is likely two years old. The average hardware lags behind the enthusiast's hardware by a fair margin.

    With respect to graphics, this means that the effort needed to really use all of a 9800X2's power is on the whole a wasted investment. Few people have such a fancy level of graphics card, and typically the key sales period for a game is over the first year, within which it is unlikely a significant enough increase in the average kit will occur. As such, their core audience will never see the additional work put in to maximise graphical fidelity on the top-end kit. There are some good reasons for such an investment, but for the majority of games this route will be avoided. The players themselves do not help with this, either, since many desire to play their games on "ultra high" settings, and complain at the game if this is not possible, rather than more rightfully blaming their system. We all want to feel like we have a powerful and competent system, even if it isn't.

    The field of 3D graphics rendering is well understood now, and it is very much the hardware that holds games back. One need only look at games like Crysis, modded Oblivion, or FSX to see how easy it is for game developers to make an engine that will simply eat up all the graphics processing power you offer it.

    The calculations needed for 3D scene rendering, no matter what technique you choose, are heavily based upon floating point vector mathematics. The great thing about this field of calculation is how parallel the majority of the maths is, and how it only needs a few basic operations to perform all the fancy work. This is a key reason why GPU throughput has risen far more dramatically than CPU throughput: CPU designs must be able to deal with any series of inputs, whereas GPUs know, in general, that they will be fed little other than a lot of relatively simple floating point calculations (and a few bits of housekeeping, of course).
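    To make that concrete, here is a rough, illustrative C++ sketch (not taken from any real engine) of the sort of work a renderer does constantly: transforming every vertex by the same 4x4 matrix. Each vertex is independent of the rest, which is exactly why this kind of maths spreads so well across a GPU's many simple floating point units.

    Code:
    struct Vec4 { float x, y, z, w; };

    // Multiply one vertex by a column-major 4x4 matrix: sixteen multiplies and
    // twelve adds, all plain floating point, no branching.
    Vec4 transform(const float m[16], const Vec4& v)
    {
        Vec4 r;
        r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
        r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
        r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
        r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
        return r;
    }

    // No vertex depends on its neighbour, so thousands of these iterations could
    // in principle run side by side - the shape of work GPUs are built for.
    void transformAll(const float m[16], Vec4* verts, int count)
    {
        for (int i = 0; i < count; ++i)
            verts[i] = transform(m, verts[i]);
    }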

    Because CPUs may be fed all kinds of instructions to run any number of different programs, they have had to stay as generic as possible, focusing on the demands of the most commonly used types of programs first and foremost. These are, as a rule, not computer games, but the regular operating system, internet browsers, and office-style software. None of these have typically needed much floating point processing power, and they generally see performance improvements from optimising the processing of regular integer operations. This is why Core 2 can execute three integer operations per clock, but at most two floating point operations. Furthermore, after a scan (not exhaustive, so I could be wrong), it seems that although the various SIMD additions to x86 are impressive, none of them offer in a single instruction some of the essential vector mathematics operations that graphics cards excel at. Your CPU can only act like a single one of the stream processors within the graphics card, so even if it could properly communicate, access the needed memory, and share effort with the graphics card (which I shall not even get into here, but needless to say, that is a major issue with all of this), it would only offer a small percentage of extra performance on top of what the GPU already provides.
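    For comparison, this is roughly what the CPU side looks like with plain SSE intrinsics (a sketch only; count is assumed to be a multiple of four): one instruction handles four floats, which is in effect a single lane of the hundreds a modern GPU runs at once.

    Code:
    #include <xmmintrin.h>   // SSE1 intrinsics

    // y[i] = a * x[i] + y[i], processed four floats per instruction.
    void saxpy_sse(float a, const float* x, float* y, int count)
    {
        __m128 va = _mm_set1_ps(a);              // broadcast a into all 4 lanes
        for (int i = 0; i < count; i += 4)
        {
            __m128 vx = _mm_loadu_ps(x + i);
            __m128 vy = _mm_loadu_ps(y + i);
            vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
            _mm_storeu_ps(y + i, vy);
        }
    }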

    If you are finding that your GPU is running at full tilt and your CPU is not, the only practical solution (without turning down the settings) is to get a better graphics card. Your CPU has no need to run any faster - at its current speed, the GPU is only just able to process everything being asked of it. If you consider how much more powerful the GPU is at its job than the CPU, the poor CPU has in fact done its absolute best, and if the GPU cannot cope, the CPU simply could not manage either.

    However, in other forms of processing, the CPU still has a lot of life left in it. Contrary to the beliefs of some, computer games are often highly parallel. It is not the games that cause problems with using all possible cores and SMT on a processor, but once again the installed user base - and, in fact, the capabilities of even the best processors on the market are currently too low. I talked about this in more depth in another thread, and I would recommend you read that post for more on the challenges with SMT.

    As with the SMT issues discussed in that thread, a problem exists for advanced physics within games. This is not because it is too hard, or because there are no applications for such detailed and complex things, but rather because it is simply pointless putting them in at the moment. If you look at some of the games slated for release in this coming year, and at the PhysX demo levels for UT3, it is quite apparent that there are real applications for advanced physics within games that can offer new and interesting gameplay. However, just as with graphics cards and processor cores, the current installed market sucks. Physics calculations, it must be remembered, are very similar indeed to those that a graphics card exists to perform. As such, although they can and will often make use of available CPU cores (but see my comments on the current state of SMT development), they would be much more effectively run on hardware similar to a graphics card.
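    As a rough illustration of that similarity (a toy example, not how PhysX actually implements anything), a basic physics update is the same pattern as the vertex transform above: a small bundle of floating point multiply-adds applied independently to every particle.

    Code:
    struct Particle { float px, py, pz, vx, vy, vz; };

    // One semi-implicit Euler step under gravity; every particle is independent,
    // so the loop is just as parallel as rendering work.
    void step(Particle* p, int count, float dt)
    {
        const float g = -9.81f;                  // gravity along y
        for (int i = 0; i < count; ++i)
        {
            p[i].vy += g * dt;
            p[i].px += p[i].vx * dt;
            p[i].py += p[i].vy * dt;
            p[i].pz += p[i].vz * dt;
        }
    }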

    In terms of the current state of the market, physics systems really are stuck in a very hard place. As discussed, only the enthusiast market has any spare graphics card power, and it forms a small minority of the total gamer market. Without that spare power, games simply will not ship physics systems as complex as is theoretically possible. And without games featuring complex physics systems, gamers will not see any reason to spend more on graphics cards that can truly do amazing physics simulations. But unless the player base as a whole has such spare capacity, developers will never risk using it.

    There is, however, a solution to several of these issues, though it will take a couple of years for its effects to change the face of the majority of computer games. As some game developers are happy to pander to the enthusiast gamer market, GPU manufacturers know that there is at least some small demand for better physics processing, and that the enthusiast market really isn't too fussy about cost or, frankly, about being sensible (£500 graphics cards, anyone?).

    The GPU manufacturers, wanting to take advantage of that enthusiast market and to prepare for the market in a couple of years' time (remember, old high-end cards or designs often become the new mid-range offerings), are always looking to increase the performance of their hardware. As already stated, they have had great success so far by embracing superparallelism. However, there comes a point where, due to distances, support features, and testing needs, it becomes increasingly hard to add more parallel units within a design. We have already seen this with CPUs, which moved to dual core and beyond by ceasing to develop ever more complex single cores and instead placing more, simpler cores together (instead of having to fully integrate more powerful sub-parts into an existing design, they only need a little glue to join two complete components).

    The GPU market has attempted this move to some degree over the years with SLI, CrossFire and then X2 cards, but there is a penalty for going off one chip and back onto another. Despite the rumours that nVidia's next offering will be a "dual core" GPU, such a design will still suffer from problems similar to those found with SLI setups - the current rendering systems work amazingly well within a single discrete graphics core, but things get a little strange if you try to add more cores processing the same data. So it is unlikely to be easy or immediately productive to add additional graphics cores to a GPU; however, there is another, very similar component they could drop in instead. As the systems needed for physics are so very similar to those for graphics processing, existing graphics core designs could be modified and optimised for physics processing. As this extra physics core would have its own means of access, separate from the rendering system (though it may be able to affect rendering data, depending upon ambition), it would not suffer from the hiccups that SLI-type additions do.

    However, and you'd be right to point this out, a physics core would be basically useless for most existing games. This is where nVidia's purchase of PhysX comes in handy for them, as they would have a number of big games already able to take advantage of the power. nVidia has also shown itself willing in the past to work closely with developers to help them use nVidia features, and is happy pandering just to the high-end enthusiast market with some of its product lines. With the passage of time, they will be hoping that the physics core addition can filter down the product lines into cheaper and cheaper cards, and that games will have shipped that make use of the enhanced physics on the old high-end cards (creating demand for the feature to be retained in cheaper models).

    The big downside to all of this, as has already been pointed out, is for the consumer. Sadly, hardware manufacturers are prone to designing in vendor lock-in - and who could blame them? Every company wants to maximise its trade, and open standards are very scary indeed and a major risk. But without an open standard for all hardware manufacturers (not just nVidia and AMD) to implement, games developers will not be assured that the technology they want to use will be present. Whilst this will probably never worry some enthusiast-friendly developers at all, it will make encouraging the use of advanced physics (and hence the need for physics cores on GPUs) very difficult, since at least half the market will not be able to accelerate the processing at all.

    It's interesting to consider the precursor to all of this: the 3D graphics libraries. After a long period of discord and vendor lock-in, SGI released OpenGL, and later 3D gaming on home computers really took off not because of vendor libraries, but because of a vendor-independent push with DirectX. It is for this reason that I hope Microsoft adds advanced physics features straight into the next version of DirectX, standardising the physics interface once and for all, and giving developers an assurance that a minimal feature set will always be available.

  10. Received thanks from:

    aidanjt (23-05-2008)
