
Thread: shader model 4.0

  1. #17
kalniel
    Banhammer in peace PeterB
    Join Date
    Aug 2005
    Posts
    30,757
    Thanks
    1,789
    Thanked
    3,289 times in 2,647 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish
    Quote Originally Posted by Andrzej
    But when the testing is done - SM2 mode is forced - instead of letting each card run the shader model that it was designed for

    Test everything with its 'natural' shader model + give marks for having features
    I think I agree with this. While SM1 vs SM3 might show a faster result for SM1, the looks should be convincing enough for SM3's argument. I think it's a result of some games going down the SM1 or SM3 route, with no SM2 in there for the ATI cards (this generation). Maybe SM2 is slower for the same things as SM3, as some developers have mentioned, but its inclusion would let the cards be tested on merit. And if ATI's implementation of SM2 is faster than the same effects using SM3, then let it be seen.
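    The "natural shader model" idea above can be sketched as a simple capability check — a minimal illustration with invented version numbers and function names, not any real driver or benchmark API:

    ```python
    # Hypothetical sketch: pick the highest shader-model path a card supports,
    # rather than forcing every card down the same (e.g. SM2) path.
    SUPPORTED_PATHS = [3.0, 2.0, 1.1]  # render paths the benchmark implements

    def natural_path(card_max_sm):
        """Return the best render path this card can run natively."""
        for path in SUPPORTED_PATHS:       # checked highest first
            if card_max_sm >= path:
                return path
        raise ValueError("card predates programmable shaders")

    # An SM3 card and an SM2 card each get tested on the model
    # they were designed for, and marks can then be given for features:
    assert natural_path(3.0) == 3.0
    assert natural_path(2.0) == 2.0
    ```

    The point of the sketch is only that the path choice belongs to the card's capabilities, not to the benchmark's lowest common denominator.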

  2. #18
    Sublime HEXUS.net
    Join Date
    Jul 2003
    Location
    The Void.. Floating
    Posts
    11,819
    Thanks
    213
    Thanked
    233 times in 160 posts
    • Stoo's system
      • Motherboard:
      • Mac Pro
      • CPU:
      • 2*Xeon 5450 @ 2.8GHz, 12MB Cache
      • Memory:
      • 32GB 1600MHz FBDIMM
      • Storage:
      • ~ 2.5TB + 4TB external array
      • Graphics card(s):
      • ATI Radeon HD 4870
      • Case:
      • Mac Pro
      • Operating System:
      • OS X 10.7
      • Monitor(s):
      • 24" Samsung 244T Black
      • Internet:
      • Zen Max Pro
    tbh, I think it's more likely to be a sort of SM3.5, i.e. a few add-on bits from the xb360 development?
    (\__/)
    (='.'=)
    (")_(")

  3. #19
    OMG!! PWND!!
    Join Date
    Dec 2003
    Location
    In front of computer
    Posts
    964
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by kalniel
    I think I agree with this. While SM1 vs SM3 might show a faster result for SM1, the looks should be convincing enough for SM3's argument. I think it's a result of some games going down the SM1 or SM3 route, with no SM2 in there for the ATI cards (this generation). Maybe SM2 is slower for the same things as SM3, as some developers have mentioned, but its inclusion would let the cards be tested on merit. And if ATI's implementation of SM2 is faster than the same effects using SM3, then let it be seen.
    Practically every game uses SM2... HL2, Doom 3 (??)... everything...

  4. #20
    Sublime HEXUS.net
    Join Date
    Jul 2003
    Location
    The Void.. Floating
    Posts
    11,819
    Thanks
    213
    Thanked
    233 times in 160 posts
    Yup, which was ATi's reasoning for not putting SM3 support in the x800 series: it saves silicon space by not implementing a feature that (still) virtually nothing supports.
    (\__/)
    (='.'=)
    (")_(")

  5. #21
    Senior Member
    Join Date
    Aug 2005
    Location
    in a box
    Posts
    756
    Thanks
    14
    Thanked
    3 times in 3 posts
    By buying a card that supports SM3.0, are you getting any level of future-proofing? i.e. are more games going to use it in the future, or is it just something that's going to be ignored and surpassed and thus become irrelevant? Should same-generation cards that support SM3 be given any more brownie points over cards that do not, or is it not worth considering?

  6. #22
    Rys
    Tiled
    Join Date
    Jul 2003
    Location
    Abbots Langley
    Posts
    1,479
    Thanks
    0
    Thanked
    2 times in 1 post
    ATI are just about to release their own Shader Model 3.0 parts, so you'll be buying it by default if you buy a graphics card in the next year. If you're buying one now, you'd be mad not to at least consider an NVIDIA offering with that capability.

    Shader Model 4.0 exists and has done since the inception of the latest D3D/WGF specification. A few minutes on Google would have told you a bit about the unification of vertex and fragment shader capabilities.

    The graphics card market over the last few years should have taught you that a critical mass of parts needs to be installed in PCs before developers shift their efforts to supporting a feature set. That's there, or very close. Expect the number of SM3.0-supporting games to carry on increasing (and there are plenty already, with Google again showing you list upon list of them).

    As for this, which you've pushed a few times recently:

    Quote Originally Posted by Andrzej
    The concept of SM3 is that it allows 'really cool stuff' like texture fetches in the vertex engine and branching in the pixel shader...

    ...wouldn't it be funny if developers were steered away from using those features because existing implementations were just too slow
    I'd stop throwing it around, for fear of it embarrassing you in the future. The Chronicles of Riddick doesn't support SM3.0 either, being an OpenGL game. Try Shader Model 2.0-class with soft shadowing via fragment programs instead...
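    For what it's worth, the "branching might be slow" worry quoted above has a simple mechanical reading. Here's a toy cost model (invented numbers, not the behaviour of any specific GPU): pixels are shaded in groups, so a dynamic branch only saves work when every pixel in a group takes the cheap side; a divergent group pays for both sides.

    ```python
    # Illustrative sketch of per-group branch granularity in a pixel shader.
    # 'cheap' and 'expensive' are made-up costs for the two branch sides.
    def group_cost(pixels_take_cheap_side, cheap=4, expensive=20):
        """Cost of shading one pixel group under group-level branching."""
        if all(pixels_take_cheap_side):
            return cheap                   # whole group skips the heavy work
        if not any(pixels_take_cheap_side):
            return expensive               # whole group runs the heavy side
        return cheap + expensive           # divergent: both sides execute

    # A divergent group costs more than just running the expensive path,
    # which is one reason early branching hardware could disappoint:
    assert group_cost([True, False, True, True]) > group_cost([False] * 4)
    ```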

    And what's XDA? You mean XNA, which is just a toolkit/middleware? WGF and DX10 is next in the world of 3D APIs on Microsoft platforms.

    ATI are working with MS on this one, but you can be sure nvidia will be playing catch-up real quick if it proves a good move.
    Just because the API presents a unified caps and instruction set for both vertex and fragment units, doesn't mean the hardware has to have unified shader silicon. You can abstract it any way you like.
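    That abstraction point can be sketched in a few lines — class and unit names here are invented for illustration, not any real driver interface: the API exposes one shader front-end, and the "driver" is free to map it onto separate vertex and pixel silicon.

    ```python
    # Invented illustration: a unified API-level shader interface over
    # hardware that may still have distinct per-stage units underneath.
    class UnifiedShaderAPI:
        def __init__(self, vertex_unit, pixel_unit):
            # the hardware mapping is hidden behind one interface
            self.units = {"vertex": vertex_unit, "pixel": pixel_unit}

        def run(self, stage, data):
            # same caps/instruction set either way, as far as callers know
            return self.units[stage](data)

    api = UnifiedShaderAPI(vertex_unit=lambda d: d + "->transformed",
                           pixel_unit=lambda d: d + "->shaded")
    assert api.run("vertex", "mesh") == "mesh->transformed"
    assert api.run("pixel", "fragment") == "fragment->shaded"
    ```

    Swapping in a single shared back-end for both stages would change nothing for callers, which is the "abstract it any way you like" point.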

    Silly thread with silly things being said!
    MOLLY AND POPPY!
