
Thread: Nvidia publishes DX12 Do's and Don'ts checklist for developers

  1. #17
    Senior Member
    Join Date
    Mar 2010
    Posts
    2,567
    Thanks
    39
    Thanked
    179 times in 134 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by Corky34 View Post
    Didn't take long for the AMD fanboys to surface, I see.
    Shame one doesn't even know async compute isn't a requirement of DX12.
    Check carefully if the use of separate compute command queues really is advantageous

    Even for compute tasks that can in theory run in parallel with graphics tasks, the actual scheduling details of the parallel work on the GPU may not generate the results you hope for
    Be conscious of which asynchronous compute and graphics workloads can be scheduled together
    So not only is Nvidia telling devs to use NvAPI (*cough* not DX12), they're also telling them not to use async compute, another non-DX12 thing...

    I'll get that hat for you to eat, then.

  2. #18
    Senior Member
    Join Date
    Mar 2010
    Posts
    2,567
    Thanks
    39
    Thanked
    179 times in 134 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    oh this is golden:


    Resources

    Don'ts

    •Don’t rely on being able to allocate all GPU memory in one go
    ◦Depending on the underlying GPU architecture the memory may or may not be segmented

    and

    Pipeline State Objects (PSOs)

    Don’ts
    •Don’t use D3D12_SHADER_VISIBILITY_ALL if not necessary
    ◦There is overhead in the driver and on the GPU for each shader stage that needs to see CBVs, SRVs, UAVs etc.


    Resource binding tier 3 is good for the above but, shock, not tier 2 or 1...
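    For anyone wondering what the first one means in practice, here's a rough sketch of grabbing VRAM in chunks with a fallback instead of one big heap. This is my own illustration, not from Nvidia's doc, and AllocateInChunks is a made-up name:

    // Rough sketch, not from Nvidia's doc: allocate device memory in chunks,
    // halving the chunk size on E_OUTOFMEMORY, since on some architectures
    // memory is segmented and one giant allocation can fail even when enough
    // total VRAM is free.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    std::vector<ComPtr<ID3D12Heap>> AllocateInChunks(ID3D12Device* device,
                                                     UINT64 totalBytes,
                                                     UINT64 chunkBytes)
    {
        std::vector<ComPtr<ID3D12Heap>> heaps;
        UINT64 remaining = totalBytes;
        while (remaining > 0 && chunkBytes >= 4ull * 1024 * 1024) // give up below 4MB chunks
        {
            D3D12_HEAP_DESC desc = {};
            desc.SizeInBytes = (remaining < chunkBytes) ? remaining : chunkBytes;
            desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;   // device-local memory
            desc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;  // keep tier 1 hardware happy
            // (real code would round SizeInBytes up to 64KB alignment)

            ComPtr<ID3D12Heap> heap;
            HRESULT hr = device->CreateHeap(&desc, IID_PPV_ARGS(&heap));
            if (SUCCEEDED(hr)) {
                remaining -= desc.SizeInBytes;
                heaps.push_back(heap);
            } else if (hr == E_OUTOFMEMORY) {
                chunkBytes /= 2;  // segmented memory: retry with smaller pieces
            } else {
                break;            // some other failure, bail out
            }
        }
        return heaps;
    }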
    Last edited by HalloweenJack; 28-09-2015 at 10:07 PM.

  3. Received thanks from:

    CAT-THE-FIFTH (29-09-2015),jigger (08-10-2015),Jimbo75 (28-09-2015),kalniel (28-09-2015)

  4. #19
    Senior Member
    Join Date
    Jan 2009
    Posts
    342
    Thanks
    0
    Thanked
    27 times in 23 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    This falls under "no ****, Sherlock". The entire POINT of DX12 is that the burden of architecture-specific optimisation is moved from the driver developers to game engine developers. And that doesn't make any sense whatsoever if GPU developers don't tell engine developers what to optimise for.
    Anyone who DOESN'T want to modify their engine to suit the differing strengths of different architectures should stay on the DX11 codepath. That's why DX11 remains in active development alongside DX12.

  5. #20
    Senior Member
    Join Date
    May 2015
    Posts
    359
    Thanks
    0
    Thanked
    7 times in 7 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by Corky34 View Post
    Quote Originally Posted by MarcelTimmer View Post
    It looks like Nvidia is not ready for DX12. I hear no complaining from AMD about it.
    Quote Originally Posted by Primey0 View Post
    Nvidia be all like

    Don't:

    * Use async compute. Pretty please.
    Quote Originally Posted by HalloweenJack View Post
    so yes, that's 'DO use what we can do well, and DON'T use the other stuff our competition can use well'
    Didn't take long for the AMD fanboys to surface, I see.
    Shame one doesn't even know async compute isn't a requirement of DX12.
    And async is built in but currently disabled on Maxwell, possibly just because AMD didn't blow by them with Fury X etc. They'll likely turn it on when needed, if AMD ever musters drivers to beat them across the board. No point in giving more speed than is needed to beat the competition when profits haven't hit 2007 levels for nearly a decade (at NV, never mind AMD just losing money quarter after quarter). They are both in business to make money, so get as much as you can while you can and milk those cows; R&D isn't free. One more point: programming for async would make games run well on the cards that support it (remember NV has 82% of discrete right now, and only Maxwell has it), so many would be left out. Perhaps NV is just waiting for a greater % of users to have it before telling devs to go all in. That IMHO would seem like a favor to users not on the bleeding edge (the vast majority of users from both sides). In the end though, I hope DX12 dies and Vulkan takes over.

  6. #21
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by Jimbo75 View Post
    I'm surprised you even want to talk about async compute after your last laughable rant about it all being down to Oxide and how Fable would be a true DX12 benchmark. How did that one work out?
    Sorry, when did I say "Fable would be a true DX12 benchmark"? Care to provide a link? I didn't even know Fable was going to be DX12, so I'm not sure how that happened. And even if I did, perhaps you can provide some benchmarks to demonstrate the point you're trying to make; that's if you even know the point you're trying to make yourself.

    Quote Originally Posted by watercooled View Post
    WRT 'async ain't DX12', Nvidia are suggesting that devs 'make use of NvAPI...'. That's really not part of DX12, and not platform-neutral either. As I've said before, not being a requirement of DX12 is pretty much just an academic argument if it's available to use and actively utilised by devs in parallel with DX12 features.

    Having done a quick ctrl+f for async, this is about all I found:

    Well yeah, assuming the Maxwell async deficiency is a thing, I'd say that's a round-about way of telling devs to be careful about it? Those 'actual scheduling details' being the crux of the matter.
    Yeah, something not being a requirement of DX12 is pretty much just an academic argument, and you'll have to forgive me for bringing it up; it's just that I'm fed up with people incorrectly claiming async compute is part of DX12 when it's not. I guess that's mostly down to AMD's marketing though.

    I'm not sure the Maxwell async deficiency is a thing, unless you're talking about Maxwell 1, as Maxwell 2 is perfectly capable of doing async compute.

    AFAIK the reason Nvidia tell devs to be careful about it is more to do with how, in something like VR, a dev could stall the pipeline if they submitted a long draw call, or if they submitted a job to either the graphics or compute queue that had to wait for a specific job to finish. Creating a stall in the pipeline would be really bad if you need to process a high-priority task such as applying a post-process render. If, as a dev, you submit a single draw call that takes 9ms, you run the risk of not being able to interrupt that process; much better to break it down into four separate draw calls. Similarly, if you submit a draw call to the graphics queue that has to wait on the compute queue, or vice versa, you could introduce a stall that would prevent higher-priority tasks taking precedence.
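    To make that concrete, here's a rough sketch (my own illustration, not from Nvidia's doc) of feeding the GPU in slices rather than one big batch, so there's a natural boundary between submissions:

    // Sketch only: submit the frame as several small command lists instead of
    // one long one. Each ExecuteCommandLists call gives the driver/GPU a
    // natural boundary; whether it can actually switch work there is down to
    // the hardware.
    #include <d3d12.h>

    void SubmitInSlices(ID3D12CommandQueue* queue,
                        ID3D12CommandList* const* slices, size_t count)
    {
        for (size_t i = 0; i < count; ++i)
        {
            // One slice at a time, rather than one call with all of them,
            // which would be one long uninterruptible chunk of work.
            queue->ExecuteCommandLists(1, &slices[i]);
        }
    }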

    EDIT: Just so I don't get accused of favouritism, the above applies to both AMD and Nvidia hardware equally (AFAIK). Submitting work to either the graphics or compute queue that either takes a long time to process or relies on the output from another queue before it can complete is a bad idea, as it locks that queue, preventing it from processing higher-priority jobs.
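    For completeness, that cross-queue dependency is expressed with a fence in DX12. A minimal sketch (again my own illustration, not from the checklist):

    // Rough sketch: a fence makes the compute queue wait on the graphics
    // queue. Everything submitted to computeQueue after the Wait() is held
    // back until the fence reaches 1, so a long wait here is exactly the
    // kind of queue lock-up described above.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void ChainQueues(ID3D12Device* device,
                     ID3D12CommandQueue* graphicsQueue,
                     ID3D12CommandQueue* computeQueue)
    {
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        // ... submit graphics work to graphicsQueue here ...
        graphicsQueue->Signal(fence.Get(), 1);  // graphics done -> fence = 1

        computeQueue->Wait(fence.Get(), 1);     // compute queue blocks until then
        // ... submit compute work that depends on the graphics output ...
    }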

    Quote Originally Posted by HalloweenJack View Post
    So not only is Nvidia telling devs to use NvAPI (*cough* not DX12), they're also telling them not to use async compute, another non-DX12 thing...
    Suggesting that devs make use of something when available is very different from telling people it must be used, or that a certain feature is a requirement.
    Last edited by Corky34; 29-09-2015 at 09:31 AM.

  7. #22
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by Corky34 View Post
    I'm not sure the Maxwell async deficiency is a thing, unless you're talking about Maxwell 1, as Maxwell 2 is perfectly capable of doing async compute.
    Neither am I, which is why I'm always careful to say that. Is Maxwell 2 able to do async compute in hardware? I was under the impression that neither version of Maxwell is capable of it, relying instead on software scheduling, and hence (or at least partly contributing towards) the results of things like AotS. As I think has been mentioned before, a compute deficiency in Maxwell wouldn't be all that surprising given the architecture's focus, which seems to be strongly on graphics and less towards the compute/HPC market. (I wonder if Nvidia are likely to continue a compute-graphics-compute cadence?) Like a lot of new features though, not all games are necessarily going to use async compute to a great extent, so either way the impact will likely vary between games.

    Quote Originally Posted by Corky34 View Post
    EDIT: Just so I don't get accused of favouritism, the above applies to both AMD and Nvidia hardware equally (AFAIK). Submitting work to either the graphics or compute queue that either takes a long time to process or relies on the output from another queue before it can complete is a bad idea, as it locks that queue, preventing it from processing higher-priority jobs.
    The quote is a bit ambiguous, but that's not the way I interpreted it, even looking back at it now. In theory (and if I understand it correctly), dispatching work to separate queues shouldn't result in another thread stalling; hence the asynchronous part. There might be dependencies at a higher level, e.g. everything needing to be done in time for the next frame, but worst case that sort of thing would likely just mean you don't get much speed-up from dispatching separate queues, in line with Amdahl's law I guess.
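    For what it's worth, creating the extra queue is the trivial part; the scheduling behind it is the open question. A minimal sketch (purely my own illustration):

    // Minimal sketch: a dedicated compute queue alongside the usual
    // direct/graphics queue. The API just lets you ask for parallel
    // submission; whether the work actually overlaps on the GPU is down to
    // the architecture and driver.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
        desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }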

    The way I read it (not saying I'm right) is that the 'scheduling details' and the workloads which 'can be scheduled together' imply something at the architectural level, which fits in with what I've been reading about Maxwell's async compute ability: if there's only one hardware queue then you're likely to run into issues if you're trying to do different types of work in the same warp (or is it wavefront? I forget which one is the Nvidia term). Multiple hardware queues on e.g. GCN, which we know exist as the ACEs, allow, in theory, more granularity over the execution resources. Of course I could be miles off target though.

    Edit: I agree in principle with things like checking you're actually going to benefit from using a given feature, but of course it's important to consider more than one vendor's architecture. Just because you can use something doesn't mean you should; a certain way of implementing something sub-optimally might not have much of an impact on one architecture but could cripple another, and it's important to consider future architectures too, especially when you're working closer to the hardware with DX12.

  8. #23
    Member
    Join Date
    Mar 2012
    Location
    UK
    Posts
    133
    Thanks
    6
    Thanked
    5 times in 5 posts
    • Primey0's system
      • Motherboard:
      • Gigabyte X570 GAMING X
      • CPU:
      • AMD Ryzen 5 5600X
      • Memory:
      • Corsair Vengeance RGB Pro 32GB
      • Storage:
      • Corsair MP400 2TB NVME
      • Graphics card(s):
      • Palit GeForce RTX 3070 8 GB
      • PSU:
      • EVGA 850W GQ, 80+ GOLD
      • Case:
      • Lian Li PC-O11 Air
      • Operating System:
      • Windows 10

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by Corky34 View Post
    Didn't take long for the AMD fanboys to surface, I see.
    Shame one doesn't even know async compute isn't a requirement of DX12.
    Oh yeah, I'm totally an AMD fanboy. I don't have a 970 (3.5GB, hurr durr) in my system or anything. Oh wait, I do. Bashing Nvidia doesn't make you an AMD fanboy.

    Async compute MAY not be a requirement, but it's going to be used heavily.

    Quote Originally Posted by nobodyspecial View Post
    They'll likely turn it on when needed, if AMD ever musters drivers to beat them across the board.
    I guess you've not seen the DX12 benchmarks for Fable Legends and Ashes of the Singularity? AMD blows Nvidia out of the water. The only win Nvidia gets is with the 980 Ti.

  9. #24
    Member
    Join Date
    Mar 2012
    Location
    UK
    Posts
    133
    Thanks
    6
    Thanked
    5 times in 5 posts
    • Primey0's system
      • Motherboard:
      • Gigabyte X570 GAMING X
      • CPU:
      • AMD Ryzen 5 5600X
      • Memory:
      • Corsair Vengeance RGB Pro 32GB
      • Storage:
      • Corsair MP400 2TB NVME
      • Graphics card(s):
      • Palit GeForce RTX 3070 8 GB
      • PSU:
      • EVGA 850W GQ, 80+ GOLD
      • Case:
      • Lian Li PC-O11 Air
      • Operating System:
      • Windows 10

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    EDIT: Double post sorry

  10. #25
    Registered User
    Join Date
    Sep 2015
    Posts
    2
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Just thought I'd throw this out there for the uneducated: Can't post links yet, but google "nvidia async compute" and click on the reddit page.

  11. #26
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,036
    Thanks
    1,877
    Thanked
    3,378 times in 2,715 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by kickbuttpotato View Post
    Just thought I'd throw this out there for the uneducated: Can't post links yet, but google "nvidia async compute" and click on the reddit page.
    How does that help the uneducated? It's just another user posting their take on the benchmarks like we've already done here in other threads.

  12. #27
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by watercooled View Post
    Neither am I, which is why I'm always careful to say that. Is Maxwell 2 able to do async compute in hardware? I was under the impression that neither version of Maxwell is capable of it, relying instead on software scheduling, and hence (or at least partly contributing towards) the results of things like AotS. As I think has been mentioned before, a compute deficiency in Maxwell wouldn't be all that surprising given the architecture's focus, which seems to be strongly on graphics and less towards the compute/HPC market. (I wonder if Nvidia are likely to continue a compute-graphics-compute cadence?) Like a lot of new features though, not all games are necessarily going to use async compute to a great extent, so either way the impact will likely vary between games.
    AFAIK, and I'd be grateful if someone could correct me if I'm mistaken, Maxwell 2 (the 9-series cards) was the first Maxwell able to be supplied with compute tasks asynchronously alongside graphics tasks. That's not to say previous Maxwells are incapable of doing compute tasks, just that Maxwell 2 was the first able to be supplied with a graphics task and a compute task at the same time.

    You're correct (AFAIK) when you say Nvidia has parts of the scheduling in software (drivers), and this is why I think they recommend devs break their work down into smaller jobs. Once a job is submitted to the GPU, software can't send an interrupt or pause a running job; it has to wait until the GPU signals it's completed a job before being able to do anything else. Send a long job to the GPU and you could block a higher-priority task from being run; better to break a 10ms job down into five 2ms jobs, for example, so that every 2ms you have a chance to interrupt the GPU.

    Now, I know AMD do their scheduling in hardware (it's one of the reasons they run hotter). What I'm not sure on is how AMD handle their scheduling. It's possible it's very similar to how Nvidia handle it, in that once a job is submitted to the GPU it can't be interrupted until it's done, although I find that doubtful, as one of the biggest advantages of hardware-based scheduling is that in theory it can, or at least should be able to, pause or interrupt an already-running job if a higher-priority job needs processing. It would be nice if someone with more knowledge of AMD's scheduling could either confirm or deny that.

    Quote Originally Posted by watercooled View Post
    The quote is a bit ambiguous, but that's not the way I interpreted it, even looking back at it now. In theory (and if I understand it correctly), dispatching work to separate queues shouldn't result in another thread stalling; hence the asynchronous part. There might be dependencies at a higher level, e.g. everything needing to be done in time for the next frame, but worst case that sort of thing would likely just mean you don't get much speed-up from dispatching separate queues, in line with Amdahl's law I guess.
    Yeah, I'm a little hazy on it too. While I think you're correct when you say dispatching work to separate queues shouldn't result in another thread stalling, you have to take into account things like VR.
    Say all the work has been done on a frame that's going to be displayed, and it's just sitting there waiting. Now, what happens if, in something like VR, the person moves their head? That frame needs to be sent back to be adjusted for the correct perspective, and it needs to be done PDQ. In theory, if you're running a 10ms job on the GPU, you have to wait until that's done before being able to re-render an already completed frame.
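    Incidentally, DX12 does let you ask for a high-priority queue for exactly that kind of late-stage VR work; whether the hardware can actually preempt to service it is another matter. A rough sketch (mine, purely illustrative):

    // Sketch: a high-priority compute queue for latency-critical work such
    // as VR reprojection. The priority is only a hint; if a long job already
    // occupies the GPU and can't be preempted, this still has to wait.
    #include <d3d12.h>

    ID3D12CommandQueue* MakeHighPriorityComputeQueue(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;

        ID3D12CommandQueue* queue = nullptr;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        return queue;
    }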

    Also, the speed-up, I would think, is very dependent on what you're asking the GPU to do: if you're asking it to wait for output from graphics or compute before it can start a job, and it's not doing anything else during that time, then that's time wasted; likewise if you have to resubmit something, etc.

    Quote Originally Posted by watercooled View Post
    The way I read it (not saying I'm right) is that the 'scheduling details' and the workloads which 'can be scheduled together' imply something at the architectural level, which fits in with what I've been reading about Maxwell's async compute ability: if there's only one hardware queue then you're likely to run into issues if you're trying to do different types of work in the same warp (or is it wavefront? I forget which one is the Nvidia term). Multiple hardware queues on e.g. GCN, which we know exist as the ACEs, allow, in theory, more granularity over the execution resources. Of course I could be miles off target though.
    Yes and no. You can run into issues with a single queue if you don't sort things out before submitting them to that queue. If, for example, I have a 5ms compute task, a 10ms graphics task, and another 15ms compute task, totalling 30ms of GPU time: if I split up those tasks and submit the first 5ms compute task and half the graphics task at the same time, I save myself 5ms; if I then submit the other half of the graphics task at the same time as the last compute task, I've saved myself another 5ms, reducing the total time taken from 30ms to 20ms, all while still only using a single queue that would have taken 30ms if I had submitted those tasks as a single job.
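    Spelling out that arithmetic as a two-batch timeline (batch 1 runs the first compute task alongside half the graphics task; batch 2 runs the other half alongside the second compute task):

    \[ t_{\text{serial}} = 5 + 10 + 15 = 30\ \text{ms} \]
    \[ t_{\text{split}} = \max(5,\,5) + \max(5,\,15) = 5 + 15 = 20\ \text{ms} \]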

    There's no doubt having a single queue is not ideal, but until DX12/Mantle that's all there was. Making good use of parallel submission now that it's possible involves a lot of thought, work, and juggling of both the jobs you submit to the GPU and the GPU's time.

    Quote Originally Posted by watercooled View Post
    Edit: I agree in principle with things like checking you're actually going to benefit from using a given feature, but of course it's important to consider more than one vendor's architecture. Just because you can use something doesn't mean you should; a certain way of implementing something sub-optimally might not have much of an impact on one architecture but could cripple another, and it's important to consider future architectures too, especially when you're working closer to the hardware with DX12.
    Indeed. With DX12/Vulkan, game developers have been given the power that was previously written into drivers; while a developer can spend more time optimising their game to get it running better than AMD's and Nvidia's driver development teams would have, they can also screw things up royally for either of them.

    Sorry for the rather long post, I think I got carried away.

  13. #28
    Registered User
    Join Date
    Sep 2015
    Posts
    2
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by kalniel View Post
    How does that help the uneducated? It's just another user posting their take on the benchmarks like we've already done here in other threads.
    You haven't read this thread then.

  14. #29
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by kickbuttpotato View Post
    You haven't read this thread then.
    I have to agree with kalniel in that I don't have a clue what point you're trying to make?

  15. #30
    Token 'murican GuidoLS's Avatar
    Join Date
    Apr 2013
    Location
    North Carolina
    Posts
    806
    Thanks
    54
    Thanked
    110 times in 78 posts
    • GuidoLS's system
      • Motherboard:
      • Asus P5Q Pro
      • CPU:
      • C2Q 9550 stock
      • Memory:
      • 8gb Corsair
      • Storage:
      • 2x1tb Hitachi 7200's, WD Velociraptor 320gb primary
      • Graphics card(s):
      • nVidia 9800GT
      • PSU:
      • Corsair 750w
      • Case:
      • Antec 900
      • Operating System:
      • Win10/Slackware Linux dual box
      • Monitor(s):
      • Viewsonic 24" 1920x1080
      • Internet:
      • AT&T U-Verse 12mb

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Quote Originally Posted by watercooled View Post
    I have to agree with kalniel in that I don't have a clue what point you're trying to make?
    I don't think the reddit commentary was really the best thing found with that Google search, but it was sort of interesting, in an off-hand kind of way. More of a 'yes it is, no it isn't' kind of thing... It's all semantics, and how a company wants to define a word.

    Of course, following that Google result a little further, one would have noticed a rather interesting link to Guru3D... I'm pretty sure they're considered mostly, if not totally, neutral.

    Quote Originally Posted by Guru3D
    NVIDIA Will Fully Implement Async Compute Via Driver Support

    And they've got Oxide, developer of Ashes of the Singularity, to confirm that. Oxide's developer "Kollock" wrote that NVIDIA has not yet fully implemented Async Compute in its driver; Oxide is working closely with them in order to achieve that.

    “We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We’ll keep everyone posted as we learn more.”
    Guru3d Source

    So what that indicates to me is that, as per usual, another AMD rep has once again stuck their dung-covered foot into their mouth. Not just one, but two of them, even if one of them is trying to pretend to be both retired from AMD and neutral on the topic.

    The saddest part of all of this is that what was once (possibly) an interesting set of talking points, which some people are actually trying to have, has turned into just another click-bait thread for the fanboys on either side of the aisle. And the hilarious part is that the ones screaming loudest about who is best are the ones running mid-range hardware three or four generations old, who won't ever spend the money on the parts they're defending.

  16. #31
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    I think part of the argument is a little too broad, i.e. either Nvidia support async or they don't. But in reality there's more to it than that; from what I can tell, async is or will be supported at a high level, but the actual implementation is quite different to AMD's, and hence performance might differ depending on individual applications.

    If that's correct so far, it doesn't sound all that different to the tessellation debate at a high level. AMD actually supported it in hardware first, but IIRC it was seldom used until the APIs made it mandatory and hence Nvidia implemented it. However, in that case it turned out to be Nvidia who ended up with higher tessellation performance. Not unlike what we're seeing now, the impact this had depended on the game: from what I saw, both parties had more than enough tessellation performance for sane implementations, but a couple of games for whatever reason used silly levels which needlessly hurt performance, more so on AMD cards than Nvidia. It was completely inexcusable where it was used in some cases, so it was either a massive oversight by the devs or something less savoury was going on.

    That sort of thing overall just harms consumers. In theory, if AMD's async performance is much superior, or if architectural idiosyncrasies cause problems with porting certain code over to Maxwell's implementation, then it would be possible for an unscrupulous developer to deliberately de-optimise the code and have it harm Nvidia performance more than AMD. Hopefully we won't get that.

    On the other hand, as I think I've said before, I completely understand developers wanting to exploit a feature in order to improve performance and/or their workflow, just as long as it's for something useful. So yeah, I agree with Nvidia's advice along the lines of 'check if it actually helps', as long as that doesn't just apply to Maxwell. And if using a feature would lock out users of other architectures, then really you'd want an option to play without a given effect/feature.
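    As an aside, measuring 'does it actually help' is straightforward enough with DX12's timestamp queries. A rough sketch (my own illustration; assumes the query heap and readback buffer already exist):

    // Rough sketch: bracket a workload with timestamp queries to measure
    // whether e.g. moving it to a separate compute queue actually helps.
    // Assumes queryHeap is a 2-slot D3D12_QUERY_HEAP_TYPE_TIMESTAMP heap and
    // readback is a 16-byte readback buffer, both created elsewhere.
    #include <d3d12.h>

    void BracketWithTimestamps(ID3D12GraphicsCommandList* cmdList,
                               ID3D12QueryHeap* queryHeap,
                               ID3D12Resource* readback)
    {
        cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);  // start
        // ... record the workload being measured here ...
        cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);  // end
        cmdList->ResolveQueryData(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                                  0, 2, readback, 0);
    }

    // After the command list has executed, map 'readback' for two UINT64
    // tick values and convert the delta to milliseconds:
    double TicksToMs(UINT64 startTicks, UINT64 endTicks, ID3D12CommandQueue* queue)
    {
        UINT64 freq = 0;
        queue->GetTimestampFrequency(&freq);  // ticks per second
        return double(endTicks - startTicks) * 1000.0 / double(freq);
    }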

    I also wonder about Kepler, as that's what a lot of gamers own.

  17. #32
    Registered+
    Join Date
    Oct 2011
    Posts
    96
    Thanks
    0
    Thanked
    0 times in 0 posts
    • canopus72's system
      • Motherboard:
      • Asus sabretooth X58
      • CPU:
      • I7-960
      • Memory:
      • 12GB Kingston HyperX @1600mhz
      • Storage:
      • 6TB
      • Graphics card(s):
      • 560 Ti SOC
      • PSU:
      • coolermaster gold 1200watt
      • Case:
      • CM HAF932
      • Operating System:
      • W7 64bit
      • Monitor(s):
      • Philips 234EL

    Re: Nvidia publishes DX12 Do's and Don'ts checklist for developers

    Useful do's and don'ts advice for Nvidia -

    1) Don't lie to your customers.
    2) Don't rip off your customers, bearing in mind the paltry performance of Maxwell in DX12.
    3) Don't steal HBM tech from AMD.
    4) Don't behave like a petulant, psychotic brat if your over-hyped GPUs can't walk the walk.
    5) Don't bribe game devs ('Watch Dogs'... ahem) to cripple AMD GPU performance.
    6) Don't emulate Intel by using illegal methods to pilfer the lion's share of the GPU market.
    7) DO GET SUED AS MUCH AS YOU CAN (GTX 970 RAM FIASCO, $1 BILLION CLASS-ACTION LAWSUIT).
