
Thread: WS2000 cluster failover vs WS2003

  1. #1
    Senior Member
    Join Date
    Jul 2003
    Location
    Reading, Berkshire
    Posts
    1,253
    Thanks
    64
    Thanked
    53 times in 34 posts
    • tfboy's system
      • Motherboard:
      • MSI X470 Gaming Plus
      • CPU:
      • AMD Ryzen 7 2700
      • Memory:
      • 2x8GB Corsair Vengeance LPX
      • Storage:
      • Force MP600 1TB PCIe SSD
      • Graphics card(s):
      • 560 Ti
      • PSU:
      • Corsair RM 650W
      • Case:
      • CM Silencio 550
      • Operating System:
      • W10 Pro
      • Monitor(s):
      • HP LP2475w + Dell 2001FP
      • Internet:
      • VM 350Mb

    WS2000 cluster failover vs WS2003

    Hi all. Wondering if someone can advise:

    I have two separate clusters. One is running Windows Server 2000 (Advanced of course), the other is running Windows Server 2003 Adv.

    The hardware for each cluster is identical: a two-node setup of HP DL380s connected to MSA5000 clustered storage via SCSI. Each DL380 has two NICs: NIC#1 is connected to the main LAN, and NIC#2 is connected to the other node with a crossover cable to carry the heartbeat.

    Now, node 1 has control of the cluster. If we disable both NICs, the cluster fails over to node 2. If we shut down the server, the cluster fails over to node 2. The ultimate test was to just pull the network cables out to simulate a failure (without pulling the mains out). While this works fine with 2003 (it fails over), it does NOT fail over on the 2000 cluster. Is this normal?
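    (For reference, what I'm watching during these tests is which node owns the resource groups. A rough way to poll that - assuming Python is available and using the stock cluster.exe command line that comes with the cluster service, run on one of the nodes - would be something like:

        import subprocess
        import time

        def show_group_owners():
            # "cluster group" lists each resource group and the node that
            # currently owns it (Windows 2000/2003 server cluster CLI).
            result = subprocess.run(["cluster", "group"],
                                    capture_output=True, text=True)
            print(result.stdout.strip())

        # Poll every 10 seconds while NICs are disabled / cables are pulled
        # on node 1, to see exactly when (or whether) ownership moves to node 2.
        while True:
            show_group_owners()
            time.sleep(10)

    The same information is visible in Cluster Administrator, of course.)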

    A colleague thinks it is, on the basis that node 1, which has control, sees the heartbeat disappear and therefore assumes node 2 cannot take control, so it doesn't release it. My argument is that the connection to the LAN (NIC#1) has also been lost, so there wouldn't be any clients able to connect to the cluster anyway; it might as well release control to node 2.

    I suppose technically it can still hold on to the resources, because its SCSI connection to the storage is still alive, so node 2 cannot take control of the data on the storage.

    Is this the case, and is WS2003 just smarter than WS2000 about failing over?

    Thoughts?

    Cheers
    Last edited by tfboy; 08-05-2006 at 09:20 AM.

  2. #2
    Administrator Moby-Dick's Avatar
    Join Date
    Jul 2003
    Location
    There's no place like ::1 (IPv6 version)
    Posts
    10,665
    Thanks
    53
    Thanked
    385 times in 314 posts
    I'd love to help, but clustering isn't one of my strong points. I'll ask around in the office; I'm sure our mail admin (who seems to be in love with his Exchange clusters) will be able to shed some more light on the subject.
    my Virtualisation Blog http://jfvi.co.uk Virtualisation Podcast http://vsoup.net

  3. #3
    Administrator Moby-Dick's Avatar
    Darn - I'd forgotten all our clusters are 2K3, so I don't have anything to compare against.

    The only official comparison I can find between 2000 and 2003 clusters is here:

    http://www.microsoft.com/windowsserv...lustering.mspx

  4. #4
    Senior Member tfboy
    Thanks MD. We eventually did the ultimate destructive test of pulling the power supplies out.

    W2K failed over OK.

    PMSL at the "in love with his Exchange clusters"!!!!! You've just made my day.

  5. #5
    Administrator Moby-Dick's Avatar
    Good stuff.

    I'm not a huge clustering fan (I think they are often mis-sold as 100% uptime solutions, hence management expectations tend to far exceed what a cluster can deliver).
    None of my SQL boxes are clustered, but I do have some hot standbys I can bring into play with a restore and a DNS change.
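    The DNS-change half of that is scriptable too, if you want it quick. A rough sketch using dnscmd (from the Windows DNS tools) - all the names and the address below are made-up placeholders:

        import subprocess

        DNS_SERVER = "dns01"          # placeholder DNS server name
        ZONE       = "example.local"  # placeholder zone
        RECORD     = "sqlbox"         # name the clients connect to
        STANDBY_IP = "192.168.0.20"   # placeholder standby address

        # Remove the old A record (/f skips the confirmation prompt), then
        # point the name at the standby. Clients follow once their cached
        # records expire (TTL).
        subprocess.run(["dnscmd", DNS_SERVER, "/RecordDelete", ZONE, RECORD, "A", "/f"])
        subprocess.run(["dnscmd", DNS_SERVER, "/RecordAdd", ZONE, RECORD, "A", STANDBY_IP])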

  6. #6
    Ex-MSFT Paul Adams's Avatar
    Join Date
    Jul 2003
    Location
    %systemroot%
    Posts
    1,926
    Thanks
    29
    Thanked
    77 times in 59 posts
    • Paul Adams's system
      • Motherboard:
      • Asus Maximus VIII
      • CPU:
      • Intel Core i7-6700K
      • Memory:
      • 16GB
      • Storage:
      • 2x250GB SSD / 500GB SSD / 2TB HDD
      • Graphics card(s):
      • nVidia GeForce GTX1080
      • Operating System:
      • Windows 10 x64 Pro
      • Monitor(s):
      • Philips 40" 4K
      • Internet:
      • 500Mbps fiber
    Could it be the SCSI drivers used on the 2K cluster, maybe?

    If a node owns a resource group and loses connectivity to its clients, then it should eventually do a bus reset so that another node with a functioning client-facing network interface can acquire the resources.
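    One way to see whether the 2K node even tries to release the disks is the cluster service log - by default %SystemRoot%\Cluster\cluster.log on 2000/2003 (the ClusterLog environment variable controls it). A quick-and-dirty sketch in Python, assuming it's installed; the search keywords are guesses, as the exact log wording varies between versions:

        import os
        import re

        # Default cluster log location on Windows 2000/2003; the ClusterLog
        # environment variable can point it somewhere else.
        log_path = os.path.expandvars(r"%SystemRoot%\Cluster\cluster.log")

        # Keyword guesses for reservation / arbitration activity on the
        # physical disk resources - adjust to whatever your log actually says.
        pattern = re.compile(r"arbitrat|reserv|physical disk", re.IGNORECASE)

        with open(log_path, errors="replace") as log:
            for line in log:
                if pattern.search(line):
                    print(line.rstrip())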

    Unfortunately my cluster knowledge only extends to W2K3, so I'm not sure how much of the best practice and expectations apply to W2K.
    ~ I have CDO. It's like OCD except the letters are in alphabetical order, as they should be. ~
    PC: Win10 x64 | Asus Maximus VIII | Core i7-6700K | 16GB DDR3 | 2x250GB SSD | 500GB SSD | 2TB SATA-300 | GeForce GTX1080
    Camera: Canon 60D | Sigma 10-20/4.0-5.6 | Canon 100/2.8 | Tamron 18-270/3.5-6.3

  7. #7
    Senior Member
    Join Date
    Dec 2005
    Location
    ::1
    Posts
    204
    Thanks
    4
    Thanked
    9 times in 8 posts
    • chinny's system
      • Motherboard:
      • Asus P5Q-EM
      • CPU:
      • Intel E6300
      • Memory:
      • 4Gb Corsair XMS2
      • Operating System:
      • Win7 x64
    Quote Originally Posted by Moby-Dick
    Good stuff.
    I'm not a huge clustering fan (I think they are often mis-sold as 100% uptime solutions, hence management expectations tend to far exceed what a cluster can deliver).
    I'll second that. We used to run clusters - they started as Win2K / Ex2K and then moved up to Win2K3 and Ex2K3.
    The expectation from management was that they would never fail and we'd have 100% uptime - the reality was quite different.
    We only had a few failures, but when a service did fail it would ping-pong between the two nodes until it finally died.

    We've now gone over to DL380s in a standard non-clustered setup and they're much better.
    I'm using GFI NSM to monitor them and restart the Exchange services if they fail - it works much better than clustering.
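    For anyone without NSM, the "restart it if it dies" part is only a few lines of script anyway. A crude sketch (assuming Python; the service names are just the usual Exchange ones - Information Store and System Attendant - so adjust to suit, and it leans on the standard sc/net tools):

        import subprocess
        import time

        # Example Exchange service names - change these for your own services.
        SERVICES = ["MSExchangeIS", "MSExchangeSA"]

        def is_running(name):
            # "sc query" reports a STATE line containing RUNNING when the
            # service is up.
            out = subprocess.run(["sc", "query", name],
                                 capture_output=True, text=True).stdout
            return "RUNNING" in out

        while True:
            for svc in SERVICES:
                if not is_running(svc):
                    # "net start" blocks until the service starts (or fails).
                    subprocess.run(["net", "start", svc])
            time.sleep(60)

    (A real monitor like NSM obviously does far more - alerting and so on - but that's the gist of the restart behaviour.)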

    I do seem to remember from back in the day that Win2K clusters had a very limited list of supported SCSI cards. Check http://www.microsoft.com/whdc/hcl/search.mspx and see if your SCSI adapter is on the list... I seem to recall there weren't very many on it.

  8. #8
    Senior Member tfboy
    Chinny, that's an interesting point you raise about SCSI compatibility. I'm sure it should be OK, as it's sold as a clustered solution (2x DL380 + MSA5000), but with no OS, so if only Win2K is installed it may be problematic.

    On the other hand, it does have SP4 and uses the standard MS-certified drivers, so it should be OK...

    I installed the W2K3 cluster, but not the 2000 one, so I don't know exactly how it was set up, or whether it should fail over in the first place when both the heartbeat and LAN connections are removed.

  9. #9
    Senior Member chinny
    If it's an MSA500 package (like the one here) then it's certified for Windows 2003, but there's no mention of Windows 2000.
    Like you say, it should be fine with certified drivers though. I would have thought that if the SCSI driver for Windows 2003 has been written to be cluster-compliant, they would have done the same with the Windows 2000 one - especially as it's being sold as a cluster package.

    I'm not sure what would happen when you fail both network cables on the active node. I would have thought the most likely cause for both NICs to fail at the same time would be a motherboard failure, in which case the active node would be dead and the shared storage would be released. The passive node should then take over.
