Re: ESX 3.X and Thecus N5200PRO
Hi Culbeda,
I have been following some of your posts about the Thecus 5200 and I am in the same boat as you with regard to wanting to test ESX 3.5. I have a test rig set up and running, but I am reluctant to spend a heap of cash on a second box and the Thecus if this won't work for VMotion/clustering etc. You mentioned above that this DOES work but there are some caveats. Are you able to elaborate on this a bit and mention some of the things you had to do to get it going, and how well it is performing for you?
Many Thanks
Re: ESX 3.X and Thecus N5200PRO
Quote:
Are you able to elaborate on this a bit and mention some of the things you have to do to get this going? and how well it is performing for you?
I would be happy to. It does work and it is reasonably fast. I don't recommend it for production use, however, because there are some advanced iSCSI features that are not supported, the most important one being support for iSCSI reservations (AKA reserve/release). This feature, as I understand it, reserves certain blocks of data so that they can't accidentally be overwritten by other nodes. Also, since it is not on the supported SAN list, you can't get any support directly from VMware.
All of that being said, I have yet to have any problems with it since I got it working. If you have already configured a node to use the SAN, I would recommend that you disconnect it from the SAN, add your new and old nodes to a VC cluster, and then add the iSCSI target configuration all over again. Failing that, try reinstalling the OS. I don't know if you've ever tried reinstalling VMware ESX on the same box, but it is a surprisingly quick and easy task. Just make sure that you have all of your network configuration documented. Once you re-mount the VMFS volumes, you can open the VMX files by browsing to them, and you're up and running again in about an hour.
If you do that and still have problems, let us know and I'll be happy to send my ietd.conf information from the Thecus.
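For anyone wanting a head start before then, the general shape of an ietd.conf for IET (iSCSI Enterprise Target, which the Thecus firmware uses) looks like the sketch below. The IQN, device path, and parameter values here are placeholders for illustration, not the actual values from anyone's unit:

```
Target iqn.2004-08.com.thecus:n5200.vmfs1
    # Back the LUN with a block device or file on the RAID volume
    Lun 0 Path=/dev/vg0/iscsi0,Type=fileio
    MaxConnections 1
    InitialR2T Yes
```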
Re: ESX 3.X and Thecus N5200PRO
Quote:
Originally Posted by
cwindomsr
Is this patch something you can share? if so, my email address is cwindomsr at hotmail dot com.
I am currently having the same issues trying to get the N5200BPRO to work with ESX Server.
I'd share it, but it's a bit scary to use. The patch itself is fairly simple, but it had to be put back into the (poorly) encrypted compressed loop filesystems that the unit loads on boot. Installing it basically means ssh'ing into the box and then mounting the partition that stores the firmware loop filesystems (that are copied on boot -- not the ones it actually runs from in operation!), and then copying over the relevant one.
The huge drawback of this is that if you ever upgrade the firmware, your LUN IDs will be screwed up again and a new patch would have to be made to fix them. (The data would still be safe; it would just break ESX again.)
Quote:
Originally Posted by
culbeda
I would be happy to. It does work and it is reasonably fast. I don't recommend it for production use, however, because there are some advanced iSCSI features that are not supported, the most important one being support for iSCSI reservations (AKA reserve/release). This feature, as I understand it, reserves certain blocks of data so that they can't accidentally be overwritten by other nodes. Also, since it is not on the supported SAN list, you can't get any support directly from VMware.
VMFS does write reservations in software, so it's not really a problem.
EDIT:
To elaborate, hosts that touch a VMFS place a lock on the VMDKs that they're using, which includes a "heartbeat" timestamp that's continually updated. Other hosts know to respect that lock as long as the heartbeat is still current. Obviously this is no real protection against a rogue modified/buggy VMFS implementation, but then you should trust all the hosts that are accessing your VMFS anyway. =)
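To make the idea concrete, here is a toy Python model of heartbeat-based locking. This is NOT VMFS's actual on-disk format or timeout value, just a sketch of the principle: the lock owner keeps refreshing a timestamp, other hosts back off while that timestamp is fresh, and they may take over once it goes stale.

```python
HEARTBEAT_TIMEOUT = 15.0  # seconds; illustrative value, not VMFS's real timeout

class DiskLock:
    """Toy model of a heartbeat lock: the owner refreshes a timestamp,
    and other hosts treat the lock as stale once it stops updating."""

    def __init__(self):
        self.owner = None
        self.last_heartbeat = 0.0

    def acquire(self, host, now):
        # Grant the lock if it's free, already ours, or the previous
        # owner's heartbeat has gone stale (e.g. that host crashed).
        if self.owner is None or now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.owner = host
            self.last_heartbeat = now
            return True
        return self.owner == host

    def heartbeat(self, host, now):
        # Only the current owner may refresh the timestamp.
        if self.owner == host:
            self.last_heartbeat = now

lock = DiskLock()
assert lock.acquire("esx1", now=0.0)       # esx1 takes the lock
assert not lock.acquire("esx2", now=5.0)   # esx2 respects the live heartbeat
assert lock.acquire("esx2", now=20.0)      # esx1 went quiet, so the lock is stale
```

A real implementation stores the lock record and heartbeat on the shared LUN itself, which is exactly why software locking can stand in for SCSI reserve/release here.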
Re: ESX 3.X and Thecus N5200PRO
You're more than welcome to trust YOUR production environment to a Thecus 5200Pro, but I'll stick with my NetApp. ;-)
You'd probably be better off using NFS for clustering at that point, but then again, I haven't really tested the performance of NFS on the 5200Pro compared to iSCSI with VMFS. Anyone else run the numbers?
Re: ESX 3.X and Thecus N5200PRO
From my experience iSCSI is significantly faster than NFS, and it was a big win to move to it -- VMFS is designed from the ground up for hosting virtual block devices.
Thus far I've had no problems beyond the ones already mentioned, and it's been happy and stable.
I would obviously not be running mission-critical stuff for a Fortune 500 company on the Thecus, but for general purposes (and potentially for a small to medium business) it's been great. I use it for R&D work on stuff that eventually gets pushed out to production on fancier hardware.
Re: ESX 3.X and Thecus N5200PRO
I presume the N5200 isn't in the HCL as an iSCSI SAN then?
Oddly enough, I do know of a very large company still using NFS to host their VMs - it's on NetApp kit, and the architect reckons he gets a performance boost due to some of the optimisations the NetApp kit does internally.
Re: ESX 3.X and Thecus N5200PRO
Is there any news about benefits for ESX with the new 2.00.08 firmware?
Thanks in advance ...
Re: ESX 3.X and Thecus N5200PRO
Quote:
Originally Posted by
dab_ch
Is there any news about benefits for ESX with the new 2.00.08 firmware?
Thanks in advance ...
I am also keen to understand if anyone has any updates regarding firmware 2.08 and ESX 3.5 - especially with iSCSI (with two ESX hosts in a cluster attached to the N5200).
Re: ESX 3.X and Thecus N5200PRO
The Thecus 5200 series will likely never be a supported platform for iSCSI, although that can be said of many more expensive solutions as well. (My $5K (empty) Promise iSCSI appliance isn't either.)
But from a check on the new version, it is still using ietd version 0.4.12, and it is recommended that you use 0.4.15 unless they have added the iSCSI reserve/release patch. (I'm pretty sure they haven't.) All of that being said, I use a N5200BR-Pro with a 2-node ESX cluster for testing and it hasn't caused any problems for me since I got it working.
Short answer: There shouldn't be any significant difference but I would probably use 2.08 regardless just in case.