That would depend on the workload/type of the servers running in the VMs.
it would, but as I say I've yet to find a host that's been CPU-constrained - it's RAM that's usually the bottleneck (unless you are doing a big old VDI setup)
RAM is also less beneficial to overcommit in production.
I went with the six-core CPUs; the R810s have 32 DIMM slots and only 16 are in use for the 64GB of RAM [16 x 4GB sticks]
This leaves me with spare slots to play with in 2012 :)
This platform will only be heavily used from Sep/Oct, so I can easily get money signed off in Jan 2012 for another 64GB of RAM, which should make it nice and sweet.
fair enough :) not having to use 8GB Sticks will save a bit :)
I'm meeting with the vendors of the application so we can iron out all the requirements.
Bit surprised here that they don't use 64-bit OSes, given 32-bit limits usable memory to around ~3.5GB - not all of them are 32-bit though, there might be 4-5 64-bit ones.
Not sure, might be something to do with their application.
Quote:
Not sure, might be something to do with their application.

it may well suck ;)
make sure they do support it in a virtual environment (get it in writing), or you will run into the biggest problem in virtualisation - "Layer 8 - Egos, Vendors and Poorly Coded Applications"
OK, I got confirmation from the vendors that they do support virtual environments - got this in writing as well.
Got another question - what's the best practice when it comes to SQL server(s) and virtualising them? My vendor requires 3 instances of the same database and options are:
3 separate VMs to run 1 instance each,
1 VM to run 3 instances in a cluster although vMotion won't be possible [correct?],
As above but without the Microsoft clustering i.e. 1 VM to run 3 instances.
Is there any other option that I have missed?
What's the best way of going about this? I find myself a bit out of depth when it comes to databases to be perfectly honest.
Any input much appreciated.
it's going to come down to licence cost - personally I'd have 3 VMs, but how heavy are your uptime requirements (i.e. can you take down the app when you need to apply a service pack)?
I know for sure that patching the application itself will come with downtime; this is how maintenance on the current Information System works.
Patching the OS itself or SQL - hmm, not sure on this one; I suppose scheduling 1-2h windows every so often shouldn't be a problem.
When you say licence cost do you mean SQL or the OS? As part of the Campus Agreement with MS we get the OS covered, so only SQL would potentially be a problem.
I'll have to refresh myself on SQL licencing, but is the application internet-facing and not tied down to a specific number of users? It'll make the difference between per-user / per-processor licences IIRC.
Do they need 3 instances of SQL or just 3 copies of the DB on a given system ?
there is a little bit of a loophole you can use for your non-production systems - buy an MSDN subscription for everyone who accesses the non-prod environment and you are covered.
SQL Licence Guide :
http://download.microsoft.com/downlo...w%20final.docx
Am I right in thinking that ESX is going to be retired in favour of ESXi?
I cannot recall where but I've read this somewhere...
This is how I have allocated 8 NICs I have per server:
Active/Passive Team:
NIC0 - vMotion [NIC0 Active, NIC1 Passive]
NIC1 - Management Network [NIC1 Active, NIC0 Passive]
NIC2 - DMZ
Active/Active Team or LACP Trunk [pros/cons?]:
NIC3 - iSCSI
NIC4 - iSCSI
Active/Active Team:
NIC5 - VM Network
NIC6 - VM Network
NIC7 - VM Network
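If it helps to sanity-check, that layout could be scripted on each host roughly like this with esxcfg-vswitch (the vmnic numbering, vSwitch numbers and portgroup names below are my assumptions - check yours with esxcfg-nics -l; the active/passive failover order still has to be set per portgroup in the NIC teaming tab):

```shell
# Sketch only - assumed vSwitch/vmnic layout matching the NIC plan above.

# vSwitch0: management + vMotion (active/passive set per portgroup in the client)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0        # NIC0
esxcfg-vswitch -L vmnic1 vSwitch0        # NIC1
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -A "Management Network" vSwitch0

# vSwitch1: DMZ on its own uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1        # NIC2
esxcfg-vswitch -A "DMZ" vSwitch1

# vSwitch2: the iSCSI pair
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2        # NIC3
esxcfg-vswitch -L vmnic4 vSwitch2        # NIC4
esxcfg-vswitch -A "iSCSI" vSwitch2

# vSwitch3: VM traffic across three uplinks
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3        # NIC5
esxcfg-vswitch -L vmnic6 vSwitch3        # NIC6
esxcfg-vswitch -L vmnic7 vSwitch3        # NIC7
esxcfg-vswitch -A "VM Network" vSwitch3
```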
Moby, can you see any potential issues with this?
Adrian
you are kind of right , ish :)
the next version of ESX to be released will not have a service console in the way that the current "full-fat" ESX does.
I went straight from full-fat ESX 3.5 to ESXi 4.0 so I'm used to all the processes. If you are deploying at this stage it would seem odd not to deploy ESXi - it will make future upgrades much less painful.
in day-to-day terms I can't say it's made a lot of difference - just make sure you set up a vMA appliance and point syslogging for your hosts at it. everything else can be done with PowerShell (grab a copy of Quest's PowerGUI and the VMware Communities PowerPack)
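For the syslog bit, a minimal sketch run from the vMA itself, using vicfg-syslog - the hostnames are made up, substitute your own host and vMA names:

```shell
# Point an ESXi host's syslog at the vMA appliance (assumed hostnames).
vicfg-syslog --server esx01.example.local --setserver vma01.example.local --setport 514

# Confirm the remote syslog target took effect on the host.
vicfg-syslog --server esx01.example.local --show
```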
I have transferred the network settings above to vSphere - does this look right? Kinda learning on the job here, as I don't exactly have a spare server with 8 physical NICs to test on...
http://dl.dropbox.com/u/1471771/SIS%...%20Project.png
Please ignore the IP settings.
Most of the VMs will go under vSwitch3, plus the front-facing web nodes on vSwitch1.
I'd be tempted to run a second NIC for the DMZ - or have you only got a single DMZ switch, so there's no point trying to provide a redundant network path?
It's a single switch at the moment and I think it will stay this way - I don't have any input here but might try to influence future decisions.
Apart from that does the diagram look alright?
vCenter as a VM, but what are you wanting to do network-wise with it - are you wanting to isolate it on the network?