Maybe a separate network/VLAN for vMotion traffic?
Multiple paths to iSCSI targets?
8 NICs per host is plenty. Is your DMZ on the same segment as the rest of the network?
Don't forget the onboard NICs as well.
I'd keep vMotion on its own network, as well as iSCSI traffic. If you are being extra security conscious you would keep management traffic on its own network too, but it's not essential.
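To make the separation concrete, here is a sketch of how eight pNICs might be split across those networks. The vmnic names and the pairing per network are illustrative assumptions, not anyone's actual config:

```python
# Hypothetical split of 8 pNICs across the separated networks
# suggested above. Names and per-network counts are made up;
# the point is one redundant pair per traffic type.
nic_plan = {
    "management": ["vmnic0", "vmnic1"],
    "vmotion":    ["vmnic2", "vmnic3"],
    "iscsi":      ["vmnic4", "vmnic5"],
    "vm_traffic": ["vmnic6", "vmnic7"],
}

def check_plan(plan, total_nics=8):
    """Every pNIC used exactly once, and each network has a redundant pair."""
    used = [nic for nics in plan.values() for nic in nics]
    assert len(used) == total_nics and len(set(used)) == total_nics
    assert all(len(nics) >= 2 for nics in plan.values())
    return True

print(check_plan(nic_plan))
```

With only four traffic types and eight ports, every network gets a two-NIC team, which also covers a single NIC or cable failure per network.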
my Virtualisation Blog http://jfvi.co.uk Virtualisation Podcast http://vsoup.net
DMZ is on the same segment, yes.
Separating vMotion traffic sounds sensible, and I think it can be advantageous to add a dedicated network for the management port...
It's really hard having to think about all of this without having done it before... I kinda struggle to see the big picture that it's worth £100k lol
/EDIT
Server specs have changed... I will be getting 3 of them:
Height went from 1U to 2U
RAM from 64 to 96GB
CPU changed from E5530 to E7520, both quad-core. The E7520 has a lower frequency but more cache (up from 8 to 18MB).
Any good?
Last edited by spoon_; 17-01-2011 at 06:00 PM.
My Blog => http://adriank.org
RAM is generally the big bottleneck in these things;
however, I'm concerned that there isn't any I/O sizing done. What heads is your NetApp kit running?
Remember, when it comes to the I/O-intensive servers (DB/app workloads), design the drives as if they were physical. Don't skimp on spindles. If you have a heavy read profile on workloads then consider adding some PAM to the NetApp kit.
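As a rough illustration of the "don't skimp on spindles" point, here is a back-of-envelope sizing sketch. The per-disk IOPS figure and the RAID write penalty are rule-of-thumb assumptions, so plug in numbers for your actual workload:

```python
# Back-of-envelope spindle count for an I/O-intensive workload.
# ~175 IOPS for a 15k disk and a write penalty of 2 are textbook
# rules of thumb, not measured values for any specific array.
import math

def spindles_needed(total_iops, read_fraction, disk_iops, raid_write_penalty):
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    backend_iops = reads + writes * raid_write_penalty  # writes cost extra
    return math.ceil(backend_iops / disk_iops)

# e.g. 4000 front-end IOPS, 70% reads, 15k SAS, write penalty 2
print(spindles_needed(4000, 0.7, 175, 2))  # -> 30 spindles
```

The same front-end load on 10k disks (roughly 125 IOPS each) needs noticeably more spindles, which is exactly why the disk choice matters more than raw capacity here.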
If you really want something quick and have cash to spend, I can recommend some seriously clever stuff...
In the first post I said this will be hanging off the existing FC SAN; I lied. Just looked again at the quote and it has a separate box included:
Once my supplier comes back to me I will know more.
I have to admit that it wasn't me who decided on the above specs; I got involved very late, after they signed off the CAPEX to buy hardware. Just trying to salvage the situation here.
Money has been signed off, but I can still make changes as long as they are within the £100k budget.
Can you elaborate on the "seriously clever stuff..." ?
+1 for virtualised vCenter; all our farms run them. As long as you follow VMware's best practice guide you're good, i.e. disable DRS on the vCenter VM so you at least know where it is if there is an issue!
vCenter doesn't control the restarting of VMs in an HA situation, so no worries there.
The network would depend on how many networks you plan to run on the servers. I believe best practice is to separate vmkernel, vMotion, SAN (not required in your instance because you're fibre) and data networks, but that would only give you 1 gig on the data network.
I'd be tempted to run a 4-gig LACP team and have vSphere perform the VLAN tagging; that would remove another point of failure, and it's not like the vmkernel needs a 1-gig link to itself. EDIT: I think you may have 8 ports from the spec, which would be better for running single NICs, but the 3750G should do an 8-port EtherChannel so I'd be tempted to run that.
Apart from that, I agree that vCenter in a virtual machine is fine; HA would probably be a nice addition if you've paid for that.
As has been mentioned before, though, get good disks. I've run 30-40 VMs on a single vSphere host fine on a good 15k SAS solution, but struggled with 8 when running slightly older 10k's.
Unless you need more than 1Gb of bandwidth to a given VM, I reckon you can get away without the EtherChannel and just bind multiple pNICs to a given vSwitch and let vSphere bind a VM to a given vmnic.
VLAN tagging at the portgroup level should be fine, especially if you can trunk VLANs together.
The vmkernel interface is used for vMotion/iSCSI/NFS traffic; I guess you are thinking of the management interface? (Which I agree doesn't really need 1Gb; it's vmkernel that really does need it, for vMotion.) You may have a second vmkernel port for NFS, which you may want at 1Gb, but use the same vSwitch as the management network.
Spoon, as for the seriously clever stuff: I know of a solution that would leverage your NetApp kit for its feature set (dedupe/SnapDrive/SnapMirror) but use its own SAS/SSD/DDR auto-tiering to give you some very fast I/O. It may be out of budget, though (I think they are about 50k a unit).
Spoon, that quote is just for disks. I assume it's an additional shelf to connect to your existing NetApp heads (which is doable if they have the horsepower spare).
Have you thought about things like backup/monitoring?
Sorry, my mistake. I do indeed mean that the management interface doesn't need a full 1 gig of bandwidth while the vmkernel will require it. No iSCSI or NFS here, so you don't need to worry about that.
The EtherChannel also isn't required, but it does have the advantage of letting things leverage the full bandwidth should they require it: you would allow VMs/vMotion to burst past the 1-gig limit when required.
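For what it's worth, a toy model of hash-based uplink selection shows the trade-off. The hash below (XOR of source and destination addresses, mod the number of uplinks) is a simplified stand-in for the real algorithm, but it demonstrates the key property: any single src/dst flow always lands on the same uplink, while many concurrent flows spread across the team.

```python
# Toy model of IP-hash uplink selection on an EtherChannel team.
# Not the real vSphere hash; it just illustrates that placement is
# deterministic per flow, so aggregate traffic spreads out but one
# flow never exceeds a single link's bandwidth.
import ipaddress

def uplink_for(src, dst, n_uplinks):
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# Eight flows from one VM to eight clients, across a 4-link team
for i in range(1, 9):
    dst = "10.0.1.%d" % i
    print("10.0.0.5 ->", dst, "uses uplink", uplink_for("10.0.0.5", dst, 4))
```

So the team raises aggregate throughput and survives a link failure, but a single vMotion or VM conversation is still capped at one link's speed.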
How was the VCAP course BTW?
I'm not a fully fledged packet pusher, so I tend to stay away from complex switch-side configs, but if you are happy with an EtherChannel, that could span both of those 3750s.
I did the design course for the DCD; it was pretty good, especially in terms of some of the "softer" skills around putting a design together. There weren't really any courses around at the time I took the DCA as I did the beta, so it was just a copy of the blueprint, the Train Signal videos and a home lab.
Still, I passed both exams, so I must have got something right.
This is on my to-do list as we speak. One thing is annoying like hell: I got involved so late that it's virtually impossible to think of everything. Everyone wants all of this now, now.
You can't possibly kick out a good design document in 3 days...
Not sure how to accommodate backups; I think it will go under the 'Known Risks' section as the budget is very tight.
Monitoring, however, is doable using built-in alerts, unless you've been thinking of something else?
We do have SolarWinds to monitor the VMs themselves; it's just the hosts that would need attention.
You are correct, the price is for an additional shelf for the NetApp 3020; failing that, a brand new NetApp 3210 will be purchased and dedicated to this project.
I share the same concerns performance-wise looking at the 3020 with yet another shelf. It simply might not have enough spare IOPS to cope...
Last edited by spoon_; 28-01-2011 at 10:44 AM.
The 4.1 alerting is a lot better than the old VI3 alerting and will get you most of the info you need for day-to-day stuff; it's exporting/extrapolating the data into capacity analysis that is a pain.
SolarWinds is pretty good for VM monitoring; I *think* there might be a module to look at host performance too. Remember, a metric shown *within* a VM may not be 100% accurate.
So if you can find something inside your SolarWinds budget, then have a look at this:
http://www.solarwinds.com/products/p...on_management/
However, if you want to do it all in one, then why not try Veeam Essentials (http://www.veeam.com/smb.html)?
It would give you a best-in-class backup product as well as reporting/monitoring modules.
I'm not sure on price, but you might be able to get a deal from them (if you are interested, I have a couple of contacts at Veeam who might even do you a discount).
Aaah, that's a great answer Moby, thanks for that.
I will look into this ASAP.
Another query: looking at the vSphere Advanced feature list, one thing strikes me: DRS isn't included... [I didn't have any input here] With 20-30 VMs surely it must be hard to pick which VM sits on which host?
I mean, to have this done right without DRS, I'd have to assume each VM running/consuming 100% of its resources at all times and then distribute them evenly across all 3 hosts, right?
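You don't strictly have to assume 100% utilisation; a simple greedy spread (biggest VMs first, each onto the host with the most free RAM) gets you close to even by itself. The VM names and sizes below are made up purely for illustration:

```python
# Manual initial placement without DRS: greedy spread by RAM.
# VM names and RAM sizes are invented for the example.
def place_vms(vms, n_hosts, host_ram_gb):
    hosts = [{"free": host_ram_gb, "vms": []} for _ in range(n_hosts)]
    for name, ram in sorted(vms, key=lambda v: -v[1]):  # biggest first
        target = max(hosts, key=lambda h: h["free"])    # most free RAM wins
        target["vms"].append(name)
        target["free"] -= ram
    return hosts

vms = [("db1", 16), ("db2", 16), ("app1", 8), ("app2", 8),
       ("web1", 4), ("web2", 4), ("web3", 4), ("dc1", 2)]
for i, h in enumerate(place_vms(vms, 3, 96)):
    print("host%d: %s (free %dGB)" % (i, h["vms"], h["free"]))
```

Sizing by expected working set rather than worst case, then watching the hosts for a week and shuffling the outliers by hand, is roughly what DRS would be doing for you automatically.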
@Loki
Is LACP fully supported by VMware? I heard there are some issues with this.
Last edited by spoon_; 18-01-2011 at 10:34 AM.
Missing DRS is a bit of a pain, but possibly not the end of the world. VMs should settle down into a pattern, and remember you have 60 days to run with the full feature set before you apply the licences.
Great news!
We are eligible for the Mid-Sized Acceleration Kit, which means vSphere Enterprise is at the same price as the Advanced flavour, so DRS/DPM + Storage vMotion just got added as extra features.
Another dilemma...
96GB of RAM and 2 Quad core CPUs
or
64GB of RAM and 2 Hexa core CPUs
?
There is basically a £200 difference if I want to go 6-core with less RAM.
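A back-of-envelope consolidation check can help frame that dilemma. The per-VM footprint (4GB RAM, 2 vCPUs) and the 4:1 vCPU-to-core overcommit below are assumptions, so substitute your real VM profile:

```python
# Rough VMs-per-host estimate for the two configs above.
# The per-VM profile (4GB / 2 vCPUs) and 4:1 overcommit are assumed.
def vms_per_host(ram_gb, cores, vm_ram_gb=4, vm_vcpus=2, overcommit=4):
    by_ram = ram_gb // vm_ram_gb              # how many VMs RAM allows
    by_cpu = (cores * overcommit) // vm_vcpus  # how many VMs CPU allows
    return min(by_ram, by_cpu)                 # the tighter limit wins

print("96GB / 2x quad-core:", vms_per_host(96, 8))
print("64GB / 2x hexa-core:", vms_per_host(64, 12))
```

With this assumed profile both configs land on the same VMs-per-host figure, just limited differently (the quad-core box ends up CPU-bound, the hexa-core box RAM-bound), so which £200 option wins depends entirely on whether your real VM mix is hungrier for memory or for cores.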