After experiencing some odd performance issues on one of our virtualised servers at work, I was wondering: how are CPU resources allocated between virtual servers on the same box? Most other resources are simple to split between servers, such as RAM and HDD space, but a CPU (single core, non-hyperthreading for simplicity) can only run one thread at a time.
The main issue is: can very high load on one of the servers affect the performance of another? I'd imagine there's some kind of scheduling done by the 'root' OS so that one server can't take all the resources, sharing the CPU time between the two servers. Still, this seems like it could cause response-time issues while processes wait for a CPU time slice, and making the time slices too small would waste time loading data in and out of cache.
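As an aside on measuring this kind of contention: on a Linux guest, the "steal" column in /proc/stat counts ticks during which the hypervisor ran something other than this VM while it wanted CPU, so a rising steal percentage is one hint of a noisy neighbour. A minimal sketch of reading that figure follows; the field layout matches /proc/stat, but the sample line is fabricated for illustration.

```python
def steal_percent(cpu_line: str) -> float:
    """Return steal time as a percentage of total CPU ticks so far.

    Expects an aggregate 'cpu' line from /proc/stat, whose numeric
    fields are: user nice system idle iowait irq softirq steal ...
    """
    fields = [int(x) for x in cpu_line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0  # older kernels omit it
    total = sum(fields)
    return 100.0 * steal / total if total else 0.0

# Fabricated snapshot line for demonstration; on a real guest you would
# read the first line of /proc/stat instead.
sample = "cpu  10000 200 3000 80000 500 0 100 1200 0 0"
print(f"steal: {steal_percent(sample):.1f}%")  # steal: 1.3%
```

In practice you would sample /proc/stat twice and compute the steal share of the *delta*, since the raw counters accumulate from boot.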
So, can anyone with more knowledge/experience than me on how CPU resources are handled between virtualised servers fill me in? Also, I'm not aware of what else is running on the box, since they don't let the devs touch the low-level system setup!



