
Server Virtualization with VMware Infrastructure (vSphere)








Download as PDF: This article is a slightly shortened version of my seminar paper. Feel free to download the original PDF version, or the presentation slides.


3. VMware Infrastructure

VMware Infrastructure is a full data center virtualization solution. Its core technology is the ESX hypervisor, which manages the virtual machines of one host. By combining many ESX hosts and connecting them with an internal network, the virtualized servers create a so-called cluster and form a flexible virtualized data center. Since the servers running in the data center are VMs, one can easily move them around between different hosts. The additional software VMotion makes it possible to move VMs to another physical host while they are running, without any noticeable downtime. VMware’s Distributed Resource Scheduler (DRS) automates this process by live-migrating VMs to the hosts with the most available resources. If a physical host crashes, the purchasable add-on High Availability (HA) can restart its VMs on a different ESX host.

3.1. Live Migration

Virtualizing various operating systems on a host allows very intensive usage of the available resources of a server. It maximizes hardware utilization and makes it possible to run a VM on almost any host system. However, it also increases the dependence of all running VMs on their underlying physical server: if only one server fails or has to be replaced, many virtual machines are affected. Live migration is a technology that addresses this problem by allowing running virtual machines to be moved from one host to another. That is, if a physical server has to be taken down for maintenance, the virtual machines running on this host can be moved to other available servers in the cluster. During the migration of a running OS with all its applications, every service stays online and does not notice that the physical location of the CPU and memory it is using has changed. This includes network connections as well as file handles and sockets. Even the IP address stays the same, so nothing has to be reconfigured inside the VM. Live migration is hence a handy tool to minimize the time in which important customer services are unavailable, and it improves data center manageability significantly.

However, not all server hardware is suitable as a live migration host. VMware’s implementation of the technology, VMotion, makes high demands on the hardware and precisely defines the requirements for using it. Especially with regard to processors, VMware has published many compatibility guidelines and exact specifications of which CPUs are supported. Given that virtualization abstracts the underlying hardware from its functionality, one might wonder why VMotion is still so hardware-dependent.

The problem VMware shares with other virtualization vendors is that not all CPU instructions are virtualized, so full hardware independence is not achieved. As described above, non-privileged instructions are executed directly on the underlying CPU without being rewritten. This works fine as long as the capabilities of the CPU do not change while the application is running. Since exactly this happens when the operating system is migrated to a different machine, an application will most certainly crash if it tries to execute an instruction that the “new” CPU does not support. Hence, both the source and the destination CPU have to support the same feature set for VMotion to work (VMware, 2008 [was at: http://www.vmware.com/files/pdf/vmotion_info_guide.pdf, site now defunct, July 2019]).
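To illustrate why the feature sets matter, here is a minimal Python sketch of such a compatibility check. The flag names and the `migration_compatible` function are purely illustrative assumptions; VMotion’s actual checks are based on CPUID information, not string sets.

```python
# Minimal illustrative sketch: a live migration is only allowed if the
# destination CPU offers every feature the source CPU exposes, because
# the guest may already rely on any of them. Flag names are examples,
# not a real hypervisor API.

def migration_compatible(src_features: set, dst_features: set) -> bool:
    """Every feature visible on the source must exist on the destination."""
    missing = src_features - dst_features
    if missing:
        print("Migration refused, destination lacks:", sorted(missing))
        return False
    return True

src = {"sse2", "sse3", "ssse3", "nx"}
dst = {"sse2", "sse3", "nx"}                 # no SSSE3 on the target host
print(migration_compatible(src, dst))        # False
```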

The detailed technical implementation differs depending on the vendor, but the basic idea of the migration process is similar. To facilitate the migration, both the source and the destination host have to be connected to the same network and must be able to access the virtual disk of the VM. Migrating a VM requires transferring the in-memory state of the machine to the destination host. That includes the kernel-internal state (e.g. active TCP connections) and the application-level state with all open processes. The process divides into three phases (Clark et al., 2005).

The VM is running on the source host. After making sure that the destination host provides enough resources to run the virtual machine, the VM’s memory pages are copied iteratively to the destination host: the full memory image is copied once, and all pages that have been dirtied while the copying was in progress are scheduled to be resent in the successive rounds. That way, fewer pages have to be sent in every cycle while the VM on the source host can continue to run (pre-copy/push phase).

When a beneficial point in time is reached and the number of pages left to transfer stays consistently low, the VM on the source host is suspended and the network traffic is redirected to the destination host. To inform the machines in the network that the MAC address of the physical host has changed, an ARP reply is sent either to the router or directly to the machines in the ARP cache of the VM. At this point, both source and destination hold a consistent copy of the VM, so that the source VM can be resumed in case of an error. After this stage has been completed successfully, the VM is resumed on the destination host (stop-and-copy phase).

Even though the virtual machine is already running on the new host, it is possible that not all parts of the memory have been copied completely. If the guest operating system accesses a memory page that has not yet been transferred, the page is faulted by the VMM and pulled over the network. After the memory reaches a consistent state, the VM on the destination host is fully activated and the image on the source host can be deleted. This phase is called the pull phase (Wagnon, 2005).
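The iterative copy loop can be made concrete with a short, self-contained simulation. The following Python sketch models memory as a simple page map and lets a random workload dirty pages between rounds; all names are illustrative and do not correspond to a real hypervisor interface.

```python
# Self-contained simulation of the pre-copy algorithm described above.
# Memory is a dict of page number -> bytes; a random "workload" writes
# to a few pages between copy rounds, just as a running guest would.
import random

class SimVM:
    def __init__(self, num_pages):
        self.memory = {p: b"x" * 16 for p in range(num_pages)}
        self.dirty = set(self.memory)        # initially, every page must go
        self.running = True

    def workload_tick(self):
        """Simulate the guest writing to a few pages while it keeps running."""
        if self.running:
            for _ in range(8):
                page = random.randrange(len(self.memory))
                self.memory[page] = random.randbytes(16)
                self.dirty.add(page)

def live_migrate(vm, dest_memory, max_rounds=30, threshold=4):
    # Phase 1 (pre-copy/push): copy everything once, then resend only
    # the pages dirtied during the previous round.
    rounds = 0
    while len(vm.dirty) > threshold and rounds < max_rounds:
        to_send, vm.dirty = vm.dirty, set()
        for page in to_send:
            dest_memory[page] = vm.memory[page]
        vm.workload_tick()                   # guest keeps running and dirtying
        rounds += 1

    # Phase 2 (stop-and-copy): suspend the VM, send the small remaining
    # dirty set, then resume on the destination. (The ARP reply that
    # redirects traffic is omitted in this simulation.)
    vm.running = False
    for page in vm.dirty:
        dest_memory[page] = vm.memory[page]
    vm.dirty.clear()

    # Phase 3 (pull) is not needed here because phase 2 copied every
    # remaining page; a real VMM would demand-fault missing pages
    # over the network instead.
    return rounds

vm = SimVM(num_pages=64)
dest = {}
print("pre-copy rounds:", live_migrate(vm, dest))
print("destination consistent:", dest == vm.memory)
```

The `threshold` parameter stands in for the “beneficial point in time”: once the dirty set stays small, the brief stop-and-copy pause becomes acceptable.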

3.2. Data Center Automation and High Availability

Even though live migration simplifies maintenance work and makes it easier to load-balance VMs among the ESX hosts in a cluster, it lacks the ability to manage the data center automatically, without human interaction. VMware’s Distributed Resource Scheduler (DRS) tries to fill this gap by providing a framework for this purpose. DRS automatically migrates VMs to different hosts in a cluster based on freely definable rules. It monitors the resources on all registered ESX hosts and balances the load according to the current needs and the specified priorities of the virtual machines. This can reduce the amount of required administration and contribute to an optimal utilization of the data center.
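As a rough illustration of the placement decision, the following Python sketch picks the host with the most remaining headroom that still fits a VM. The dictionary layout and the scoring formula are assumptions made for the example; real DRS works with cluster-wide balance metrics, VM priorities, and admission control.

```python
# Illustrative host selection: among the hosts that can fit the VM,
# pick the one with the most normalized free capacity after placement.

def pick_host(hosts, vm_cpu, vm_mem):
    candidates = [h for h in hosts
                  if h["free_cpu"] >= vm_cpu and h["free_mem"] >= vm_mem]
    if not candidates:
        return None                          # no host can take the VM
    return max(candidates,
               key=lambda h: (h["free_cpu"] - vm_cpu) / h["total_cpu"]
                           + (h["free_mem"] - vm_mem) / h["total_mem"])

hosts = [
    {"name": "esx1", "total_cpu": 16, "free_cpu": 2,  "total_mem": 64, "free_mem": 8},
    {"name": "esx2", "total_cpu": 16, "free_cpu": 10, "total_mem": 64, "free_mem": 40},
]
print(pick_host(hosts, vm_cpu=4, vm_mem=16)["name"])   # esx2
```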

In addition to the automated load balancing and utilization control via DRS, VMware offers a solution for highly available data centers. The additional component High Availability (HA) monitors virtual machines and restarts them on another physical server if they are not available. For this purpose, it sends so-called heartbeats to each VM at a definable interval. If a VM does not respond, HA considers it crashed and tries to restart it, either on the same or on a different ESX host.
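The heartbeat logic can be pictured with a small Python sketch. The polling loop, the `alive`/`restart` callbacks, and the miss counter are hypothetical simplifications of what the HA agents on the ESX hosts actually do.

```python
# Illustrative heartbeat monitor: poll each VM at a fixed interval and
# restart it after a number of consecutive missed heartbeats.
import time

def monitor(vms, interval=1.0, max_missed=3, ticks=5):
    missed = {vm["name"]: 0 for vm in vms}
    for _ in range(ticks):
        for vm in vms:
            if vm["alive"]():                    # heartbeat answered
                missed[vm["name"]] = 0
            else:                                # heartbeat missed
                missed[vm["name"]] += 1
                if missed[vm["name"]] >= max_missed:
                    print("restarting", vm["name"])
                    vm["restart"]()
                    missed[vm["name"]] = 0
        time.sleep(interval)

state = {"up": False}                            # VM starts out crashed
vm = {"name": "web01",
      "alive": lambda: state["up"],
      "restart": lambda: state.update(up=True)}
monitor([vm], interval=0.1)                      # restarts web01 once
```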

