Sunday, 28 April 2013

Restarting the Management agents on an ESXi or ESX host (1003490)


Symptoms

  • Cannot connect ESX/ESXi to VirtualCenter / vCenter Server
  • Cannot connect directly to the ESX/ESXi host from the VMware Infrastructure Client / vSphere Client
  • Cannot stop or start a virtual machine
  • A virtual machine is shown as running in vCenter Server when it is not
  • vCenter Server shows the error:

    Virtual machine creation may fail because agent is unable to retrieve VM creation options from the host

Purpose

For troubleshooting purposes, it may be necessary to restart the management agents on your ESX host. This article provides steps to restart the management agents (mgmt-vmware and vmware-vpxa) directly on ESX or ESXi.
This article applies to ESX/ESXi 3.x, 4.x, and 5.x.
Caution: Restarting the management agents may impact any tasks running on the ESX or ESXi host at the time of the restart. For more information about identifying tasks running on an ESX or ESXi host, see Collecting information about tasks in VMware ESX and ESXi (1013003).
 
For related information, see Using ESXi Shell in ESXi 5.x (2004746).

Resolution

Restarting the Management agents on ESXi


To restart the management agents on ESXi:
 
DCUI:
  1. Connect to the console of your ESXi host.
  2. Press F2 to customize the system.
  3. Log in as root.
  4. Use the Up/Down arrows to navigate to Restart Management Agents.

    Note: In ESXi 4.1 and ESXi 5.x, this option is available under Troubleshooting Options.
  5. Press Enter.
  6. Press F11 to restart the services.
  7. When the service has been restarted, press Enter.
  8. Press Esc to log out of the system.
From Local Console or SSH:
  1. Log in to SSH or Local console as root.
  2. Run this command:

    /sbin/services.sh restart
Note: For more information about restarting the management service on an ESXi host, see Service mgmt-vmware restart may not restart hostd (1005566).
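On ESXi 5.x you can also restart hostd and vpxa individually from the Local Console or SSH; a minimal sketch, assuming the ESXi 5.x init scripts are present in their usual location:

    /etc/init.d/hostd restart
    /etc/init.d/vpxa restart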

Restarting the Management agents on ESX


To restart the management agents on an ESX host:
  1. Log into your ESX host as root from either an SSH session or directly from the console.
  2. Run this command:

    service mgmt-vmware restart

    Caution: Ensure Automatic Startup/Shutdown of virtual machines is disabled before running this command or you risk rebooting the virtual machines. For more information, see Restarting hostd (mgmt-vmware) on ESX hosts restarts hosted virtual machines where virtual machine Startup/Shutdown is enabled (1003312) and Determining whether virtual machines are configured to autostart (1000163).
  3. Press Enter.
  4. Run this command:

    service vmware-vpxa restart
  5. Press Enter.
  6. Type logout and press Enter to disconnect from the ESX host.
    If this process is successful, it appears as:
    [root@server]# service mgmt-vmware restart
    Stopping VMware ESX Server Management services:
    VMware ESX Server Host Agent Watchdog [ OK ]
    VMware ESX Server Host Agent [ OK ]
    Starting VMware ESX Server Management services:
    VMware ESX Server Host Agent (background) [ OK ]
    Availability report startup (background) [ OK ]
    [root@server]# service vmware-vpxa restart
    Stopping vmware-vpxa: [ OK ]
    Starting vmware-vpxa: [ OK ]
    [root@server]#
    Note: If any failures are listed, try restarting the management agents a second time. If the issue persists after following the steps in this article, file a support request with VMware Support.

Cannot log in to vCenter Server using the domain username/password credentials via the vSphere Web Client/vSphere Client after upgrading to vCenter Server 5.1 Update 1 (2050941)

Symptoms

  • After upgrading to vCenter Server 5.1 Update 1, you are unable to log in using the vSphere Web Client or domain username/password credentials via the vSphere Client.
  • In the imsTrace.log file, located at VC Installation Directory\SSOServer\logs\, you see entries similar to:

    2013-04-26 18:35:51,928, [LDAP Parallel Search Thread-15], (GroupAccessSQL.java:1775), trace.com.rsa.ims.admin.dal.sql.GroupAccessSQL, DEBUG, host.domain.com,,,,SELECT GROUP_ID FROM IMS_PRINCIPAL_GROUP WHERE PRINCIPAL_ID = ?
    2013-04-26 18:35:51,928, [castle-exec-11], (SecurityTokenServiceImpl.java:117), trace.com.rsa.riat.sts.impl.SecurityTokenServiceImpl, ERROR, host.domain.com,,,,Error while trying to generate RequestSecurityTokenResponse
    com.rsa.common.UnexpectedDataStoreException: Failed group search, unexpected interrupt
    at com.rsa.ims.admin.dal.ldap.GroupAccessLDAP.getPrincipalGroupsFromFSP(GroupAccessLDAP.java:1338)
    at com.rsa.ims.admin.dal.ldap.GroupAccessLDAP.getMemberOfGroupsInBatchForAD(GroupAccessLDAP.java:1273)
  • Logging in using the Use Windows session credentials option via the vSphere Client is successful.

Cause

This issue can occur if the specified vCenter Server login domain user account is associated with a large number of domain groups and multiple domains are configured as SSO identity sources. The precise number of groups at which this issue can occur varies due to the nature of Active Directory internals. However, it is more likely to occur once domain-group membership for an account exceeds 19.

Resolution

VMware is actively working on a fix for the issue to enable customers with a large number of AD groups to upgrade to vCenter Server 5.1 Update 1.
Customers with SSO configured with multiple domain-based identity sources along with vCenter Server domain user accounts that are associated with a large number of groups should not upgrade to vCenter Server 5.1 Update 1.
 
If your environment meets the conditions of this issue, upgrading prevents users from logging into vCenter Server using the vSphere Web Client or via domain username/password using the VMware vSphere Client. If you have upgraded, you must log in through the “Use Windows session credentials” option in the respective client.
 
To work around this issue, use one of these options:
  • Log in to vCenter Server via the vSphere Client using the Use Windows session credentials option.
  • Work with your Active Directory administrator to reduce the group membership of the vCenter Server login account to a minimum.
  • Limit the number of domain-based identity sources in SSO to no more than one.

Tuesday, 23 April 2013

When installing ESX/ESXi 4.x or 5.x to a physical server, the local SAS drive appears as a remote storage (1027819)


Symptoms

  • When installing ESX/ESXi to a physical server, the local Serial Attached SCSI (SAS) drive appears as a remote storage
  • The installer presents the local SAS drives as remote storage

Resolution

The ESX/ESXi installer always marks Fibre Channel storage as remote and Serial ATA (SATA) disks as local.
A Serial Attached SCSI (SAS) card is marked as local only if the installer finds its corresponding identifier in the installation driver file. Otherwise, it assumes that the SAS card is remote.
Though the SAS card appears to be remote, you can still select and use it.
Note: If you select remote and are using ESX, the scratch partition is created automatically. To create a persistent scratch location for ESXi, see Creating a persistent scratch location for ESXi (1033696).

Additional Information

When trying to install ESX/ESXi via the kickstart method, the kickstart install may fail with this message:
no suitable disk was found
Caution: Genuine remote storage LUNs should be masked or disconnected from the host before performing the kickstart install, or you may overwrite a datastore.
To resolve this issue, specify this option in the kickstart file:
--firstdisk=remote
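As an illustrative sketch only, the install line of an ESXi 5.x kickstart file using this option might look like the following (the rest of the kickstart file, such as accepteula, rootpw, and network directives, is omitted and depends on your environment):
install --firstdisk=remote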

Saturday, 20 April 2013

Sample configuration of virtual switch VLAN tagging (VST Mode) (1004074)


Purpose

This article provides a sample network configuration for isolation and segmentation of virtual machine network traffic.

Resolution

To configure Virtual Switch (vSwitch) VLAN Tagging (VST) on an ESXi/ESX host:
  1. Assign a VLAN to the portgroup(s). The supported VLAN range is 1-4094.
    Reserved VLAN IDs:
    • VLAN ID 0 (zero) Disables VLAN tagging on port group (EST Mode)
    • VLAN ID 4095 Enables trunking on port group (VGT Mode)
  2. Set the switch NIC teaming policy to Route based on originating virtual port ID (this is set by default).

To configure the physical switch settings:
  1. Define ESXi/ESX VLANs on the physical switch.
  2. Allow the proper range to the ESXi/ESX host.
  3. Set the physical port connection between the ESXi/ESX host and the physical switch to TRUNK mode. ESXi/ESX only supports IEEE 802.1Q (dot1q) trunking.

    • Physical switch is set to TRUNK mode
    • dot1q encapsulation is enabled
    • Spanning-tree is set to portfast trunk (the port starts forwarding immediately, skipping the other spanning-tree states)
    • Define VLAN interface
    • Assign IP Range to VLAN interface
    • Configure VLAN routing and VLAN isolation as required

      Caution: Native VLAN ID on ESXi/ESX VST Mode is not supported. Do not assign a VLAN to a port group that is same as the native VLAN ID of the physical switch. Native VLAN packets are not tagged with the VLAN ID on the outgoing traffic toward the ESXi/ESX host. Therefore, if the ESXi/ESX host is set to VST mode, it drops the packets that are lacking a VLAN tag.

This sample is a supported Cisco Trunk Port configuration:
interface GigabitEthernet1/2
switchport                             (Set to layer 2 switching)
switchport trunk encapsulation dot1q   (ESXi/ESX only supports dot1q, not ISL)
switchport trunk allowed vlan 10-100   (Allowed VLAN to ESXi/ESX. Ensure ESXi/ESX VLANs are allowed)
switchport mode trunk                  (Set to Trunk Mode)
switchport nonegotiate                 (DTP is not supported)
no ip address
no cdp enable                          (Optional; ESXi/ESX 3.5 and later support CDP, so you can leave CDP enabled if you want the host to see switch information)
spanning-tree portfast trunk           (Allows the port to start forwarding packets immediately on linkup)

Note: For more information on configuring your physical network switch, contact your switch vendor.

To assign a VLAN to a port group, there must be a corresponding VLAN interface for each VLAN on a physical switch with a designated IP range.
For example:

interface Vlan200
ip address 10.10.100.1 255.255.255.0  (This IP can be used as VLAN 200 Gateway IP)

Note: Once the VLAN ID is defined on the physical switch, it can be configured for ESX. If an IP range is assigned to a VLAN, decide whether any routing is required to reach other nodes on the network.

To configure a VLAN on the portgroup using the VMware Infrastructure/vSphere Client:
  1. Click the ESXi/ESX host.
  2. Click the Configuration tab.
  3. Click the Networking link.
  4. Click Properties.
  5. Click the virtual switch / portgroups in the Ports tab and click Edit.
  6. Click the General tab.
  7. Assign a VLAN number in VLAN ID (optional).
  8. Click the NIC Teaming tab.
  9. From the Load Balancing dropdown, choose Route based on originating virtual port ID.
  10. Verify that there is at least one network adapter listed under Active Adapters.
  11. Verify the VST configuration using the ping command to confirm the connection between the ESXi/ESX host and the gateway interfaces and another host on the same VLAN.

    Note: For additional information on VLAN configuration of a VirtualSwitch (vSwitch) port group, see Configuring a VLAN on a portgroup (1003825).
To configure via the command line:
esxcfg-vswitch -p "portgroup_name" -v VLAN_ID virtual_switch_name
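For example, assuming a portgroup named VM Network on vSwitch0 and VLAN ID 200 (the names and ID are illustrative only), the command and a quick check might look like:
esxcfg-vswitch -p "VM Network" -v 200 vSwitch0     (tag the portgroup with VLAN 200)
esxcfg-vswitch -l                                  (list vSwitches and portgroups to confirm the VLAN column)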

Note: The illustration attached to this article is a sample VST mode topology and configuration with two ESXi/ESX hosts, each with two NICs connecting to the Cisco switch.

Sample configuration of virtual machine (VM) VLAN Tagging (VGT Mode) in ESX (1004252)



Purpose

This article provides a sample configuration of a VLAN tagging at virtual machine level.

Resolution

Overview
  • An 802.1Q VLAN trunking driver is required inside the virtual machine.
  • 64-bit Windows guest operating systems automatically load the 802.1Q-capable E1000 driver.
  • 32-bit guest operating systems require manual configuration of the .vmx file to point to the E1000 driver.
  • The portgroup is set to VLAN ID 4095 and the physical switch port is set to trunk mode.
  • Windows: Only 64-bit Windows ships with E1000 drivers.
  • Linux: Use the dot1q (8021q) module; see the example after this list.
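For a Linux guest in VGT mode, a minimal sketch of creating a tagged interface with the 8021q (dot1q) module might look like this; the interface eth0, VLAN ID 105, and IP address are illustrative assumptions:
modprobe 8021q                                     (load the 802.1Q VLAN module)
vconfig add eth0 105                               (create the tagged sub-interface eth0.105)
ifconfig eth0.105 192.168.105.10 netmask 255.255.255.0 up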
Configuration of VirtualSwitch (vSwitch)
To set a standard vSwitch portgroup to trunk mode:
  1. Edit host networking via the Virtual Infrastructure Client.
  2. Click Host > Configuration > Networking > vSwitch > Properties.
  3. Click Ports > Portgroup > Edit.
  4. Click the General tab.
  5. Set the VLAN ID to 4095. A VLAN ID of 4095 represents all trunked VLANs.
  6. Click OK.
To set a distributed vSwitch portgroup to trunk mode:
  1. Edit host networking via the Virtual Infrastructure Client.
  2. Click Home > Inventory > Networking.
  3. Right click on the dvPortGroup and choose Edit Settings.
  4. Within that dvPortGroup navigate to Policies > VLAN.
  5. Set the VLAN type to VLAN Trunking and specify a range or list of VLANs to be passed to the virtual machines connected to this portgroup.

    Note: To improve security, vNetwork Distributed Switches allow you to specify a range or selection of VLANs to trunk, rather than allowing all VLANs via VLAN 4095.

Configuration of Windows TCP/IP
To configure the guest operating system for VGT:
  1. Download the e1000 NIC drivers from the Intel website into the 32-bit Windows virtual machine.
  2. Power off the virtual machine.
  3. Configure the virtual machine to use the e1000 virtual NIC. Enter a new line (or replace the existing virtual NIC) in the .vmx file of the virtual machine:

    Ethernet<n>.virtualDev = "e1000"

    Replace <n> with the number of the Ethernet adapter. For example, the entry for the first Ethernet adapter that has number 0 is:

    Ethernet0.virtualDev = "e1000"
  4. Power on the virtual machine.
  5. Configure the e1000 network connection.


To install the driver manually within a Windows 2008 R2 guest operating system:
  1. Extract the Intel drivers downloaded to the temp folder using this command:

    Prowinx64.exe /s /e /f "C:\temp"
  2. Right-click the network adapter and click Update Driver Software.
  3. Select Browse my computer for driver software.
  4. Click Let me pick from a list of device drivers on my computer.
  5. Select Have Disk.
  6. Click Browse.
  7. Browse to C:\temp\pro1000\winx64\ndis61\e1g6032e.inf.
  8. Click Next to install the driver.
  9. Repeat Steps 2-8 for each network adapter you have for the virtual machine.
  10. After all the adapters are updated, run the Intel setup program. You should now be able to install the advanced network services software with VLANs.
Note: You can also find the instructions in the manual/readme file for the driver.


Configuring Virtual Switch VLAN Tagging (VST) mode on a vNetwork Distributed Switch (1010778)



Purpose

This article describes the concept and configuration of VST mode on dvPortGroup.

Resolution

Note: For additional information on dvPortGroup configuration, see vNetwork Distributed PortGroup (dvPortGroup) configuration (1010593).
 
Set the physical port connection between the ESX host and the physical switch to TRUNK mode. ESX only supports IEEE 802.1Q (dot1q) trunking.
 
VLAN configuration is required on the ESX side as well. Define the ESX VLANs on the physical switch, and set each ESX dvPortgroup to the appropriate VLAN ID.
Caution: Native VLAN ID on ESX VST Mode is not supported. Do not assign a VLAN to a portgroup that is the same as the native VLAN ID of the physical switch.
Native VLAN packets are not tagged with the VLAN ID on the outgoing traffic toward the ESX host. Therefore, if the ESX host is set to VST mode, it drops the packets that are lacking a VLAN tag.
 
To configure VST on dvPortGroup:
  1. In vCenter, go to Home > Inventory > Networking.
  2. Right-click dvPortGroup and click Edit Settings.
  3. Under dvPortGroup > Settings > VLAN > Policies, set the VLAN type to VLAN.
  4. Select a VLAN ID between 1 and 4094.

    Note: Do not use VLAN ID 4095.
  5. Click OK.

Friday, 19 April 2013

VLAN Tagging Policies

There are three VLAN tagging policies:
1. EST (External Switch Tagging)
2. VST (Virtual Switch Tagging)
3. VGT (Virtual Guest Tagging)
 


Thursday, 18 April 2013

How to tag a non-SSD device as an SSD device for testing the Swap to Host Cache feature

Datastores that are created on solid state drives (SSD) can be used to allocate space for host cache. The host reserves a certain amount of space for swapping to host cache.

The host cache is made up of files on a low-latency disk that ESXi uses as a write back cache for virtual machine swap files. The cache is shared by all virtual machines running on the host. Host-level swapping of virtual machine pages makes the best use of potentially limited SSD space.

Using swap to host cache is not the same as placing regular swap files on SSD-backed datastores. Even if you enable swap to host cache, the host still needs to create regular swap files. However, when you use swap to host cache, the speed of the storage where the host places regular swap files is less important.

The Host Cache Configuration page allows you to view the amount of space on a datastore that a host can use to swap to host cache. Only SSD-backed datastores appear in the list of datastores on the Host Cache Configuration page.


1. Run this command to enable SSD tagging on the non-SSD device. After the --device parameter, type the canonical name of your LUN.
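A sketch of what the command typically looks like on ESXi 5.x, assuming the device is claimed by the VMW_SATP_LOCAL plug-in; naa.xxxxxxxxxxxxxxxx is a placeholder for your LUN's canonical name:
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option "enable_ssd"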


2. Now reclaim the device so that the new claim rule takes effect.
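A sketch of the reclaim and a follow-up check, again using a placeholder device name:
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx      (the output should now show Is SSD: true)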



3. Then create a datastore on that LUN; the datastore picks up the SSD tagging.


4. The datastore then appears under Host Cache Configuration, in the Software section of the ESXi host's Configuration tab.

5. Now open the properties of the datastore there and enable the Allocate space for host cache option.


That's it. This is how you configure this feature.

Wednesday, 17 April 2013

Virtual Machine World ID Changes When Resetting or Powering Off Through the Remote Console (1137)



Details

Why does my virtual machine's ID (world ID) change if I press the reset or power off button on the remote console?

Solution

If you reset, or power off and then power on a virtual machine through the remote console, then ESX Server assigns a new virtual machine ID (or world ID) to this virtual machine. ESX Server doesn't retain the ID previously assigned to this virtual machine.
This behavior is similar to powering off, then powering on, an application in a Windows or Linux machine. In each case, the operating system assigns a new process ID (PID) to the newly started application.
To retain the same ID for a virtual machine, restart the virtual machine by using Restart in Windows or the reboot command in Linux. By performing this soft reset, you avoid changing the virtual machine's ID.
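On an ESXi 5.x host (a newer release than the ESX Server version this article was written for), one way to observe the world ID is to list the running virtual machine processes:
esxcli vm process list        (shows the World ID of each running virtual machine)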

Tuesday, 16 April 2013

Choosing a port binding type (1022312)



Details

When choosing a port binding type, consider how you want to connect your virtual machines and virtual network adapters to a vDS and how you intend to use your virtual machines. Port binding type, along with all other vDS and port group configuration, can be set only through vCenter Server.

Note: For details about using port binding in iSCSI adapter configuration, see Considerations for using software iSCSI port binding in ESX/ESXi (KB 2038869).

Solution

Types of port binding

These three different types of port binding determine when ports in a port group are assigned to virtual machines:
  • Static Binding
  • Dynamic Binding
  • Ephemeral Binding
Static binding
When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server.
Note: Static binding is the default setting, recommended for general use.
Dynamic binding
In a port group configured with dynamic binding, a port is assigned to a virtual machine only when the virtual machine is powered on and its NIC is in a connected state. The port is disconnected when the virtual machine is powered off or the virtual machine's NIC is disconnected. Virtual machines connected to a port group configured with dynamic binding must be powered on and off through vCenter.
Dynamic binding can be used in environments where you have more virtual machines than available ports, but do not plan to have a greater number of virtual machines active than you have available ports. For example, if you have 300 virtual machines and 100 ports, but never have more than 90 virtual machines active at one time, dynamic binding would be appropriate for your port group.
Note: Dynamic binding is deprecated in ESXi 5.0.
Ephemeral binding
In a port group configured with ephemeral binding, a port is created and assigned to a virtual machine by the host when the virtual machine is powered on and its NIC is in a connected state. The port is deleted when the virtual machine is powered off or the virtual machine's NIC is disconnected.
You can assign a virtual machine to a distributed port group with ephemeral port binding on ESX/ESXi and vCenter, giving you the flexibility to manage virtual machine connections through the host when vCenter is down. Although only ephemeral binding allows you to modify virtual machine network connections when vCenter is down, network traffic is unaffected by vCenter failure regardless of port binding type.
Note: Ephemeral port groups should be used only for recovery purposes, when you want to provision ports directly on a host bypassing vCenter Server, and not for any other case. This is true for several reasons:
  • Scalability

    An ESX/ESXi 4.x host can support up to 1016 ephemeral port groups and an ESXi 5.x host can support up to 256 ephemeral port groups. Since ephemeral port groups are always pushed to hosts, this effectively is also the vCenter Server limit. For more information, see Configuration Maximums for VMware vSphere 5.0 and Configuration Maximums for VMware vSphere 4.1.
  • Performance

    Every operation, including add-host and virtual machine power operations, is comparatively slower because ports are created and destroyed in the operation code path. Virtual machine operations are far more frequent than add-host or switch operations, so ephemeral ports are more demanding in general.
  • Non-persistent (that is, "ephemeral") ports

    Port-level permissions and controls are lost across power cycles, so no historical context is saved.
Note:  vSphere 5.0 has introduced a new advanced option for static port binding called Auto Expand. This port group property allows a port group to expand automatically by a small predefined margin whenever the port group is about to run out of ports. In vSphere 5.1, the Auto Expand feature is enabled by default.
In vSphere 5.0 Auto Expand is disabled by default. To enable it, use the vSphere 5.0 SDK via the managed object browser (MOB):
  1. In a browser, enter the address http://vc-ip-address/mob/.
  2. When prompted, enter your vCenter Server username and password.
  3. Click the Content link.
  4. In the left pane, search for the row with the word rootFolder.
  5. Open the link in the right pane of the row. The link should be similar to group-d1 (Datacenters).
  6. In the left pane, search for the row with the word childEntity. In the right pane, you see a list of datacenter links.
  7. Click the datacenter link in which the vDS is defined.
  8. In the left pane, search for the row with the word networkFolder and open the link in the right pane. The link should be similar to group-n123 (network).
  9. In the left pane, search for the row with the word childEntity. You see a list of vDS and distributed port group links in the right pane.
  10. Click the distributed port group for which you want to change this property.
  11. In the left pane, search for the row with the word config and click the link in the right pane.
  12. In the left pane, search for the row with the word autoExpand. It is usually the first row.
  13. Note the corresponding value displayed in the right pane. The value should be false by default.
  14. In the left pane, search for the row with the word configVersion. The value should be 1 if it has not been modified.
  15. Note the corresponding value displayed in the right pane as it is needed later.
  16. Go back to the distributed port group page.
  17. Click the link that reads ReconfigureDvs_Task. A new window appears.
  18. In the Spec text field, enter this text:

    <spec>
    <configVersion>1</configVersion>
    <autoExpand>true</autoExpand>
    </spec>


    where configVersion is what you recorded in step 15.
  19. Click the Invoke Method link.
  20. Close the window.
  21. Repeat Steps 10 through 14 to verify the new value for autoExpand.
These steps can be automated using a custom script. For more information, see the VMware vSphere blog entry Automating Auto Expand Configuration for a dvPortgroup in vSphere 5.
Examples: The script allows you to:
  • Enable Auto Expand for a distributed port group, by running this command:

    updatedvPortgroupAutoExpand.pl --server vcenter-ip --username user --operation enable --dvportgroup portgroupname
  • Disable Auto Expand for a distributed port group, by running this command:

    updatedvPortgroupAutoExpand.pl --server vcenter-ip --username user --operation disable --dvportgroup portgroupname

How to disable automatic rescan of HBAs initiated by vCenter (1016873)



Purpose

When you create, expand, extend, or delete a datastore, a rescan of your HBAs is automatically initiated. However, this rescan may result in a “rescan storm” when you are building a new environment, causing issues such as packet drops and poor performance. In such cases, you may want to disable auto-rescan.
 
This article provides the steps to disable auto-rescan.
 
Note: Auto-rescan enables other hosts in the cluster to recognize and use a newly created datastore immediately. Disabling auto-rescan may create issues when the hosts are unable to recognize datastores. For example, during vMotion, the destination host may reject the request because it is unable to recognize the datastore created by the source host. VMware recommends disabling auto-rescan only when building a new environment.

Resolution

To disable auto-rescan:
  1. Open the vSphere Client.
  2. Go to Administration > vCenter Server.
  3. Go to Settings > Advanced Settings.
  4. If the config.vpxd.filter.hostRescanFilter key is not available, add the key and set it to False.

    Note: You need not restart vCenter Server for the change to take effect.
Note: If you turn off the Host Rescan Filter, your hosts continue to perform a rescan each time you present a new LUN to a host or a cluster. This can be seen in the Tasks & Events tab.

Additional Information

Ensure that you set the config.vpxd.filter.hostRescanFilter key back to True immediately after building your environment.
 
vCenter Server may display an error if you try to reset the value using the GUI. In such a case, you must manually edit the vpxd.cfg file to enable auto-rescan.
 
To enable auto-rescan by manually editing the vpxd.cfg file:
  1. Open the file using a text editor.
  2. Locate the <hostRescanFilter> entry.
  3. Set the entry to true:

    <filter><hostRescanFilter>true</hostRescanFilter></filter>
  4. Restart the VMware VirtualCenter Server service.

For related information, see the Turn off vCenter Server Storage Filters section of the ESX 4.1 Configuration Guide or the ESXi 5.0 Storage Guide.


Thursday, 11 April 2013

Determining and changing the rate of timer interrupts a guest operating system requests (1005802)



Purpose

Many aspects of timekeeping are influenced by the rate of timer interrupts that a guest operating system requests. This article lists the default values for a number of kernels and how to measure the rate.
 
Many guest operating systems keep track of time by programming a periodic timer interrupt and incrementing the current time by the period of the interrupt every time an interrupt is received. Such periodic timer interrupts are often referred to as "ticks" and this method of timekeeping is known as tick counting.

Resolution

Windows

Windows generally requests 64 or 100 interrupts per second depending on which HAL is in use. There is a Windows multimedia timer API that allows applications to raise this to 1024 interrupts per second. There is no easy way to determine what the timer interrupt rate is from within the virtual machine. Timekeeping in a VMware Virtual Machine describes how to use TimeTrackerStats to determine the rate of timer interrupts the virtual machine is requesting.

Linux

Linux kernels request a timer interrupt from the Programmable Interval Timer (PIT) and an interrupt from each Local APIC Timer, of which there is one per virtual CPU. These are all requested at the same rate, but that rate varies from kernel to kernel. The requested rate of timer interrupts is referred to in the Linux kernel as HZ. The total number of timer interrupts requested by a Linux kernel is HZ * (N + 1) where N is the number of virtual CPUs in the virtual machine.

Default mainline values

Different values can be selected at compile time. Vendor kernels based on a given mainline kernel may have a different value than the default.
  • Linux 2.4: HZ = 100Hz.
  • Linux 2.6.0 - 2.6.13: HZ = 1000Hz.
  • Linux 2.6.14 onward: HZ = 250Hz.

Tickless Linux Kernels

Starting with Linux 2.6.18, Linux kernels began transitioning away from tick counting. Some new kernels use aperiodic interrupts rather than programming a periodic timer interrupt. These kernels are called tickless kernels. For tickless Linux kernels, HZ still influences the rate at which aperiodic timer interrupts are requested, but since these interrupts are not counted for timekeeping, the value of HZ does not impact timekeeping.
Generally tickless Linux kernels do not use the PIT at all, and instead just use aperiodic Local APIC timer interrupts.
 

Affected Vendor kernels

  • Redhat Enterprise Linux 4: HZ = 1000Hz
  • Redhat Enterprise Linux 5: HZ = 1000Hz
  • SUSE Linux Enterprise Server 9: HZ = 1000Hz
  • SUSE Linux Enterprise Server 10: HZ = 250Hz
  • Ubuntu 6.10: HZ = 250Hz
  • Ubuntu 7.04 and 7.10 Desktop: HZ = 250Hz  

"divider" kernel command line option

Redhat Enterprise Linux 5 starting with version 5.1 and Redhat Enterprise Linux 4 starting with version 4.7 support a kernel command line option that allows a divider to be supplied that makes the kernel request HZ/divider timer interrupts per second. The syntax is divider=x where x is the divider. 
 
To change the requested rate to 100Hz, pass divider=10 on the kernel command line. RHEL 5.1 64-bit kernels have a bug in their implementation of the divider= option. The bug is fixed in Redhat RHBA-2007-0959. Do not use the divider= option with 64-bit RHEL 5.1 unless this fix is applied.

Recompiling with a different HZ value

In standard 2.6 kernels, the timer interrupt rate is fixed at kernel compile time and cannot be changed by command line parameters. You can, however, recompile your kernel with a lower timer interrupt rate. 100Hz is adequate for most applications in a Linux guest. See the documentation for your Linux distribution for detailed instructions on how to build and run a custom kernel. Some versions of Linux allow the value of HZ to be selected during the "make config" stage. Other kernels require the source to be modified. For those kernels, before recompiling the guest kernel, locate the following line in include/asm-i386/param.h or include/asm-x86_64/param.h: #define HZ 1000. Change the value of HZ to 100: #define HZ 100.
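On many distributions you can also check the compiled-in value without recompiling, assuming a kernel config file is shipped under /boot:
grep -E 'CONFIG_HZ|CONFIG_NO_HZ' /boot/config-$(uname -r)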

Measuring HZ

To measure the actual rate of timer interrupts in a Linux virtual machine:
  1. Compare /proc/interrupts at the beginning and end of a 10 second interval. 
  2. From a shell prompt run cat /proc/interrupts ; sleep 10; cat /proc/interrupts .

    There are several columns. 
    The first is the IRQ number. The PIT is the device that generates the interrupts that Linux counts for time keeping. This is IRQ 0. Also of interest is "LOC" which is the Local APIC timer. For each vCPU there is a column listing the number of interrupts that device has raised on that vCPU. The final two columns give the interrupt type and a name for the devices on that IRQ.
To compute the rate of timer interrupts:
  • before = Sum of all PIT interrupts received on all vCPUs before sleep.
  • after = Sum of all PIT interrupts received on all vCPUs after sleep.
  • timer rate = (after - before) / 10 seconds
For example:
 
         CPU0       CPU1        
0:     125251      79291    IO-APIC-edge  timer 
1:        591        585    IO-APIC-edge  i8042 
8:          0          0    IO-APIC-edge  rtc 
9:          0          0    IO-APIC-level  acpi
12:        67          8    IO-APIC-edge  i8042
14:       753        643    IO-APIC-edge  ide0
169:     2840        142    IO-APIC-level  ioc0
177:      748         19    IO-APIC-level  eth0
NMI:       43         35   
LOC:     204282     204830
ERR:        0
MIS:        0          
         CPU0       CPU1        
0:     134539      80039    IO-APIC-edge  timer 
1:        592        585    IO-APIC-edge  i8042 
8:          0          0    IO-APIC-edge  rtc 
9:          0          0    IO-APIC-level  acpi
12:        67          8    IO-APIC-edge  i8042
14:       771        715    IO-APIC-edge  ide0
169:     2840        147    IO-APIC-level  ioc0
177:      800         19    IO-APIC-level  eth0
NMI:       43         36   
LOC:     214314     214862
ERR:        0
MIS:        0
before = 125251 + 79291 = 204542
after = 134539 + 80039 = 214578
timer rate = (214578 - 204542) / 10 seconds = 1003/sec
 
This matches the expected 1000Hz rate for Redhat Enterprise Linux 4.
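The same computation can be scripted; a minimal shell sketch that sums the IRQ 0 counters across all vCPUs (it assumes, as in the example above, that the PIT ticks are counted on IRQ 0):
pit_sum() { awk '$1=="0:" {for (i=2;i<=NF;i++) if ($i ~ /^[0-9]+$/) s+=$i} END {print s}' /proc/interrupts; }
before=$(pit_sum); sleep 10; after=$(pit_sum)
echo "timer rate: $(( (after - before) / 10 )) interrupts/sec"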
 
If no PIT interrupts are seen in 10 seconds, the kernel is a tickless kernel and the HZ value does not matter for timekeeping.


Using Dynamic Voltage and Frequency Scaling (DVFS) for Power Management (1037164)


Purpose

This article provides best practices to use Dynamic Voltage and Frequency Scaling (DVFS) for Power Management feature in ESX/ESXi hosts.
 
Dynamic Voltage and Frequency Scaling (DVFS) is a vSphere feature that allows your ESX/ESXi host to dynamically switch its CPU frequency depending on the current load on the host. This saves power by providing only the voltage and frequency required to sustain the host and the virtual machines on the host.

Resolution

ESX/ESXi 4.0  
To set the CPU power management policy in ESX/ESXi 4.0, use the advanced host attribute Power.CpuPolicy. This attribute setting is saved in the host configuration and can be used again at boot time. This attribute can be changed at any time and does not require a server reboot.
For more information on configuring DVFS, see the CPU Power Management section of the vSphere Resource Management Guide. 
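For reference, a hedged sketch of reading and setting the attribute from the ESX 4.0 service console with esxcfg-advcfg; the /Power/CpuPolicy path and the value dynamic are assumptions, so verify the option name on your host before relying on it:
esxcfg-advcfg -g /Power/CpuPolicy          (show the current policy)
esxcfg-advcfg -s dynamic /Power/CpuPolicy  (set the policy to dynamic)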
ESX/ESXi 4.1
You can set the CPU power management policy for an ESX/ESXi 4.1 host using the vSphere Client.
Note: ESX/ESXi supports the Enhanced Intel SpeedStep and Enhanced AMD PowerNow! CPU power management technologies. For the VMkernel to take advantage of the power management capabilities provided by these technologies, you must enable power management, called Demand-Based Switching (DBS), in the BIOS.
 
To set the CPU power management policy:
  1. In the vSphere Client inventory panel, select a host and click the Configuration tab.
  2. Under Hardware, click Power Management and then click Properties.
  3. Select a power management policy for the host and click OK.
The policy selection is saved in the host configuration and can be used again at boot time. You can change it at any time, and the change does not require a server reboot.
For more information, see the Using CPU Power Management Policies section of the vSphere Resource Management Guide.