Monday, 28 October 2013

Cannot deploy a virtual machine from a template because the template VMDK file is locked (1007487)

Symptoms

  • Cannot deploy a virtual machine from a template
  • The template's <vm name>-flat.vmdk or <vm name>.vmdk file is locked by a running virtual machine
  • The VMDK file is not found
  • When deploying from a template, the Edit virtual hardware (Experimental) option is selected and the disk size has been modified

Resolution

This issue is resolved in VMware vCenter Server 5.0 Update 2 and vCenter Server 5.1 Update 1.
  • To download vCenter Server 5.0 Update 2, see VMware Download Center.
  • To download vCenter Server 5.1 Update 1, see VMware Download Center.

    Note: If updating vCenter Server is not an option, follow the steps below to work around the issue:

A virtual machine deployed from a template locks the template's VMDK files if it is incorrectly configured to run on the template's virtual disk and VMDK file instead of its own. The lock prevents other virtual machines from being deployed from the template.
 
You must determine if the template's VMDK file is locked by another virtual machine, and if so, release the lock on the files.
 
Caution: Since a virtual machine was running on the template's disk, there may have been unwanted changes to the disk. As such, a good backup and recovery plan is essential to production environments.
 
To determine if the template's VMDK file is locked by another virtual machine:
  1. Log in to the ESX host as root.
  2. Change to the working directory of the template.
  3. Run the command:
    vmkfstools -D <virtual disk file> | tail -f /var/log/vmkernel | grep -i owner

    Note: In ESXi 5.x, run this command:
    vmkfstools -D <virtual disk file> | tail -f /var/log/vmkernel.log | grep -i owner
  4. If the owner field near the end of the output contains non-zero numbers and letters, a MAC address has been associated with this VMDK and another virtual machine has a lock on the file. If the owner field is all zeros, the VMDK file is not locked by another virtual machine.
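
    As a concrete illustration of steps 2 through 4 on an ESXi 5.x host (a sketch only; the datastore, template name, and exact log format are assumptions to adapt to your environment):

    # Placeholder paths; substitute your own datastore and template names.
    cd /vmfs/volumes/datastore1/MyTemplate
    vmkfstools -D MyTemplate-flat.vmdk
    # On ESXi 5.x the lock details are written to /var/log/vmkernel.log.
    # An owner field of all zeros means the file is not locked by another host;
    # a non-zero owner embeds the MAC address of the host whose virtual machine holds the lock.
    tail -n 20 /var/log/vmkernel.log | grep -i owner
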
To release the lock on the template's VMDK file:
  1. To show which virtual machine on a single host has the lock, run the command:

    grep -lr <virtual disk file> /vmfs/volumes/*/*/*.vmx

    Note: Record the full path to the virtual disk file. If this command fails because of locked files, try one of the methods below. This can happen when the virtual machine disks are on storage shared between multiple hosts:

    Alternate methods:
    • grep vmx /etc/vmware/hostd/vmInventory.xml | sed -e 's/>/</g' | awk -F\< '{print $3}' > vmx;
      IFS=$(echo -en "\n\b"); for i in `cat vmx`; do grep -H "<virtual disk file>" "$i"; done
      (This extracts every registered .vmx path from the host inventory and searches each file for the template's disk.)
    • Connect-VIServer -server localhost
      Get-Vm | Get-Harddisk | Where {$_.Filename -match "<diskname>"}
  2. Power off the virtual machine that has the lock. If you cannot power off the virtual machine, see Powering off an unresponsive virtual machine on an ESX host (1004340).
  3. Edit the virtual machine's settings and remove the virtual disk, but do not delete it.
  4. To connect an existing disk:
    • In the Add Hardware Wizard, click Hard Disk > Next.
    • Select the type of storage for the virtual machine’s disk.
    • Click Next > Use an existing virtual disk.
    • Browse to the path of the virtual disk file identified in Step 1.
    • Click Next.
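
    After the correct disk is attached, a quick check of the affected virtual machine's .vmx file can confirm it now references its own VMDK rather than the template's (the paths below are placeholders):

    # List every disk referenced by the virtual machine's configuration file.
    grep -i vmdk /vmfs/volumes/datastore1/MyVM/MyVM.vmx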

Friday, 25 October 2013

VMware Learning Path Tool (Learn by Solution Track, Role, Product, or Certification)

VMware's new Learning Path tool lets you select courses by Solution Track, Role, Product, or Certification.
Try it now:
http://vmwarelearningpaths.com/#sthash.mpr7mj1Y.dpbs


Wednesday, 16 October 2013

Add a Shared Smart Card Reader to Virtual Machines in the vSphere Client

You can configure multiple virtual machines to use a virtual shared smart card reader for smart card authentication. The smart card reader must be connected to a client computer on which the vSphere Client runs. All smart card readers are treated as USB devices.
A license is required for the shared smart card feature. See the vCenter Server and Host Management documentation.
To log back in after you log out of a Windows XP guest operating system, you must remove the smart card from the smart card reader and reinsert it. You can also disconnect the shared smart card reader and reconnect it.
If the vSphere Client disconnects from the vCenter Server or host, or if the client computer is restarted or shut down, the smart card connection breaks. For this reason, it is best to have a dedicated client computer for smart card use.
Prerequisites
Verify that the smart card reader is connected to the client computer.
Verify that the virtual machine is powered on.
Verify that a USB controller is present.
  1. Select the virtual machine in the vSphere Client inventory.
  2. Click the USB icon on the virtual machine toolbar.
  3. Select the shared smart card reader from the Connect to USB Devices drop-down menu.
     The smart card device appears in the menu as a USB device and as a virtual shared device.
  4. Select Shared followed by the model name of your smart card reader and a number.
     The device status appears as Connecting, then the device connects.
You can now use smart card authentication to log in to virtual machines in the vSphere Client inventory.

Saturday, 12 October 2013

Cannot roll back ESXi 5.x installations after an upgrade (2004502)

Symptoms

  • Rolling back an upgrade of ESXi 5.x fails.
  • If you reboot and press Shift+R, you see the error:

    No alternative hypervisor to roll back to

Cause

You cannot roll back an ESXi 5.x installation after an upgrade.
 
ESXi 5.x uses the same installer for fresh installations and upgrades. If the installer finds an existing ESX/ESXi 4.x installation, it performs an upgrade. Pressing Shift+R after the upgrade does not perform a roll back due to the differences between the ESX/ESXi 4.x and the 5.x boot loaders.

Resolution

After an upgrade to ESXi 5.x is complete, you cannot roll back the installation. 
 
To go back to the previous version, you need to perform a new installation.
 
Caution: When you perform the new installation, all configurations, and possibly virtual machines, contained on the host may be lost.


Reverting to a previous version of ESXi after a failed upgrade (1033604)

Purpose

This article provides steps to revert to a previous version of ESXi after a failed upgrade.

Resolution

To revert to a previous version of ESXi after a failed upgrade:
 
Note: Back up your configuration data before making any changes.
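
One way to back up the host configuration before making changes is the firmware backup command in the ESXi shell; this is only a sketch and assumes local or SSH shell access is enabled on the host:

  # Generates a configBundle.tgz and prints the URL from which it can be downloaded.
  vim-cmd hostsvc/firmware/backup_config
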
  1. In the console screen of the ESXi host, press Ctrl+Alt+F2 to see the Direct Console User Interface (DCUI) screen.
  2. In the DCUI screen, press F12 to view the shutdown options for the ESXi host.
  3. Press F11 to reboot.
  4. When the Hypervisor progress bar starts loading, press Shift+R. You see a popup with a warning:
    Current hypervisor will permanently be replaced
    with build: X.X.X-XXXXXX. Are you sure? [Y/n]

  5. Press Shift+Y to roll back the build.
  6. Press Enter to boot.

Thursday, 10 October 2013

The vmxnet3 network adapter displays incorrect link speed on Windows XP and Windows Server 2003 (1013083)

Details

The vmxnet3 network adapter (10 Gbps) displays an incorrect link speed in Windows XP and Windows Server 2003 guests, typically 1.4 Gbps.

Solution

For more information, see the knowledge base article "A 10 GbE network adapter displays an incorrect link speed in Windows XP and Windows Server 2003" on the Microsoft Web site: http://support.microsoft.com/kb/931857/en-us.

A 10 GbE network adapter displays an incorrect link speed in Windows XP and Windows Server 2003

Article ID: 931857

SYMPTOMS

In Microsoft Windows XP and Microsoft Windows Server 2003, a 10 Gigabit Ethernet (10 GbE) network adapter displays an incorrect link speed. Typically, the network adapter displays a link speed of 1.4 gigabits (Gb), regardless of the actual link speed.

CAUSE

This problem occurs because a network adapter calculates its link speed in multiples of 100 bits per second (bps). When the result exceeds a 32-bit field, the network adapter displays only the remainder. This remainder is usually 1.4 Gb.
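
As a rough worked illustration of this overflow (not the exact driver arithmetic): a 10 Gbps link is 10,000,000,000 bps, which does not fit in a 32-bit field, so only the remainder after dividing by 2^32 is displayed. On any shell with 64-bit arithmetic:

# 10,000,000,000 bps modulo 2^32 (4,294,967,296)
echo $((10000000000 % 4294967296))
# Prints 1410065408, roughly 1.4 Gbps, which matches the link speed Windows displays.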

STATUS

Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.

Operational Limits for SRM and vSphere Replication (2034768)

Details

Different limits apply to the numbers of virtual machines that you can protect using SRM, depending on whether you use array-based replication, vSphere Replication, or a combination of both. Limits also apply if you use vSphere Replication independently of SRM.

Solution


SRM Protection Limits for Array-Based Replication

If you establish bidirectional protection, in which site B serves as the recovery site for site A and at the same time site A serves as the recovery site for site B, these limits apply across both sites, and not per site. For example, if you are using array-based replication for bidirectional protection, you can protect a total of 1000 virtual machines across both sites. In a bidirectional implementation, you can protect a different number of virtual machines on each site, but the total number of protected virtual machines across both sites cannot exceed 1000. For example, you can protect 400 virtual machines from site B to site A and 600 virtual machines from site A to site B.

SRM Protection Limits for Array-Based Replication:
  • Protected virtual machines per protection group: 500
  • Total number of protected virtual machines: 1000 *
  • Protection groups per recovery plan: 250 *
  • Recovery plans: 250
  • Datastore groups: 255 *
  • Concurrent recovery plans: 10

* Refer to Limitations to Using SRM with Array-Based Replication in Large-Scale Environments (2059498).

SRM Protection Limits for vSphere Replication Protection

If you establish bidirectional protection, in which site B serves as the recovery site for site A and at the same time site A serves as the recovery site for site B, these limits apply across both sites, and not per site. For example, if you are using vSphere Replication for bidirectional protection, you can protect a total of 500 virtual machines across both sites. In a bidirectional implementation, you can protect a different number of virtual machines on each site, but the total number of protected virtual machines across both sites cannot exceed 500. For example, you can protect 200 virtual machines from site B to site A and 300 virtual machines from site A to site B.

SRM Protection Limits for vSphere Replication Protection:
  • Protected virtual machines per protection group: 500
  • Total number of protected virtual machines: 500
  • Protection groups per recovery plan: 250
  • Recovery plans: 250
  • Concurrent recovery plans: 10

Combined Array-Based and vSphere Replication Protection Limits with SRM

With SRM, you can run array-based protection groups alongside vSphere Replication protection groups in the same SRM Server. However, the total number of protection groups cannot exceed 250. For example, you cannot create 150 array-based protection groups and then create 250 vSphere Replication protection groups, as this creates 400 protection groups in total. If you have 150 array-based protection groups, you can create an additional 100 vSphere Replication protection groups, to make a total of 250 protection groups.

Similarly, in a combined array-based and vSphere Replication setup, you can protect a maximum of 1000 virtual machines in total per SRM Server. If you establish bidirectional protection, in which site B serves as the recovery site for site A and at the same time site A serves as the recovery site for site B, this limit applies across both sites, and not per site.

You must remember that each replication type has a different maximum limit. The protection limit for array-based replication is 1000 virtual machines. The protection limit for vSphere Replication is 500 virtual machines. Because the maximum protection limit is 1000 virtual machines per SRM Server, or per pair of SRM Servers if you are using bidirectional protection, you cannot protect 1000 virtual machines with array-based replication and also protect an additional 500 virtual machines with vSphere Replication. This would result in a total of 1500 protected virtual machines, which exceeds the maximum limit.

As an example of a supported implementation, you could use array-based replication to protect 500 virtual machines and vSphere Replication to protect 500 virtual machines. Alternatively, you could use array-based replication to protect 900 virtual machines and vSphere Replication to protect 100 virtual machines. The combined total of virtual machines that you protect using array-based replication and vSphere Replication must not exceed 1000.
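
The rules in this subsection boil down to three checks. The shell sketch below is purely illustrative (the helper name and sample values are invented) and simply restates the limits described above:

# Hypothetical helper: validate a proposed split of protected VMs between
# array-based replication (ABR) and vSphere Replication (VR) for one SRM Server,
# or one pair of SRM Servers when bidirectional protection is used.
check_srm_split() {
    abr=$1; vr=$2
    [ "$abr" -le 1000 ] || { echo "ABR count $abr exceeds the 1000 limit"; return 1; }
    [ "$vr" -le 500 ] || { echo "VR count $vr exceeds the 500 limit"; return 1; }
    [ $((abr + vr)) -le 1000 ] || { echo "combined total $((abr + vr)) exceeds the 1000 limit"; return 1; }
    echo "OK: $abr ABR + $vr VR = $((abr + vr)) protected virtual machines"
}
check_srm_split 900 100    # OK, matches the supported example above
check_srm_split 1000 500   # fails, because the combined total of 1500 exceeds 1000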

Combined Array-Based and vSphere Replication Protection Limits with SRM:
  • Protected virtual machines per protection group: 500
  • Number of virtual machines protected by array-based replication: 1000
  • Number of virtual machines protected by vSphere Replication: 500
  • Total number of protected virtual machines (array-based replication + vSphere Replication): 1000
  • Protection groups per recovery plan: 250
  • Recovery plans: 250
  • Concurrent recovery plans: 10

Deployment Limits for vSphere Replication with SRM

SRM 5.0.x included vSphere Replication 1.0.x. In vSphere Replication 1.0.x, you deployed a vSphere Replication management server and one or more vSphere Replication servers separately. The vSphere Replication management server manages the vSphere Replication infrastructure and the vSphere Replication servers handle the replication of virtual machines. In SRM 5.1 and later, you deploy vSphere Replication as a single combined appliance that contains a vSphere Replication management server and a vSphere Replication server that is automatically registered with the vSphere Replication management server. With SRM 5.1 and later, you can deploy and register additional vSphere Replication servers to balance the replication load across your virtual infrastructure. You can deploy up to 9 additional vSphere Replication servers per vSphere Replication appliance, to make a total of 10 vSphere Replication servers per vSphere Replication appliance. For more information, see Deploy an Additional vSphere Replication Server in Site Recovery Manager Installation and Configuration.

If you use vSphere Replication independently of SRM, you cannot deploy additional vSphere Replication servers. If you use vSphere Replication independently of SRM, the vSphere Replication appliance, with its single vSphere Replication server, supports a maximum of 500 replication schedules. When using vSphere Replication with SRM, the maximum number of replication schedules per vSphere Replication appliance is also 500. However, when using vSphere Replication with SRM, you should not exceed 100 replication schedules per vSphere Replication server. This limit applies to the vSphere Replication server that is embedded in the vSphere Replication appliance as well as to any additional vSphere Replication servers that you deploy. So, if you want to use vSphere Replication with SRM to replicate more than 100 virtual machines, you must deploy additional vSphere Replication servers. How you distribute the replication schedules across the vSphere Replication servers is unimportant, as long as no single vSphere Replication server has more than 100 replication schedules and the total number of replication schedules for the vSphere Replication appliance does not exceed 500.

For example, with SRM you can deploy 4 additional vSphere Replication servers, to make a total of 5 vSphere Replication servers registered with a vSphere Replication appliance. Each vSphere Replication server can handle 100 replication schedules, to make a total of 500 replication schedules. As another example, you can deploy 9 additional vSphere Replication servers, to make a total of 10 vSphere Replication servers (the maximum). One vSphere Replication server can handle 100 replication schedules (the maximum), 8 vSphere Replication servers can each handle 45 replication schedules, and one vSphere Replication server can handle 40 replication schedules.
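
The sizing rule in the two paragraphs above reduces to a ceiling division. The sketch below uses an invented helper name and simply restates the 100-per-server and 500-per-appliance limits:

# Minimum number of vSphere Replication servers (including the one embedded in
# the appliance) needed for N replication schedules when used with SRM.
vr_servers_needed() {
    n=$1
    [ "$n" -le 500 ] || { echo "$n schedules exceed the 500-per-appliance limit"; return 1; }
    echo $(( (n + 99) / 100 ))    # ceiling of n/100
}
vr_servers_needed 100    # prints 1: the embedded server is enough
vr_servers_needed 500    # prints 5: the embedded server plus 4 additional servers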

Deployment Limits for vSphere Replication with SRM:
  • vSphere Replication servers registered to a vSphere Replication appliance, including the embedded vSphere Replication server: 10
  • Virtual machine replication schedules per vSphere Replication server: 100
  • Maximum number of virtual machines protected by vSphere Replication: 500

Limits for Using vSphere Replication Independently of SRM

If you use vSphere Replication independently of SRM, you cannot deploy additional vSphere Replication servers. The maximum number of virtual machines that you can protect using vSphere Replication without SRM is 500 per appliance. If you establish bidirectional protection, in which site A serves as the recovery site for site B and at the same time site B serves as the recovery site for site A, these limits apply across both sites. If you are using vSphere Replication for bidirectional protection, you can protect a total of 500 virtual machines across both sites.

However, it is possible to create a vSphere Replication setup with more than two sites, in which each site has a vCenter Server instance and a vSphere Replication appliance. If you have more than two sites, you can protect more than 500 virtual machines. For example, you can create a setup with three sites, Site A, Site B, and Site C:
  • Site A replicates 250 virtual machines to Site B
  • Site A replicates 250 virtual machines to Site C
  • Site B replicates 250 virtual machines to Site C
In this example, the total number of replicated virtual machines is 750, but each site only handles 500 replications:
  • Site A has no incoming replications and 500 outgoing replications
  • Site B has 250 incoming replications and 250 outgoing replications
  • Site C has 500 incoming replications and no outgoing replications
By setting up more than two vSphere Replication sites, in which no individual site exceeds 500 replications, the total number of replications can be greater than 500.

Limits for Using vSphere Replication Independently of SRM:
  • vSphere Replication appliances per vCenter Server instance: 1
  • Maximum number of protected virtual machines per vSphere Replication appliance: 500

Microsoft NLB not working properly in Unicast Mode (1556)


Details

When running Microsoft Network Load Balancing (NLB) configured in unicast mode, the network traffic is directed to only one of the nodes.

Solution

In unicast mode, all the NICs assigned to a Microsoft NLB cluster share a common MAC address. This requires that all the network traffic on the switches be port-flooded to all the NLB nodes. Normally, port flooding is avoided in switched environments when a switch learns the MAC addresses of the hosts sending network traffic through it.

The Microsoft NLB cluster masks the cluster's MAC address for all outgoing traffic to prevent the switch from learning the MAC address.

In the ESXi/ESX host, the VMkernel sends a RARP packet each time certain actions occur; for example, when a virtual machine is powered on, experiences teaming failover, performs certain vMotion operations, and so forth. The RARP packet informs the switch of the MAC address of that virtual machine. In an NLB cluster environment, this exposes the MAC address of the cluster NIC as soon as an NLB node is powered on. This can cause all inbound traffic to pass through a single switch port to a single node of the NLB cluster.

To resolve this issue, you must configure the ESXi/ESX host to not send RARP packets when any of its virtual machines is powered on.

Notes:
  • VMware recommends configuring the cluster to use NLB multicast mode even though NLB unicast mode should function correctly if you complete these steps. This recommendation is based on the possibility that the settings described in these steps might affect vMotion operations on virtual machines. Also, unicast mode forces the physical switches on the LAN to broadcast all NLB cluster traffic to every machine on the LAN. If you plan to use NLB unicast mode, ensure that:

    • All members of the NLB cluster must be running on the same ESXi/ESX host.
    • All members of the NLB cluster must be connected to a single portgroup on the virtual switch.
    • vMotion for unicast NLB virtual machines is not supported.
    • The Security Policy Forged Transmit on the Portgroup is set to Accept.
    • The transmission of RARP packets is prevented on the portgroup / virtual switch as explained in the later part of the article.
  • VMware recommends having two NICs on the NLB server.

ESXi/ESX 3.x, 4.x, and 5.x

You can prevent the ESXi/ESX host from sending RARP packets upon virtual machine power up, teaming failover, and so forth using the Virtual Infrastructure (VI) Client or vSphere Client. You can control this setting at the virtual switch level or at the port group level.

To prevent RARP packet transmission for a virtual switch:

Note: This setting affects all the port groups using the switch. You can override this setting for individual port groups by configuring RARP packet transmission for a port group.

  1. Log into the VI Client/vSphere Client and select the ESXi/ESX host.
  2. Click the Configuration tab.
  3. Click Networking under Hardware.
  4. Click Properties for the vSwitch. The vSwitch Properties dialog appears.
  5. Click the Ports tab.
  6. Click vSwitch and click Edit.
  7. Click the NIC Teaming tab.
  8. Select No from the Notify Switches dropdown.


  9. Click OK and close the vSwitch Properties dialog box.
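
    On ESXi 5.x hosts with shell access, the same vSwitch-level change can also be made from the command line. This is a sketch; the vSwitch name is a placeholder and the esxcli options should be verified on your build:

    # Disable "Notify Switches" (RARP notification) for the whole standard vSwitch.
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --notify-switches=false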

To prevent RARP packet transmission for a port group:

Note: This setting overrides the setting you make for the virtual switch as a whole.

  1. Log into the VI Client or vSphere Client and select the ESXi/ESX host.
  2. Click the Configuration tab.
  3. Click Networking under Hardware.
  4. Click Properties for the vSwitch. The vSwitch Properties dialog appears.
  5. Click the Ports tab.
  6. Click the portgroup you want to edit and click Edit.
  7. Click the NIC Teaming tab.
  8. Select No from the Notify Switches dropdown.


  9. Click OK to close the vSwitch Properties dialog.
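
    The equivalent port group-level override is also available from the ESXi 5.x shell (again a sketch; the port group name is a placeholder and the esxcli options should be verified on your build):

    # Disable "Notify Switches" for only the NLB port group, overriding the vSwitch setting.
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="NLB Portgroup" --notify-switches=false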

ESX 2.x

  1. Log into the Management Interface and click Options > Advanced Settings.
  2. Set the value for Net.NotifySwitch to 0.

    Note: Net.NotifySwitch is a global setting that impacts all virtual machines.

For more information on NLB, see the Microsoft TechNet article Network Load Balancing Technical Overview.

Note: The preceding link was correct as of October 16, 2012. If you find the link is broken, please provide feedback and a VMware employee will update the link. The information provided in this link is provided as-is and VMware does not guarantee the accuracy or applicability of this information.

For related information, see Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580).

Windows Server 2008 introduced a strong host model that does not allow different NICs to route traffic for each other. For example, if a request comes in on the second NIC and no default gateway is configured on that NIC, it will not use the first NIC to reply to the request, even though a default gateway is configured on the first NIC.

To change that behavior and return to the 2003 model, run these commands from the command prompt:

netsh interface ipv4 set interface "Local Area Connection" weakhostreceive=enable
netsh interface ipv4 set interface "Local Area Connection" weakhostsend=enable


Where Local Area Connection is the name of the network interface.
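
To confirm the change took effect, list the interface parameters again; the output should report the weak host send and receive settings as enabled (shown as a sketch):

netsh interface ipv4 show interface "Local Area Connection"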

For more information, see the Microsoft TechNet Magazine article on Strong and Weak Host Models.

Note: The preceding link was correct as of October 16, 2012. If you find the link is broken, provide feedback and a VMware employee will update the link. The information provided in this link is provided as-is and VMware does not guarantee the accuracy or applicability of this information.