Tuesday, 26 August 2014

vSphere Replication Appliance is not listed in SRM UI even after the successful deployment


Recently I faced an issue where, even after a successful deployment of the vSphere Replication Appliance, it was not listed in the SRM UI. After struggling with the issue I came up with the solution below, which I would like to share with everyone.

 
To resolve this issue:-
 
1. I opened a web browser.
2. I browsed to https://vSphereReplicationApplianceFQDNorIP:5480 to access the VAMI (Virtual Appliance Management Interface) of the appliance (a quick reachability check is sketched after this list).
3. I logged in with the username root and the password set during the appliance configuration.
4. I opened the Configuration tab and started the VRM service.
5. I clicked Logout and closed the web interface.
6. Bingo!!!! Now my vSphere Replication Appliance is listed in the SRM UI.
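If the appliance is deployed but the VAMI page does not load at all, it is worth confirming that port 5480 is reachable before digging further. Here is a minimal PowerShell sketch, assuming a hypothetical appliance name (vr-appliance.example.local); substitute your own FQDN or IP:

# Check whether the VAMI port (5480) of the vSphere Replication Appliance is reachable
$appliance = 'vr-appliance.example.local'   # placeholder; use your appliance FQDN or IP
$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect($appliance, 5480)
    Write-Host "VAMI port 5480 is reachable on $appliance"
}
catch {
    Write-Host "Cannot reach port 5480 on $appliance : $($_.Exception.Message)"
}
finally {
    $client.Close()
}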

 

Monday, 25 August 2014

Installing SRM to Use with a Shared Recovery Site

With SRM, you can connect multiple protected sites to a single recovery site. The virtual machines on the protected sites all recover to the same recovery site. This configuration is known as a shared recovery site, a many-to-one, or an N:1 configuration.
In the standard one-to-one SRM configuration, you use SRM to protect a specific instance of vCenter Server by pairing it with another vCenter Server instance. The first vCenter Server instance, the protected site, recovers virtual machines to the second vCenter Server instance, the recovery site.
Another example is to have multiple protected sites that you configure to recover to a single, shared recovery site. For example, an organization can provide a single recovery site with which multiple protected sites for remote field offices can connect. Another example for a shared recovery site is for a service provider that offers business continuity services to multiple customers.
In a shared recovery site configuration, you install one SRM Server instance on each protected site. On the recovery site, you install multiple SRM Server instances to pair with each SRM Server instance on the protected sites. All of the SRM Server instances on the shared recovery site connect to the same vCenter Server instance. You can consider the owner of an SRM Server pair to be a customer of the shared recovery site.
You can use either array-based replication or vSphere Replication or a combination of both when you configure SRM Server to use a shared recovery site.

An organization has two field offices and a head office. Each of the field offices is a protected site. The head office acts as the recovery site for both of the field offices. Each field office has an SRM Server instance and a vCenter Server instance. The head office has two SRM Server instances, each of which is paired with an SRM Server in one of the field offices. Both of the SRM Server instances at the head office extend a single vCenter Server instance.

Field office 1
  • SRM Server A
  • vCenter Server A
Field office 2
  • SRM Server B
  • vCenter Server B
Head office
  • SRM Server C, which is paired with SRM Server A
  • SRM Server D, which is paired with SRM Server B
  • vCenter Server C, which is extended by SRM Server C and SRM Server D

Example of Using SRM in a Shared Recovery Site Configuration
Architecture of SRM in a Shared Recovery Site Configuration
VMware vCenter Site Recovery Manager (SRM) is a disaster recovery product that uses array replication technologies to fail over from one site to another. Shared Recovery Site (also called N-to-1) is a feature of SRM 4.x and 5.0. This feature allows multiple SRM environments to be managed by a single vCenter Server 4.x or 5.x instance on the recovery side. Each environment is identified by a custom plug-in identifier, called an extension.
 
Note: Shared Recovery Site is not supported on the protected site. It is only supported on the recovery site.
 
To enable the Shared Recovery Site feature during the SRM installation, the command line option CUSTOM_SETUP=1 must be passed as an argument to the installer when deploying SRM. This option opens additional windows during the installation which allow a user to create a custom SRM plug-in identifier. These windows are not visible during a standard SRM install. This identifier must be created on both the protected SRM site and the recovery SRM site. The custom SRM plug-in identifier must be identical on the source and the target side.

Here is an example of the command line to install SRM 4.0 with the Shared Recovery Site feature:
"C:\Documents and Settings\Administrator\My Documents\Software\SRM\VMware-srm-4.0.0-175235.exe" /V"CUSTOM_SETUP=1"
The procedure is the same for SRM 4.1 and 5.x


Additional Information

For each shared recovery site customer, you must install SRM once at the production/protected site and again at the recovery site, using the same custom extension at both sites. Each shared recovery site SRM Server installation must have a dedicated Windows server (virtual or physical machine). You cannot install multiple instances of the SRM Server on a single host.

Installing SRM with a custom extension does not preclude you from creating an additional installation, extending the same instance of vCenter that uses the default extension. Similarly, installing SRM with the default extension does not preclude you from creating an additional installation, extending the same instance of vCenter that uses a custom extension. You can run SRM with custom extensions only; the default extension is not a requirement.
 
Thanks to VMware Documentation
Source KB:-

 

Cannot pair vCenter Site Recovery Manager sites after reinstalling one site with the Remove Database Contents option (2052841)

Symptoms

After uninstalling vCenter Site Recovery Manager (SRM) with the Remove Database Contents option at either the primary or remote site, the SRM server is unable to pair the sites after reinstalling SRM.

Cause

You cannot pair a newly installed SRM site with an existing SRM site because it is not possible to recreate all SRM data structures based on information from the remote SRM site alone.

Resolution

To resolve this issue, you must uninstall and reinstall SRM at both sites when you use the Remove Database Contents option.
Source KB:-
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2052841&src=vmw_so_vex_ragga_1012

Thursday, 21 August 2014

Multipathing Options for VMFS and NFS Datastores

To maintain a constant connection between a host and its storage, ESXi supports multipathing. Multipathing is a technique that lets you use more than one physical path that transfers data between the host and an external storage device.
In case of a failure of any element in the SAN network, such as an adapter, switch, or cable, ESXi can switch to another physical path, which does not use the failed component. This process of path switching to avoid failed components is known as path failover.
In addition to path failover, multipathing provides load balancing. Load balancing is the process of distributing I/O loads across multiple physical paths. Load balancing reduces or removes potential bottlenecks.
Note
Virtual machine I/O might be delayed for up to sixty seconds while path failover takes place. These delays allow the SAN to stabilize its configuration after topology changes. In general, the I/O delays might be longer on active-passive arrays and shorter on active-active arrays.
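To see how many physical paths a host actually has to each storage device, and their state, a minimal PowerCLI sketch (assuming a hypothetical host name, esxi01.example.local) can list them:

# List every physical path for each disk device on the host, with its state
# (active / standby / dead). esxi01.example.local is a placeholder host name.
Get-VMHost -Name 'esxi01.example.local' |
    Get-ScsiLun -LunType disk |
    Get-ScsiLunPath |
    Select-Object Name, State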
To select the multipathing policy for a VMFS datastore, the Connectivity and Multipathing option is available:-
Select datastore > Manage tab > Settings > Connectivity and Multipathing
From here you can select the multipathing policy for the datastore.
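The same change can also be made from the command line. Here is a minimal PowerCLI sketch, assuming hypothetical vCenter, datastore and host names; it sets the path selection policy of the LUN backing a VMFS datastore to Round Robin:

# Placeholders: vcenter.example.local, VMFS-Datastore01, esxi01.example.local
Connect-VIServer -Server 'vcenter.example.local'
$ds  = Get-Datastore -Name 'VMFS-Datastore01'
$esx = Get-VMHost -Name 'esxi01.example.local'

# Find the device (naa ID) backing the datastore, then change its policy on this host
$naa = $ds.ExtensionData.Info.Vmfs.Extent[0].DiskName
Get-ScsiLun -VmHost $esx -CanonicalName $naa | Set-ScsiLun -MultipathPolicy RoundRobin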
 
To select the multipathing policy for an NFS datastore
For NFS datastores there is no Connectivity and Multipathing option; from the same screen you can only check how many ESXi hosts the datastore is connected to:-
Select datastore > Manage tab > Settings
 
Then, to configure multipathing for an NFS datastore, you need to create a vSwitch with multiple NICs, create a VMkernel port on this vSwitch, and then open the properties of the vSwitch and set the NIC teaming policy as shown in the screenshot. A PowerCLI sketch of the same steps follows.
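Here is a minimal PowerCLI sketch, assuming hypothetical host, NIC and IP values; the IP-hash teaming policy used here is only one possible choice and requires a matching EtherChannel/static LAG configuration on the physical switch:

# Placeholders: esxi01.example.local, vmnic2/vmnic3, 192.168.50.10
$esx = Get-VMHost -Name 'esxi01.example.local'

# vSwitch with two physical uplinks
$vsw = New-VirtualSwitch -VMHost $esx -Name 'vSwitch-NFS' -Nic vmnic2,vmnic3

# VMkernel port (and port group) used for NFS traffic
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup 'NFS-VMkernel' -IP 192.168.50.10 -SubnetMask 255.255.255.0

# NIC teaming policy on the new port group
Get-VirtualPortGroup -VMHost $esx -Name 'NFS-VMkernel' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP -MakeNicActive vmnic2,vmnic3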

 

Options for Enabling or disabling Lockdown mode on an ESXi host

When you enable Lockdown mode, only the vpxuser has authentication permissions. Other users cannot perform any operations directly on the host. Lockdown mode forces all operations to be performed through vCenter Server. When a host is in Lockdown mode, you cannot run vCLI commands against it from an administration server, from a script, or from the vMA. In addition, external software or management tools might not be able to retrieve or modify information from the ESXi host.

For more information on Lockdown mode, see the vSphere Security Guide (for vSphere 5.x) or the ESXi Configuration Guide (for earlier versions).

You can enable Lockdown mode from the Direct Console User Interface (DCUI).

Notes:
  • These procedures are for ESXi only.
  • The host profile does not have a setting to enable or disable Lockdown mode.
  • Configure Lockdown Mode will be grayed out if vCenter is down or the host is disconnected from vCenter.
  • None of the troubleshooting services will work after Lockdown mode is enabled.
If you enable or disable Lockdown mode using the DCUI, permissions for users and groups on the host are discarded. To preserve these permissions, you must enable or disable Lockdown mode using the vSphere Client connected to vCenter Server.

To enable Lockdown mode:
  1. Log directly into the ESXi host.
  2. Open the DCUI on the host.
  3. Press F2 for Initial Setup.
  4. Toggle the Configure Lockdown Mode setting.

Using troubleshooting services

By default, troubleshooting services on ESXi hosts are disabled. You can enable these services if necessary; a PowerCLI sketch for enabling them from vCenter Server is included after the list below. Troubleshooting services can be enabled or disabled irrespective of the Lockdown mode on the host.

The various troubleshooting services are:
  • Local Tech Support Mode (TSM): You can enable this service to troubleshoot issues locally. 
  • Remote Tech Support Mode Service (SSH): You can enable this service to troubleshoot issues remotely. 
  • Direct Console User Interface Service (DCUI): When you enable this service while running in Lockdown mode, you can log in locally to the Direct Console User Interface as the root user and disable Lockdown mode. You can then troubleshoot the issue using a direct connection to the vSphere Client or by enabling Tech Support Mode.

    For information on Tech Support Mode, see Tech Support Mode for Emergency Support (1003677) or Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
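Here is a minimal PowerCLI sketch for turning these services on from vCenter Server, assuming a hypothetical host name; the service keys are TSM (Local Tech Support Mode / ESXi Shell), TSM-SSH (Remote Tech Support Mode / SSH) and DCUI:

# Placeholder host name: esxi01.example.local
$esx = Get-VMHost -Name 'esxi01.example.local'

# Start the SSH (Remote TSM) service; use 'TSM' for the local ESXi Shell or 'DCUI' for the DCUI service
Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq 'TSM-SSH' } | Start-VMHostService

# Optionally, make the service start automatically with the host
Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq 'TSM-SSH' } | Set-VMHostService -Policy On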

Enabling or disabling the Lockdown mode using ESXi Shell

You can run these commands from the ESXi Shell to verify the status of Lockdown mode and to enable or disable it.

ESXi 5.x and 4.1
  • To check if Lockdown mode is enabled: vim-cmd -U dcui vimsvc/auth/lockdown_is_enabled
  • To disable Lockdown mode: vim-cmd -U dcui vimsvc/auth/lockdown_mode_exit
  • To enable Lockdown mode: vim-cmd -U dcui vimsvc/auth/lockdown_mode_enter 
ESXi 4.0
  • To check if Lockdown mode is enabled: vim-cmd -U dcui vimsvc/auth/admin_account_is_enabled
  • To disable Lockdown mode: vim-cmd -U dcui vimsvc/auth/admin_account_enable
  • To enable Lockdown mode: vim-cmd -U dcui vimsvc/auth/admin_account_disable
Note: To check the status or disable Lockdown mode when Lockdown mode is already enabled, you must enter the Direct Console User Interface Service (DCUI) and then run these commands on the ESXi host.

Enabling or disabling Lockdown mode using PowerCLI

To enable Lockdown mode using PowerCLI, run this command:

(Get-VMHost <hostname> | Get-View).EnterLockdownMode()

To verify whether Lockdown mode is enabled on your hosts, run:

Get-VMHost | Select Name,@{N="LockDown";E={$_.Extensiondata.Config.adminDisabled}} | ft -auto

To disable Lockdown mode, run this command:

(Get-VMHost <hostname> | Get-View).ExitLockdownMode()

To batch modify Lockdown mode using PowerCLI, save this text in a *.PS1 file and run with PowerCLI:

$vCenter = 'vCenterServer_Name_or_IP_address'
Connect-VIServer $vCenter

$Scope = Get-VMHost   # This will change the Lockdown mode on all hosts managed by this vCenter Server

foreach ($ESXhost in $Scope) {
    (Get-VMHost $ESXhost | Get-View).ExitLockdownMode()     # To DISABLE Lockdown mode
    # (Get-VMHost $ESXhost | Get-View).EnterLockdownMode()  # To ENABLE Lockdown mode
}

Disconnect-VIServer -Server $vCenter -Confirm:$false

For more information, see the vSphere Command-Line Interface Documentation.
 
Source KB:-