Purpose
This article provides explanations and guidelines on resolving networking issues you may encounter during a vSphere Storage Appliance (VSA) installation or during its network configuration.
Resolution
Ensure the vSphere vCenter Server has only a single IP address. This is a requirement until the VSA Cluster is deployed. For more information, see the VMware vSphere Storage Appliance 1.0 Release Notes.
Prior to VSA Manager installation, you must verify that vCenter Server:
- Is using a static IP address.
- If the static IP address is assigned through DHCP, the hostname must resolve to the IP address, and the IP address must resolve back to the hostname.
- Is using IPv4 only. VSA 1.0 does not support IPv6.
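The forward/reverse resolution requirement above can be spot-checked with a short script. This is an illustrative sketch using Python's standard resolver; it assumes the machine you run it on uses the same DNS configuration as vCenter Server:

```python
import socket

def check_resolution(hostname: str):
    """Return (ip, reverse_name) for a hostname.

    VSA 1.0 requires that the vCenter hostname resolves to its IP address
    and that the IP address resolves back to the hostname. Compare the
    returned reverse_name with the hostname you started from.
    """
    ip = socket.gethostbyname(hostname)         # forward lookup
    reverse_name = socket.gethostbyaddr(ip)[0]  # reverse lookup
    return ip, reverse_name

# Example (the hostname below is a placeholder, not from this article):
# ip, name = check_resolution("vcenter.example.com")
# print(ip, name)
```

If the reverse lookup raises an error or returns an unrelated name, fix the DNS records (or hosts file entries) before installing VSA Manager.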
Prior to the VSA Cluster deployment, you must verify that the vSphere vCenter Server and the ESXi 5.0 hosts designated as the VSA nodes:
- Are in the same subnet.
- Have static IP addresses set.
- Have the same Default Gateway configured.
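The same-subnet requirement above is easy to verify programmatically. A minimal sketch using Python's standard `ipaddress` module; the addresses and /24 mask below are examples, not values from the article:

```python
import ipaddress

def same_subnet(addresses, netmask: str) -> bool:
    """True if all host addresses fall within a single IPv4 subnet
    for the given netmask (e.g. "255.255.255.0")."""
    networks = {
        ipaddress.ip_network(f"{addr}/{netmask}", strict=False)
        for addr in addresses
    }
    return len(networks) == 1

# vCenter Server plus three ESXi hosts (example addresses):
hosts = ["10.10.10.200", "10.10.10.201", "10.10.10.202", "10.10.10.203"]
print(same_subnet(hosts, "255.255.255.0"))  # True
```

If this returns False for your planned addresses, move the hosts onto a common subnet (and confirm they share the same default gateway) before deploying the cluster.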
Using VLANs:
- The vSphere vCenter Server and the ESXi management vmkernel port must be in the front-end VLAN before you start deploying the VSA Cluster. You can place the ESXi management vmkernel port on a VLAN using the DCUI (Direct Console User Interface): select Customize System > Configure Management Network > VLAN (optional) and set the VLAN ID. Using VLAN 0 is the same as having no VLAN.
- All the ESXi 5.0 NICs must be connected to trunking ports on the physical Switch as the ESXi 5.0 host may send packets with different VLANs through its NICs.
- The VLAN protocol on the physical switch must be IEEE 802.1Q and NOT ISL (Inter-Switch Link, Cisco proprietary). ISL is used between Cisco switches.
If a previous VSA installation has failed, make sure to:
- Clear any existing alerts about network redundancy on every ESXi host. See Unable to select ESXi host during VSA Cluster Installation (2005092).
- Remove any non-default networking created by the failed installation (remove vSwitch1, front-end and back-end portgroups) on every host.
Networking best practices:
- If during the VSA installation you select DHCP for the Feature IP (vMotion), a DHCP server must be running on the front-end network, and the addresses it assigns for vMotion must not change (static reservations). It is recommended to set these IPs manually in the VSA installation wizard instead.
Frequently Asked Questions:
Q: Can I install VSA Cluster in hosts with more than 1 NIC on vSwitch0 (default setup)?
A: Assuming the host has 4 NICs: if vSwitch0 has 1 or 2 NICs, the installation works; if it has 3 or 4 NICs, the installation fails. It does not matter which NICs are in vSwitch0 at the beginning, as the VSA installer may rearrange them into a different configuration.
Q: If the ESXi host has more than 4 NICs and some of them are not suitable for VSA (not connected to trunking port or other reason), how does VSA decide which NICs it should take for VSA?
A: VSA does not detect unsuitable NICs and may use them when configuring the networking, so the cluster may not work if such a NIC is chosen. All NICs present on the host during VSA installation should have a similar configuration.
NIC selection is as follows:
The first NIC is assigned to vSwitch0, which is already connected.
For the second NIC, the selection order is:
- a live NIC port on a different NIC card that is also connected to a different physical switch; or
- a live NIC port on a different NIC card; or
- a live NIC port connected to a different physical switch; or
- the next available NIC.
Note: Switches can only be differentiated if they are Cisco switches.
If the wrong NICs are being selected and you are unable to change the network configuration, the easiest solution is to assign the two appropriate NICs to vSwitch0 before starting the VSA Cluster deployment. The other vSwitch is assigned the remaining two NICs. As long as your selection for vSwitch0 is correct, networking should be all set.
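The second-NIC preference order described above can be modeled as a simple scoring function. This is an illustrative sketch only, not VSA's actual code; the `Nic` record and its fields are invented for the example:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Nic:
    name: str
    live: bool              # link is up
    card: str               # physical NIC card identifier
    switch: Optional[str]   # physical switch ID (e.g. learned via CDP), None if unknown

def pick_second_nic(first: Nic, candidates: List[Nic]) -> Nic:
    """Pick the second uplink following the preference order in the article."""
    def score(n: Nic) -> int:
        if n.live and n.card != first.card and n.switch and n.switch != first.switch:
            return 0  # best: different card AND different physical switch
        if n.live and n.card != first.card:
            return 1  # different card
        if n.live and n.switch and n.switch != first.switch:
            return 2  # different physical switch
        return 3      # fall back to the next available NIC
    return min(candidates, key=score)

first = Nic("vmnic0", live=True, card="card0", switch="sw-A")
others = [
    Nic("vmnic1", live=True, card="card0", switch="sw-A"),
    Nic("vmnic2", live=True, card="card1", switch="sw-B"),
]
print(pick_second_nic(first, others).name)  # vmnic2
```

The model makes the practical point above concrete: a NIC on a different card and a different physical switch always wins, so pre-assigning such a pair to vSwitch0 guarantees a sane result.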
Q: If the NICs used for VSA are not selected by the user but picked automatically by the installer, how does it ensure a proper distribution of NICs between the two physical switches for network redundancy (a single physical switch is supported, but provides no redundancy)?
A: The VSA installer uses the Cisco Discovery Protocol (CDP) for Cisco switches and the Link Layer Discovery Protocol (LLDP) for non-Cisco switches to try to determine the optimally redundant network configuration when matching uplinks to vSwitches. If this information cannot be obtained, uplinks are matched to vSwitches at random.
Q: Does the VSA appliance support DNS?
A: No, all data for installation must be supplied as an IP address.
Q: How should I do testing/pinging to ensure correct connectivity?
A:
Front-end network: this should be accessible from the vCenter Server. Simply ping from the vCenter Server to verify connectivity and the absence of IP conflicts.
There could be a scenario where the management PortGroup is on a different VLAN than VSA-Front End and VSA-VMotion. In this case, create vSwitch1 and the VSA-VMotion PortGroup manually, and ensure that the ESXi hosts can vmkping each other. This guarantees that all ESXi hosts have access to the front-end VLAN.
Back-end network: vCenter Server cannot access the back end. The easiest way to ensure all hosts have back-end connectivity is to create vSwitch1 and the VSA-VMotion PortGroup manually, but assign that PortGroup the back-end VLAN ID instead of the front-end VLAN ID. Once done, ensure all ESXi hosts can vmkping each other. This guarantees that all ESXi hosts have access to the back-end VLAN.
Note: In both cases, it is best to test one NIC at a time, and verify that every NIC that could at any point use either VLAN can indeed access it.
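Because every host should vmkping every other host, the number of checks grows quickly. A small helper can enumerate the full test matrix so nothing is skipped; the host addresses below are examples only:

```python
from itertools import permutations

def vmkping_pairs(host_ips):
    """All ordered (source, destination) pairs each ESXi host should vmkping."""
    return [(src, dst) for src, dst in permutations(host_ips, 2)]

# Three ESXi hosts on the front-end (or back-end) VLAN (example addresses):
hosts = ["10.10.10.201", "10.10.10.202", "10.10.10.203"]
for src, dst in vmkping_pairs(hosts):
    print(f"on {src}: vmkping {dst}")
```

For n hosts this yields n*(n-1) checks; run the listed vmkping command from each host's console or SSH session and confirm every pair succeeds on both VLANs.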
For more information, see VMware vSphere Storage Appliance FAQ (2001389).
SCENARIOS
Problem A
Issue
VSA Installation fails with the following error: Failed to ping VSA service on VM 10.10.10.202.
Reason
vCenter cannot reach the VSA appliances.
Resolution
This can have many different causes, but a common one is that the vCenter Server is not in the same VLAN as the front-end network. If the vCenter Server is a virtual machine, connect it to a PortGroup with the front-end VLAN. If the vCenter Server is a physical host, connect it to a physical switch port associated with the front-end VLAN.
Problem B
Issue
VSA Installation fails with the following error: CreateClusterEvent event: Failed to complete network configuration for specifiedhost.localhost, hence reverted all other hosts.
Reason
The networking of at least one of the ESXi hosts is not set up correctly. You can identify which host from the Datacenter Tasks tab.
Resolution
See the suggestions about removing non-default networking after a failed installation. Make sure that the first 4 NICs (as they appear in Host > Configuration > Network Adapters) are suitable for VSA use.
Problem C
Issue
- VSA Cluster creation fails with the following error:
Unable to reconfigure back end interface
- In VSAManager.log you may see entries similar to:
2011-11-02 11:39:38,003 319 [ClusterService] [Thread-2251] INFO - Connect to VSA: 10.1.11.24
2011-11-02 11:39:38,066 325 [ClusterService] [Thread-2251] INFO - Successfully login to VSA: 10.1.11.24
2011-11-02 11:39:38,066 132 [SvaMessagingService] [Thread-2251] INFO - createStorageCluster()
2011-11-02 11:39:38,207 166 [SvaMessagingService] [Thread-2251] ERROR - createCluster() failed. Unable to reconfigure back end interface.
2011-11-02 11:39:38,222 61 [PersistenceService] [Thread-2251] INFO - clear Data - Key: createClusterTask
2011-11-02 11:39:38,269 169 [BaseEventListener] [Thread-2251] INFO - Stopping the JMS listener connection on 10.1.11.24
2011-11-02 11:39:38,269 169 [BaseEventListener] [Thread-2251] INFO - Stopping the JMS listener connection on 10.1.11.22
2011-11-02 11:39:38,269 169 [BaseEventListener] [Thread-2251] INFO - Stopping the JMS listener connection on 10.1.11.24
2011-11-02 11:39:38,269 169 [BaseEventListener] [Thread-2251] INFO - Stopping the JMS listener connection on 10.1.11.22
2011-11-02 11:39:38,269 188 [CreateClusterThread] [Thread-2251] ERROR - Create cluster failed: java.lang.Exception: createCluster() failed. Unable to reconfigure back end interface.
Reason
The IP addresses of the back-end interface (192.168.0.1, 192.168.0.2, ...) are already in use on the network (2 or 3 IPs, depending on whether the VSA cluster has 2 or 3 nodes).
Resolution
Make sure that these IP addresses are not assigned to or in use by any operating system on the back-end network. To verify whether the IPs are in use, ping them from any machine with access to the back-end network (if using VLANs), or from the vCenter Server (if not using VLANs), and ensure that nothing responds.
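The checks above can be scripted from a machine on the back-end network. A hedged sketch: the address range 192.168.0.1..n follows the pattern described above, but shelling out to the system `ping` with Linux-style flags is an assumption about your environment:

```python
import subprocess

def backend_ips(node_count: int):
    """Default VSA back-end addresses: 192.168.0.1 .. 192.168.0.<node_count>."""
    return [f"192.168.0.{i}" for i in range(1, node_count + 1)]

def ip_responds(ip: str, timeout_s: int = 1) -> bool:
    """True if the address answers one ICMP echo (assumes Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Example, run from a machine with access to the back-end network:
# for ip in backend_ips(3):
#     print(ip, "CONFLICT: already in use" if ip_responds(ip) else "free")
```

Every address must come back "free" before you deploy; any response means another device holds an address the VSA back end needs.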
Additional Information
See Also
Source:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007363