Q. The documentation explains how to replace a VSA member, but does not explain how to add a new member (for instance, if you have two and want to add a third). Is this possible?
Q. Can VSA be installed on non-English vCenter Servers?
A. Yes. VSA 1.0 supports i18n level 0, which means that the English version of VSA can be installed on non-English operating systems.
Q. The documentation states that all hard disks must be only SATA or SAS (a combination of SATA+SAS is not supported) and must have the same capacity. Is this enforced? If so, is there any workaround?
A. This is not strictly enforced. The software finds the ESXi host with the least amount of local storage available, and all hosts use that amount of storage. For example, if there are three hosts providing local storage of 3 TB, 2 TB, and 2.5 TB to the VSA Cluster, the VSA Cluster only consumes 2 TB from each node. On the other two nodes, the additional storage is unavailable for use.
For more information, see VSA Cluster Requirements in the VSA documentation.
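The least-capacity rule described above can be sketched in a few lines of Python. This is an illustrative model only (the function names are not a VMware API): each host contributes only as much storage as the smallest host offers, and the remainder on larger hosts is stranded.

```python
# Illustrative model of the VSA "least local storage wins" rule.
# Function names are assumptions for this sketch, not a VMware API.

def cluster_consumption(capacities_tb):
    """Return (per-host consumption, storage stranded on each host)."""
    per_host = min(capacities_tb)  # smallest host sets the contribution
    stranded = [c - per_host for c in capacities_tb]
    return per_host, stranded

per_host, stranded = cluster_consumption([3.0, 2.0, 2.5])
print(per_host)   # 2.0 TB consumed from each node
print(stranded)   # [1.0, 0.0, 0.5] TB unavailable per node
```

With the 3 TB / 2 TB / 2.5 TB example from the answer, only 2 TB is consumed from each node, leaving 1 TB and 0.5 TB unused on the larger hosts.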
Q. Can the ESXi hosts running VSA have more than 4 NICs?
A. Yes, but they will not be used for VSA traffic by default.
For more information, see VSA Cluster Network Architecture in the VSA documentation.
Q. Are there any production disadvantages to using a crossover cable connection between the VSA back-end interfaces?
A. This is an untested configuration and is therefore not supported, though it may work.
Q. Can you configure VSA to do NIC teaming with more than two physical NICs (whether you do from VSA or from vCenter Server/ESX)?
A. Yes, from the vSphere Client on each host.
The VSA installer does not utilize more than 4 NIC ports configured as active/standby uplinks across the two VSA virtual switches. However, the administrator can manually configure additional active uplinks for either of the vSwitches or their component port groups via vCenter Server.
Note: This can be used to add redundancy, but not to increase network bandwidth between any two ESXi hosts. None of the vSphere NIC teaming load-sharing policies load balance/share network IO across multiple active teamed uplinks for the same TCP connection. In a three-node VSA cluster, the IP Hash load-balancing policy can be used to distribute network traffic among multiple uplinks, such that each pair of ESXi hosts communicates over a different channel.
For more information, see VSA Cluster Network Architecture in the VSA documentation.
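The effect of an IP-hash policy in a three-node cluster can be illustrated with a simplified model. This is not the exact ESXi hashing implementation; the XOR-of-last-octets formula below is a common simplification used to show why different host pairs can land on different uplinks.

```python
# Simplified illustration of an IP-hash uplink choice (assumption:
# this is NOT the exact ESXi algorithm, just a demonstration of the
# principle that each host pair can hash to a different uplink).

def pick_uplink(src_last_octet, dst_last_octet, num_uplinks):
    # XOR the last octets of the two IPs, then take the result
    # modulo the number of active uplinks.
    return (src_last_octet ^ dst_last_octet) % num_uplinks

# Three hosts ending in .11, .12, .13 with two active uplinks:
print(pick_uplink(11, 12, 2))  # host1 <-> host2 traffic
print(pick_uplink(11, 13, 2))  # host1 <-> host3 traffic
```

Because the hash depends on both endpoints, traffic between different pairs of hosts can use different uplinks, even though any single connection always stays on one uplink.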
Q. Where does the VSA Cluster Service run if you have 2 ESXi nodes? How about 3 nodes?
A. The VSA Cluster Service is always installed on the vCenter Server machine. In a two-node VSA cluster, the service participates in the cluster. In a three-node cluster, the VSA Cluster Service is not configured and does not participate in the cluster.
For more information, see VSA Cluster Architecture in the VSA documentation.
Q. How do you restart the VMware VSA Cluster Service?
A. The VMware VSA Cluster Service appears as a normal Windows service. Connect to the server where the VSA Cluster Service is installed, open the Windows Services list (services.msc), and locate the VMware VSA Cluster Service. To restart it, right-click the service and choose Restart.
Q. In a 2-node cluster, if the VSA Manager Service stops, does the VSA cluster remain online?
A. Yes. The VSA cluster is a quorum-based system. Each VSA has a vote, and the VSA Cluster service provides the third vote. As long as there are 2 votes, the cluster is online.
- If both VSAs are online and the VSA Cluster Service is offline, the cluster is online.
- If one VSA is online and the VSA Cluster Service is online, the cluster is online.
- If one VSA is offline and the VSA Cluster Service is offline, the cluster is down.
For more information, see How a VSA Cluster Handles Failure in the VSA documentation.
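The voting rule above reduces to simple majority arithmetic. The sketch below is illustrative only (the function name is an assumption, not a VMware API): in a two-node cluster there are three voters, and the cluster stays online with any two votes.

```python
# Illustrative model of the 2-node VSA quorum: one vote per VSA plus
# a third vote from the VSA Cluster Service on vCenter Server.
# (Function name is an assumption for this sketch.)

def cluster_online(vsa1_up, vsa2_up, cluster_service_up):
    """Cluster is online while at least 2 of the 3 voters are up."""
    votes = sum([vsa1_up, vsa2_up, cluster_service_up])
    return votes >= 2

print(cluster_online(True, True, False))   # both VSAs up: online
print(cluster_online(True, False, True))   # one VSA + service: online
print(cluster_online(True, False, False))  # single vote: down
```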
Q. Is it supported to have vCenter Server installed in a virtual machine that is stored on the VSA?
Q. Where are logs for the VSA stored?
Q. Can logs be collected from vCenter Server?
Q. I am experiencing an issue performing a manual VSA installation. Where are the logs for the VSA installer?
Q. Imagine you have an ESXi host running VSA that is experiencing an issue. Can you use the Replace VSA option to reinstall VSA on that node? If not, is there any workaround to avoid having to use an additional host?
Q. After VSA Manager installation, the plug-in does not appear in vCenter Server. Is it possible to reinstall the VSA Manager Plug-in without reinstalling VSA Manager itself?
A. Yes. Ensure that vCenter Server's Tomcat service is running, and enable the VSA Manager Plug-in through vCenter Server's Plug-in Manager. For more information, see Enable the VSA Manager Plug-In in the VSA documentation.
Q. Can I use VLAN only for the back-end network?
A. Yes. Leave VLAN 0 (zero) for the front-end network and assign the desired VLAN to the back-end network. VLAN 0 means that no VLAN is used.
Q. Is the use of VLANs enforced? If so, is there any tweak available to make VSA work without VLANs?
A. VLANs are recommended but not required. For more information, see Reconfigure the VSA Cluster Network and Network Switch Requirements for a VSA Cluster in the VSA documentation.
Q. Is there any VSA configuration maximum for sizes of local storage per host/per LUN?
A. Yes. There is a configuration maximum for the amount of local storage per ESX host that a VSA virtual machine uses. The limit is 32 TB of storage, spread evenly across 8 VMFS virtual disks used as data disks by a VSA virtual machine. This equals 64 TB of raw storage for an ESX host exporting RAID10 storage, and about 42.7 TB for an ESX host exporting RAID5 storage configured as two RAID sets of 4 drives. For more information, see VSA Cluster Capacity in the VSA documentation.
Notes:
- Approximately 20 GB of storage remain unallocated by VSA on the ESXi host's local datastores.
- The ESXi host's local storage hardware may impose additional limits, such as a 2 TB maximum per storage device.
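The raw-capacity figures quoted above can be checked with back-of-the-envelope arithmetic. This is an assumed calculation for illustration, not a VMware sizing tool: RAID10 mirrors everything (raw = 2x usable), while RAID5 with 4-drive sets loses one drive per set to parity (usable = 3/4 of raw).

```python
# Back-of-the-envelope check of the capacity figures above
# (illustrative arithmetic only, not a VMware sizing tool).

EXPORTED_TB = 32.0  # per-host limit from the answer above

def raw_for_raid10(exported_tb):
    # RAID10 mirrors all data: raw capacity is double the usable.
    return exported_tb * 2

def raw_for_raid5(exported_tb, drives_per_set=4):
    # RAID5 loses one drive per set to parity:
    # usable = raw * (n - 1) / n, so raw = usable * n / (n - 1).
    return exported_tb * drives_per_set / (drives_per_set - 1)

print(raw_for_raid10(EXPORTED_TB))           # 64.0 TB raw
print(round(raw_for_raid5(EXPORTED_TB), 1))  # 42.7 TB raw
```

These reproduce the 64 TB (RAID10) and roughly 42.7 TB (RAID5, two 4-drive sets) figures in the answer.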
Q. How do I know if my hardware supports VSA?
A. For a list of all hardware qualified to run VSA, see the VMware Compatibility Guide. Ensure that vSphere Storage Appliance is selected under the Features criteria.
Q. Is VMX Swapping enabled or disabled by default?
A. VMX swapping is enabled by default. You can prevent virtual machines from VMX swapping to the VSA datastores by disabling VMX swapping on each virtual machine that runs in the VSA cluster. For more information, see Memory Overcommitment Not Supported in a VSA Cluster in the VSA documentation.
Q. Are the NFS datastores mounted on all the ESXi hosts in the target Datacenter or the target Cluster?
A. The NFS datastores are mounted to all ESXi hosts at the Datacenter level. However, vMotion does not work unless the VSA-VMotion port group is added to every cluster. For more information, see Reconfigure the VSA Cluster Network in the VSA documentation.
Q. What is the username and password for console access to the VSA nodes?
A. The username is svaadmin. The default password is svapass. For more information, see Change the VSA Cluster Password in the VSA documentation.
Q. Where is the VSA cluster password stored?
A. The username and password are stored as an actual Linux user and password on each VSA. The password is encrypted by Linux. The VSA cluster coordinates the password update on each VSA.
Q. If vCenter Server or VSA Manager must be reinstalled to recover a VSA cluster, which password do I use: the default VSA password or the new one?
A. When reinstalling vCenter Server or the VSA Manager to recover a cluster, use the current password, not the original/default password.
Q. What is the minimum amount of local disks on each ESXi host?
A. VSA 1.0 supports a configuration of 4, 6, or 8 local disks per host. For more information, see the Release Notes.
Q. My ESXi has more than one datastore. Can I specify the datastore that VSA should use?
A. VSA 1.0 expects only a single datastore on the ESXi server, and you cannot select a different datastore.
It may be possible to hide other datastores during VSA installation, and this should work. However, procedures such as replacing a node can run into problems if multiple datastores are present. Therefore, a single datastore on each ESXi host in the VSA Cluster is the only supported configuration.
Q. Does the vSphere Storage Appliance support vSphere APIs for Array Integration (VAAI)?
A. VSA supports the VAAI NAS space reservation interface. This mechanism is used whenever the creation of a non-Thin virtual disk on a NAS-VAAI-capable datastore is requested, such as when creating, cloning, or migrating a virtual machine. VSA does not support the VAAI NAS snapshot offloading interface.
Q. Can I protect virtual machines that reside on vSphere Storage Appliance (VSA) datastores with Site Recovery Manager (SRM) 5.0?
A. Yes, virtual machines that reside on the vSphere Storage Appliance (VSA) can be protected by SRM 5.0 using vSphere Replication (VR). VSA does not require a Storage Replication Adapter (SRA) to work with SRM 5.0. This information is provided in the SRM Release Notes as well.
Q. Can I use VSA with vCenter in Linked mode?
Q. Can I change the IPs and VLANs of a VSA cluster after it has been installed?
A. Yes, use the Reconfigure VSA Cluster Network option from the VSA Manager tab.
Q. Can I mount the VSA storage as NFS shares to entities outside the VSA Cluster?
Q. In the event of a planned power outage to our physical environment, what is the procedure for shutting down the ESXi hosts that contain VSA?
A. To shut down the ESXi hosts that contain VSA:
- Shut down all virtual machines, including production virtual machines, within the datacenter on the ESXi hosts.
Note: The need to shut down all virtual machines in the datacenter is a limitation of VSA 1.0 only.
- Put the VSA Cluster into maintenance mode.
- Power off the VSA appliances on the ESXi hosts.
- Power down the ESXi hosts.
Q. Is ESXi 5.0 Embedded (non-installable version) or stateless supported with VSA 1.0?
A. ESXi Embedded is installed on a USB or flash drive on the server, typically in environments without a local disk. VSA 1.0 is supported only on ESXi Installable, not on embedded or stateless ESXi.
Q: If I remove VSA Manager, is the VSA cluster service deleted from vCenter Server?
A: Yes, VSA Manager and the VSA cluster service are removed from vCenter Server. If there is an existing VSA Cluster (VSA appliances running on hosts), it is left as it is.
Q: If I have a healthy VSA Cluster and I remove and reinstall VSA Manager on the same vCenter Server, can I recover management over the existing VSA Cluster?
A: Yes. In fact, you do not need to perform the recovery process. When you open the VSA tab, you see the VSA cluster as it was before you removed VSA Manager. Note that reinstalling VSA Manager does not have any effect on an existing VSA Cluster, whether healthy or unhealthy.
Q: Can I install a VSA Cluster on non-supported hardware (non-supported ESXi hosts)?
A: Yes, to some extent. The wizard displays a warning about the unsupported hardware; however, it does not stop you from continuing. If the hosts do not support EVC, the installation fails later when you attempt to create the cluster of ESXi hosts in vCenter Server.
Q: My hardware is on the HCL, but the VSA 1.0 software says that it is unsupported.
A: The software contains a list of the hardware that was supported at the time VSA 1.0 was released. If more supported machines are added after that date, the software is unaware of them and reports them as unsupported. The HCL contains an accurate list of supported hardware.
Q: Can I configure jumbo frames on the back-end network?
A: No. The VSA does not support jumbo or super jumbo Ethernet frame sizes for networking of ESX hosts in a VSA cluster. The CPU resource savings that might be expected from transmitting jumbo frames are already realized with physical and virtual interrupt coalescing for physical (ixgbe) and virtual (vmxnet3) NICs and with TCP Large Receive Offload (LRO) configured on the physical NICs. While some network bandwidth savings could be realized by sending fewer network frames, this benefit is considered minor.
Q: Does the VSA support Virtual Machines running in an FT (Fault Tolerant) configuration?
A: Yes. FT virtual machines can run on VSA datastores; however, an FT virtual machine should not run on an ESXi server that is running the VSA member serving that same datastore. Customers should be aware that during VSA failover and recovery, I/O requests from an FT virtual machine (or any virtual machine) using a VSA datastore may be temporarily suspended.
Q: When a failed VSA node comes back up, data is replicated back to that node. Is only the data that changed while the node was down replicated, or is the volume recreated completely?
A: The VSA's data replication technology includes change tracking at a granularity of 64 MB chunks. This allows the VSA to synchronize only the chunks that were modified while the volume was degraded. Therefore, the volume is not recreated. However, the amount of data synchronized can be greater than the amount that was actually modified.
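Chunk-granular change tracking explains why more data can be resynchronized than was actually written. The sketch below is an assumed model of the idea, not VSA's implementation: any byte range that touches a 64 MB chunk marks the whole chunk dirty, and dirty chunks are resynced in full.

```python
# Illustrative model of 64 MB chunk-granular change tracking
# (an assumption for this sketch, not VSA's actual implementation).

CHUNK = 64 * 1024 * 1024  # 64 MB chunk size from the answer above

def bytes_to_resync(modified_ranges):
    """modified_ranges: list of (offset, length) byte ranges written
    while the volume was degraded. Returns bytes that must be resynced."""
    dirty = set()
    for offset, length in modified_ranges:
        first = offset // CHUNK
        last = (offset + length - 1) // CHUNK
        dirty.update(range(first, last + 1))  # whole chunks go dirty
    return len(dirty) * CHUNK

# A 1 MB write straddling a chunk boundary dirties two 64 MB chunks:
synced = bytes_to_resync([(CHUNK - 512 * 1024, 1024 * 1024)])
print(synced // (1024 * 1024))  # 128 MB resynced for 1 MB modified
```

This shows the point made in the answer: the volume is not recreated, but the resynced amount can exceed the bytes actually modified.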
Q: When is a datastore placed in the degraded state?
A: A datastore is placed in the DEGRADED state whenever a replica of the volume is unavailable, or when the 2 replicas are not in sync. This happens whenever a VSA member goes offline or the communication between the 2 nodes is broken (no network).
Q: How is it possible for a VSA node to enter maintenance when the user hasn't specifically requested it?
A: The VSA automatically enters maintenance mode if it detects that it has rebooted 3 times within 15 minutes. This does not include reboots that were user-requested from the guest OS (though a VSA VM reset done outside of the VSA Manager UI would still not count as user-requested). The purpose of this logic is to stop a cyclic reboot of a VSA, that is, a VSA that has run into a problem that is causing it to continuously reboot itself. To preserve log information about the initial cause of the reboot cycle, the VSA is caught by automatically placing it into maintenance mode. As a result, it does not rejoin the storage cluster, and any affected datastores remain in a DEGRADED state until that VSA is either taken out of maintenance mode or replaced.
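The reboot-loop guard reduces to a sliding-window check. The sketch below uses the thresholds stated in the answer (3 reboots, 15 minutes); the function itself is an illustrative assumption, not VSA code.

```python
# Illustrative sliding-window check for the reboot-loop guard
# (thresholds from the answer above; the code is an assumption).

WINDOW_MIN = 15   # minutes
MAX_REBOOTS = 3   # reboots within the window that trigger the guard

def should_enter_maintenance(reboot_times_min):
    """reboot_times_min: sorted timestamps (in minutes) of reboots
    that were NOT user-requested. Returns True if any MAX_REBOOTS
    consecutive reboots fall within WINDOW_MIN minutes."""
    for i in range(len(reboot_times_min) - MAX_REBOOTS + 1):
        window = reboot_times_min[i + MAX_REBOOTS - 1] - reboot_times_min[i]
        if window <= WINDOW_MIN:
            return True
    return False

print(should_enter_maintenance([0, 5, 12]))   # True: 3 reboots in 12 min
print(should_enter_maintenance([0, 20, 40]))  # False: reboots spaced out
```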
Q: I configured a virtual machine running a SQL server on the VSA datastore. The HA boot sequence is configured to start VSA first and then the virtual machine. There was a power outage and on start up, both servers start at the same time. VSA starts but only offers its datastore service after it has finished booting completely, and the ESXi host finished booting up before VSA. The ESXi host then tries to power on the virtual machine but the datastore is not yet available. What happens in this situation? Will ESXi eventually detect that the VSA datastore is available and start the virtual machine? Will the virtual machine become non-responsive until we rescan the datastore?
A: HA retries the virtual machine power on for up to an hour with an exponential back-off (with the first retry attempted after one minute). A total of seven attempts will be made within 64 minutes, and if the virtual machine cannot be powered on after that, HA abandons the attempts and a permanent error is posted.
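The retry schedule described in the answer can be modelled as exponential back-off. This sketch assumes the simplest schedule consistent with the text (first retry after one minute, each subsequent delay doubling); the exact HA timings are an assumption here.

```python
# Illustrative model of the HA power-on retry schedule described
# above (assumed doubling back-off starting at 1 minute).

def ha_retry_schedule(max_total_min=64):
    """Minutes after the initial attempt at which retries occur,
    stopping once the next retry would exceed max_total_min."""
    times, delay, t = [], 1, 0
    while t + delay <= max_total_min:
        t += delay
        times.append(t)
        delay *= 2  # exponential back-off
    return times

retries = ha_retry_schedule()
print(retries)           # [1, 3, 7, 15, 31, 63]
print(1 + len(retries))  # 7 attempts in total within 64 minutes
```

Under these assumptions the initial attempt plus six retries give seven attempts inside the 64-minute window, matching the answer.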
Source: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2001389