Tuesday, 19 November 2013
This post lists the privileges that the different user accounts have for vCLI usage against different targets.
Thanks to VMware Documentation
As part of its configuration, vMA creates a vi-user account with no password. However, you cannot use the vi-user account until you have specified a vi-user password.
Important The vi-user account has limited privileges on the target ESXi hosts and cannot run any commands that require sudo execution. You cannot use vi-user to run commands for Active Directory targets (ESXi or vCenter Server). To run commands for the Active Directory targets, use the vi-admin user or log in as an Active Directory user to vMA.
1. Log in to vMA as vi-admin.
2. Run the Linux passwd command for vi-user as follows:
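Per the vMA documentation, the password is set with sudo from the vi-admin session:

```shell
# Set a password for vi-user (run as vi-admin; requires sudo)
sudo passwd vi-user
```

You are prompted to enter and confirm the new vi-user password interactively.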
If this is the first time you have used sudo on vMA, a message about root user privileges appears, and you are prompted for the vi-admin password.
After the vi-user account is enabled on vMA, it has normal privileges on vMA but is not in the sudoers list.
vi-user has read-only privileges on the target system. vMA creates vi-user on each target that you add, even if vi-user is not currently enabled on vMA.
When a user is logged in to vMA as vi-user, vMA uses that account on the target ESXi hosts, and the user can run only those commands on the target ESXi hosts that do not require administrative privileges.
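As an illustration of how targets come into play, hosts are added with vifp and initialized with vifptarget before vCLI commands run against them; a minimal sketch, with the host name as a hypothetical placeholder:

```shell
# Add an ESXi host as a vMA target (run as vi-admin; host name is an example)
vifp addserver esxi01.example.com

# List the configured targets
vifp listservers

# Initialize the target for the current session
vifptarget --set esxi01.example.com

# vCLI commands now run against the target; when logged in as vi-user,
# only commands that do not require administrative privileges succeed
esxcli system version get
```

vMA creates the vi-user account on each target added this way, which is why the read-only behavior applies per target host.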
Sunday, 17 November 2013
| Display | Metric | Threshold | Explanation |
|---------|--------|-----------|-------------|
| CPU | %RDY | 10 | Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). See Jason's explanation for vSMP VMs. |
| CPU | %CSTP | 3 | Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this should lead to increased scheduling opportunities. |
| CPU | %SYS | 20 | The percentage of time spent by system services on behalf of the world. Most likely caused by a high-I/O VM. Check other metrics and the VM for a possible root cause. |
| CPU | %MLMTD | 0 | The percentage of time the vCPU was ready to run but was deliberately not scheduled because that would violate the "CPU limit" settings. If larger than 0, the world is being throttled by its CPU limit. |
| CPU | %SWPWT | 5 | VM waiting on swapped pages to be read from disk. Possible cause: memory overcommitment. |
| MEM | MCTLSZ | 1 | If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted. |
| MEM | SWCUR | 1 | If larger than 0, the host has swapped memory pages in the past. Possible cause: overcommitment. |
| MEM | SWR/s | 1 | If larger than 0, the host is actively reading from swap (vswp). Possible cause: excessive memory overcommitment. |
| MEM | SWW/s | 1 | If larger than 0, the host is actively writing to swap (vswp). Possible cause: excessive memory overcommitment. |
| MEM | CACHEUSD | 0 | If larger than 0, the host has compressed memory. Possible cause: memory overcommitment. |
| MEM | ZIP/s | 0 | If larger than 0, the host is actively compressing memory. Possible cause: memory overcommitment. |
| MEM | UNZIP/s | 0 | If larger than 0, the host is accessing compressed memory. Possible cause: the host was previously overcommitted on memory. |
| MEM | N%L | 80 | If less than 80, the VM experiences poor NUMA locality. If a VM's memory size is greater than the amount of memory local to each processor, the ESX scheduler does not attempt NUMA optimizations for that VM and uses memory "remotely" via the interconnect. Check GST_ND(X) to find out which NUMA nodes are used. |
| NETWORK | %DRPTX | 1 | Dropped transmitted packets; hardware overworked. Possible cause: very high network utilization. |
| NETWORK | %DRPRX | 1 | Dropped received packets; hardware overworked. Possible cause: very high network utilization. |
| DISK | GAVG | 25 | Look at DAVG and KAVG, as GAVG is the sum of both. |
| DISK | DAVG | 25 | Disk latency most likely caused by the array. |
| DISK | KAVG | 2 | Disk latency caused by the VMkernel; high KAVG usually means queuing. Check QUED. |
| DISK | QUED | 1 | Queue maxed out. Possibly the queue depth is set too low. Check with the array vendor for the optimal queue depth value. |
| DISK | ABRTS/s | 1 | Aborts issued by the guest (VM) because storage is not responding; for Windows VMs this happens after 60 seconds by default. Can be caused, for instance, by failed paths or an array that is not accepting I/O. |
| DISK | RESETS/s | 1 | The number of commands reset per second. |
| DISK | CONS/s | 20 | SCSI reservation conflicts per second. If many SCSI reservation conflicts occur, performance can be degraded due to locking on the VMFS. |
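To capture these metrics over time rather than watching them interactively, esxtop (or resxtop from vMA/vCLI) can run in batch mode; a minimal sketch, with the delay and iteration counts as example values and the host name as a placeholder:

```shell
# Run esxtop in batch mode on the ESXi host:
# -b batch mode, -d delay between samples in seconds, -n number of iterations.
# Output is CSV, suitable for perfmon or a spreadsheet.
esxtop -b -d 2 -n 100 > esxtop-capture.csv

# The same capture from vMA against a remote host via resxtop
# (host name is an example placeholder)
resxtop --server esxi01.example.com -b -d 2 -n 100 > esxtop-capture.csv
```

Reviewing a batch capture makes it easier to compare the columns above against their thresholds over a sustained period instead of a single snapshot.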
Saturday, 16 November 2013
You can use the advanced memory attributes to customize memory resource usage.
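These attributes can be changed per host from the vSphere Client or from the command line; a sketch using esxcli, with Mem.ShareScanGHz chosen purely as an example attribute and 4 as an example value:

```shell
# Show the current value of an advanced memory attribute
# (Mem.ShareScanGHz is one example of the Mem.* family)
esxcli system settings advanced list -o /Mem/ShareScanGHz

# Set it to a new integer value (example value; verify the impact first,
# as VMware recommends changing advanced attributes only when necessary)
esxcli system settings advanced set -o /Mem/ShareScanGHz -i 4
```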
ESXi with NPIV supports the following items:
NPIV supports vMotion. When you use vMotion to migrate a virtual machine, it retains the assigned WWN.