Tuesday, 29 March 2016

Local SAS drive appears as a remote storage

The ESX/ESXi installer always marks Fibre Channel storage as remote and Serial ATA (SATA) disks as local.
A Serial Attached SCSI (SAS) card is marked as local only if the installer finds its corresponding identifier in the installation driver file. Otherwise, it assumes that the SAS card is remote.

Though the SAS device appears as remote, you can still select and use it. You can also mark such disks as local through the vSphere Web Client GUI. Refer to the screenshot below to see how to mark a disk as local:-


Refer to VMware KB 1027819 for more information.
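As an alternative to the Web Client, the same change can be made from the ESXi shell with a SATP claim rule. This is a hedged sketch: the device identifier below is a placeholder, so substitute your own from `esxcli storage core device list`.

```shell
# Hypothetical device ID -- replace with your own SAS device identifier
DEVICE="naa.600508b1001c4d41"

# Add a claim rule telling the VMW_SATP_LOCAL plugin to treat the device as local
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=$DEVICE --option="enable_local"

# Reclaim the device so the new rule takes effect (a reboot also works)
esxcli storage core claiming reclaim --device=$DEVICE

# Verify: the "Is Local" field should now read true
esxcli storage core device list --device=$DEVICE | grep "Is Local"
```

These commands only run on an ESXi host, so treat them as a reference, not a copy-paste script.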

Sunday, 27 March 2016

S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology)

S.M.A.R.T. was designed by IBM to monitor disk status using various methods and devices (sensors). A single ATA disk may have up to 30 such measured values, which are called attributes. Some of them directly or indirectly affect hard disk health, while others provide statistical information. Nowadays all modern IDE/Serial ATA/SCSI hard disks have the S.M.A.R.T. feature. Because it is not a formal standard, the attributes may differ from manufacturer to manufacturer. ESXi supports disk drives that are enabled with Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.).


A single S.M.A.R.T. attribute has the following fields:

Identifier (byte): the meaning of the attribute. Many attributes have standard meanings (for example, 5 = number of reallocated sectors, 194 = temperature, etc.). Most applications provide a name and a textual description for each attribute.
Data (6 bytes): raw measured values, provided by a sensor or a counter, are stored in this field. This data is then processed by an algorithm designed by the hard disk manufacturer. Sometimes different parts of this value (for example, the low, middle, and high 16 bits) contain different kinds of information.
Threshold (byte): the (failure) limit value for the attribute.
Value (byte): the current relative "health" of the attribute. This number is calculated by the algorithm from the raw data (see above). On a new hard disk this number is high (a theoretical maximum, for example 100, 200 or 253), and it decreases over the lifetime of the disk.
Worst (byte): the worst (smallest) value ever recorded during the lifetime of the hard disk.
Status flags: indicate the main purpose of the attribute. An attribute can be, for example, critical (able to predict failure) or statistical (it does not directly affect disk condition).
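To make these fields concrete, here is a toy decoding of one attribute entry in the shell. The 12-byte layout and the sample bytes are assumptions for illustration (on real disks the threshold byte lives in a separate table, and vendors vary):

```shell
# Sample 12-byte S.M.A.R.T. attribute entry (hypothetical bytes, hex):
# id(1) flags(2, little-endian) value(1) worst(1) raw(6) reserved(1)
entry="c2 32 00 24 18 2d 00 00 00 00 00 00"
set -- $entry
id=$((16#$1))                      # 0xC2 = 194 (temperature)
flags=$((16#$3 * 256 + 16#$2))     # status flags, little-endian
value=$((16#$4))                   # current normalized "health"
worst=$((16#$5))                   # worst normalized value seen
raw=$((16#$6))                     # low raw byte often carries the temperature
echo "id=$id flags=$flags value=$value worst=$worst raw=$raw"
# prints: id=194 flags=50 value=36 worst=24 raw=45
```

So this sample would describe the temperature attribute with a current normalized value of 36, a worst-ever value of 24, and a raw reading of 45 (degrees Celsius, on many drives).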



If you want to know more about S.M.A.R.T., refer to these links:-
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2040405

http://www.hdsentinel.com/smart/index.php 

http://www.hdsentinel.com/smart/smartattr.php

Friday, 25 March 2016

Cloud Security with vCloud Air


VMware YouTube Link:-
https://www.youtube.com/watch?v=bcvlw1jUp68&list=PL9MeVsU0uG65QTHD56lbe2V5U82RMCtdZ&index=2

vCloud Air Integration with vCloud Connector and vRA Demo


VMware YouTube Link:-
https://www.youtube.com/watch?v=KvvvtyC7Kd4

Thursday, 24 March 2016

vCloud Air Overview


VMware YouTube Link:-
https://www.youtube.com/watch?v=Ztj63zbU9Pg

Overview of VMware vCloud Air Disaster Recovery

VMware YouTube Link:-
https://www.youtube.com/watch?v=gOBPkanf9cY

East-West and North-South Traffic

East-West Traffic:- Traffic that flows within the data center, between servers in the same data center, is known as East-West traffic. Server-to-server traffic is one example.

North-South Traffic:- Traffic that enters and exits the data center is known as North-South traffic; that is, traffic going out to the internet or coming in from the internet. Client-server traffic is one example.
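The distinction can be sketched as a toy classifier that checks whether both endpoints sit inside the data center's address space. The 10.0.0.0/8 prefix and the sample addresses are assumptions for illustration:

```shell
# Toy classifier: a flow is east-west when both endpoints are inside the
# data center prefix, north-south otherwise. 10.0.0.0/8 is an assumed prefix.
classify() {
  case "$1" in 10.*) s=in ;; *) s=out ;; esac
  case "$2" in 10.*) d=in ;; *) d=out ;; esac
  if [ "$s" = in ] && [ "$d" = in ]; then
    echo east-west
  else
    echo north-south
  fi
}

classify 10.0.1.5 10.0.2.9     # server to server inside the DC: east-west
classify 203.0.113.7 10.0.1.5  # internet client to a DC server: north-south
```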

VMware Site Recovery Manager 6.1 Enhancements

With SRM 6.1, VMware announced an RPO of 5 minutes when using VSAN datastores.
Enhancements in SRM 6.1:-
There are a lot of enhancements in SRM 6.1. Let's start with a brief overview of these enhancements; in the later part of this post I will discuss them in more detail:-
1. Storage Policy-Based Protection Groups
2. Stretched Storage Support & Orchestrated vMotion
3. Enhanced Integration with NSX
4. SRM Air

Storage Policy-Based Protection Groups
Storage policy-based protection groups utilize vSphere tags in combination with vSphere storage policy-based management to enable automated, policy-based protection for virtual machines. Storage policy-based management enables vSphere administrators to automate the provisioning and management of virtual machine storage to meet requirements like performance, availability and protection. vSphere tags allow metadata to be attached to vSphere inventory objects, in this case datastores, which makes these objects more sortable, searchable and possible to associate with storage policies.

 Here is how tags and storage-policy based management are used together with storage policy-based protection groups: 

• A tag is created and associated with all the datastores in each desired protection group 
• A tag based storage policy is created for each protection group utilizing the tag 
• A storage policy-based protection group is created and associated with the storage policy 

When any virtual machine, new or existing, is associated with that policy and placed on the replicated datastore, Site Recovery Manager protection is automatic. If a virtual machine is disassociated from that policy and/or moved off the datastore it is automatically unprotected. The same happens for datastores and the virtual machines on them. Leveraging storage profiles to identify protected resources saves time and reduces cost and complexity by eliminating the previously manual operations required to protect and unprotect VMs, and to add and remove datastores from protection groups.

Stretched Storage Support & Orchestrated vMotion
Prior to Site Recovery Manager 6.1 customers had to make a choice between using Site Recovery Manager or vSphere Metro Storage Clusters/Stretched Storage to provide a multisite solution that was optimized for either site mobility or disaster recovery without being able to attain the benefits of both solutions simultaneously. Site Recovery Manager 6.1 now supports using cross-vCenter vMotion in combination with stretched storage, thereby combining the benefits of Site Recovery Manager with the advantages of stretched storage.
Graphic thanks to VMware

The integration of stretched storage with Site Recovery Manager 6.1 allows customers to achieve what was previously only possible with vSphere Metro Storage Clusters: 
• Planned maintenance downtime avoidance – Orchestrated cross-site vMotion and recovery plans allow for workload migration transparent to app owners or end users 
• Zero-downtime disaster avoidance – Utilizing the ability to live migrate workloads using cross-site vMotion and the planned migration workflow in Site Recovery Manager 6.1, customers can avoid downtime instead of recovering from it

Enhanced Integration with NSX

Graphic thanks to VMware
Networking is typically one of the more complex and cumbersome aspects of a disaster recovery plan. Ensuring that the proper networks, firewall rules and routing are configured correctly and available can be quite challenging. Making an isolated test network with all the same capabilities can be even more so. Additionally, solutions like cross-vCenter vMotion require a stretched layer-2 network, which can create even more difficulty. NSX 6.2 has a number of new features which enhance Site Recovery Manager. This means that organizations can now use NSX and Site Recovery Manager to simplify the creation, testing and execution of recovery plans as well as accelerate recovery times. NSX 6.2 supports creating "Universal Logical Switches", which allow for the creation of layer-2 networks that span vCenter boundaries. This means that when utilizing Universal Logical Switches with NSX, there will be a virtual port group at both the protected and recovery site that connects to the same layer-2 network.

SRM Air
SRM Air is a software-as-a-service offering from VMware, which brings the management and automation benefits of Site Recovery Manager to VMware vCloud Air. SRM Air is fully integrated with vCloud Air Disaster Recovery to provide self-service protection and centralized recovery plans for orchestrated migration, failover, and fail-back between on-premises data centers and vCloud Air. 
Graphic thanks to VMware

For More Info:-
https://blogs.vmware.com/virtualblocks/2015/09/02/introduction-to-vmware-site-recovery-manager-air/
https://www.vmware.com/files/pdf/products/SRM/vmware-site-recovery-manager-whats-new.pdf
https://www.youtube.com/watch?v=EEK5LfS-waE

Thursday, 17 March 2016

Four New Courses on Some of Our Most Powerful Technologies

Start strengthening your vRealize Operations, App Volumes, Virtual SAN, and NSX skills today.

1. Designed for experienced VMware vSphere® users, VMware vRealize Operations Manager: Install, Configure, Manage [V6.2] teaches you how to use VMware vRealize® Operations Manager™ as a forensic and predictive tool. Includes content about advanced capabilities, customization and management.

2. This new VMware App Volumes Application and User Profile Management [V2.X] course builds application management skills — from installation to update and replacement. Learn how to deliver applications and data to desktops and users in seconds and at scale.

3. During VMware Virtual SAN: Deploy & Manage [V6.2] you will learn how to deploy and manage a software-defined storage solution. See why Virtual SAN is such a vital component of the VMware software-defined data center (SDDC).

4. The VMware NSX: Install, Configure, Manage [V6.2] - On Demand course covers a lot of ground, including how to: virtualize your switching environment, dynamically route between different virtual environments, and secure and optimize your NSX environment.

For more information — or for help developing a learning plan for yourself or your team — please contact the Education Specialist for your area or Live Chat Now*.

Some of these courses have Beta offerings available, so you could save 50% while helping finalize a nearly complete course.

* Live chat available in North America only.

Source:-
VMware Education 

Wednesday, 16 March 2016

The Virtual SAN host cannot be moved to the destination cluster: Virtual SAN cluster UUID mismatch

Today one of my clients faced this issue in his environment while trying to move nodes from one VSAN cluster to another. He had deleted the disk groups, but then deleted the cluster itself without first removing the hosts from it, and re-added those hosts to vCenter. When he then tried to add these ESXi hosts to the other cluster, he got the message "The Virtual SAN host cannot be moved to the destination cluster: Virtual SAN cluster UUID mismatch". A screenshot of this error is given below:-


Primary Checks:-
1. In the Storage inventory view, verify whether you still have visibility of the VSAN datastore of that deleted VSAN cluster.
2. If yes, connect to the ESXi Shell and execute this command to check whether the host still holds the old cluster information:-
esxcli vsan cluster get
3. If it does, apply the solution given below.

Solution:-
1. SSH to each ESXi host that was a member of the old cluster, one by one.
2. Execute this command to leave the VSAN cluster:-
esxcli vsan cluster leave
3. Now try to add these ESXi hosts to the other VSAN cluster and it will work.
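The per-host procedure above can be sketched as a small loop. The host names and the use of root SSH are assumptions for illustration; substitute your own hosts:

```shell
# Placeholder host list -- substitute the ESXi hosts that were members of the old cluster
HOSTS="esxi01.lab.local esxi02.lab.local esxi03.lab.local"

for h in $HOSTS; do
  echo "Leaving old VSAN cluster on $h"
  # Show the stale membership first, then leave the cluster
  ssh root@"$h" "esxcli vsan cluster get; esxcli vsan cluster leave"
done
```

This requires SSH enabled on each host; it is a reference sketch, not a tested automation script.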

Monday, 14 March 2016

Absent vs. Degraded Status in VSAN 6.0

An ABSENT state reflects a transient situation that may or may not resolve itself over time, while a DEGRADED state is permanent.

Note: If a failure in a physical hardware component is detected, such as a Solid State Disk (SSD) or Magnetic Disk (MD), VSAN immediately responds by rebuilding the affected disk objects. See the example below comparing a disk failure with a disk being ejected:-




To change the repair delay time using the VMware vSphere Web Client, perform these steps on each ESXi host in the VSAN cluster:

1. Log in with admin credentials to the VMware vCenter Server using the vSphere Web Client.

2. Select the VSAN Cluster and highlight the ESXi host > Manage > Settings.

3. Select Advanced System Settings > VSAN.ClomRepairDelay.

4. Click Edit.

5. Modify VSAN.ClomRepairDelay value in minutes as required.

6. Restart the Cluster Level Object Manager (CLOM) service clomd to apply the changes by running this command:

/etc/init.d/clomd restart

Note: Restarting the clomd service briefly interrupts CLOM operations. The length of the outage should be less than one second. However, if a virtual machine is being provisioned at the time the clomd service is restarted, that provisioning task may fail.

7. Apply steps 1-6 to each ESXi host in the VSAN cluster.
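The same setting can also be changed from the ESXi shell instead of the Web Client. This is a hedged sketch, with 90 minutes as an example value (the default is 60):

```shell
# Set the VSAN object repair delay to 90 minutes (example value; default is 60)
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 90

# Confirm the new value
esxcli system settings advanced list -o /VSAN/ClomRepairDelay

# Restart clomd so the Cluster Level Object Manager picks up the change
/etc/init.d/clomd restart
```

As with the GUI procedure, repeat this on each ESXi host in the VSAN cluster, and remember that restarting clomd briefly interrupts CLOM operations.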