Author: tmahanta

APD and PDL

Permanent Device Loss (PDL):
>> A datastore is shown as unavailable in the Storage view.
>> A storage adapter indicates the Operational State of the device as Lost Communication.

All-Paths-Down (APD):
>> A datastore is shown as unavailable in the Storage view.
>> A storage adapter indicates the Operational State of the device as Dead or Error.

In vSphere 4.x, an All-Paths-Down (APD) situation occurs when all paths to a device are down. Because there is no indication of whether the device loss is permanent or temporary, the ESXi host keeps reattempting to establish connectivity. APD-style situations commonly occur when a LUN is incorrectly unpresented from the ESXi/ESX host. The host, still believing the device is available, retries all SCSI commands indefinitely. This has an impact on the management agents, because their commands receive no response until the device becomes accessible again, which in turn causes the ESXi/ESX host to appear inaccessible or not-responding in vCenter Server.

In vSphere 5.x/6.x, a clear distinction is made between a device that is permanently lost (PDL) and a transient condition in which all paths are down (APD) for an unknown reason. For example, if the storage device logs a SCSI sense code of H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0 (Logical Unit Not Supported) to the ESXi 5.x/6.x host in the VMkernel logs, this indicates that the device is permanently inaccessible to the ESXi host, or...
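
The Operational State strings above are exactly what the vSphere API exposes per SCSI device, so a quick inventory script can separate PDL-like devices (lostCommunication) from APD-like ones (error, timeout) across all hosts. A minimal pyVmomi sketch, assuming placeholder vCenter hostname and credentials:

```python
# Minimal pyVmomi sketch; the vCenter host, user, and password below are
# placeholders. Lists every SCSI device whose operational state is not "ok".
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            states = list(lun.operationalState)  # e.g. ["ok"], ["lostCommunication"]
            if states != ["ok"]:
                # "lostCommunication" suggests PDL; "error"/"timeout" suggest APD
                print(host.name, lun.canonicalName, states)
    view.Destroy()
finally:
    Disconnect(si)
```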

Filtering Virtual Machine I/O

I/O filters are software components that can be installed on ESXi hosts and offer additional data services to virtual machines. The filters process I/O requests that move between the guest operating system of a virtual machine and its virtual disks. I/O filters can be offered by VMware or created by third parties through vSphere APIs for I/O Filtering (VAIO). I/O filters gain direct access to the virtual machine I/O path, and you can enable a filter at the level of an individual virtual disk. The I/O filters are independent of the storage topology.

VMware offers certain categories of I/O filters. In addition, third-party vendors can create I/O filters. Typically, they are distributed as packages that provide an installer to deploy the filter components on vCenter Server and ESXi host clusters. After the I/O filters are deployed, vCenter Server configures and registers an I/O filter storage provider, also called a VASA provider, for each host in the cluster. The storage providers communicate with vCenter Server and make the data services offered by the I/O filter visible in the VM Storage Policies interface. You can reference these data services when defining common rules for a VM policy. After you associate virtual disks with this policy, the I/O filters are enabled on those virtual disks.

Datastore Support

I/O filters can support all datastore types, including the following:
>> VMFS
>> NFS 3
>> NFS 4.1
>> Virtual Volumes (VVol)
>> vSAN

Types of I/O Filters

VMware provides...
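
To verify that a filter package actually landed on a cluster after deployment, the vSphere API (6.0 and later) exposes an IoFilterManager that can be queried per cluster. A minimal pyVmomi sketch, assuming placeholder connection details; the printed fields follow the IoFilterInfo properties in the API reference:

```python
# Minimal pyVmomi sketch; vCenter hostname and credentials are placeholders.
# Queries IoFilterManager for the I/O filters installed on each cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    iofm = si.content.ioFilterManager  # vim.IoFilterManager, vSphere API 6.0+
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        for f in iofm.QueryIoFilterInfo(cluster):
            # id/name/vendor/version are IoFilterInfo properties
            print(cluster.name, f.id, f.name, f.vendor, f.version)
    view.Destroy()
finally:
    Disconnect(si)
```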

SRM advanced parameters that can be used for troubleshooting

VMware vCenter Site Recovery Manager (SRM) has a default timeout of 300 seconds for the elapsed time of SRA commands (such as discoverDevices and discoverArrays). If the requested information is not passed back from the SRA within five minutes, SRM flags a timeout and terminates the command.

Example error:

"Timed out (300 seconds) while waiting for SRA to complete '<commandtype>' command"

Resolution:

To resolve this issue, increase the SRM timeout value for SRA commands:
>> Log in to the vSphere Web Client and click the Site Recovery Manager plugin.
>> Click Sites in the left pane.
>> Click a site, go to Advanced Settings, and click Storage.
>> To change the SRA timeout, enter a value in the storage.commandTimeout field greater than its current value (for example, 600 or 900).
>> Perform the test recovery again.

Advanced settings worth tuning (setting, default value, my value, description):
>> Recovery.powerOffTimeout (default 300, my value 600): Change the timeout for the guest OS to power off.
>> Recovery.powerOnTimeout (default 120, my value 300): Change the timeout to wait for VMware Tools when powering on virtual machines.
>> StorageProvider.fixRecoveredDatastoreNames (default not checked, my value checked): Force removal, upon successful completion of a recovery, of the snap-xx prefix applied to recovered datastore names.
>> StorageProvider.hostRescanRepeatCount (default 1, my value 3): Repeat host scans during testing and recovery.
>> StorageProvider.hostRescanTimeoutSec (default 300, my value 600): Change the interval that Site Recovery Manager waits for each HBA rescan to complete.
>> Storage.commandTimeout (default 300, my value 600): Change the timeout in seconds for executing an SRA...
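
Before raising storage.commandTimeout, it helps to confirm which SRA command is actually hitting the 300-second ceiling. A minimal Python sketch that scans an SRM log for the error string quoted above; the log path you pass in is an assumption and depends on your SRM release:

```python
# Minimal log-scan sketch; pass the SRM log file path as the first argument
# (the exact path varies by SRM release, so it is left to the caller).
import re
import sys

PATTERN = re.compile(
    r"Timed out \((\d+) seconds\) while waiting for SRA "
    r"to complete '([^']+)' command")

def find_sra_timeouts(log_path):
    """Return (seconds, command) tuples for every SRA timeout in the log."""
    hits = []
    with open(log_path, errors="replace") as log:
        for line in log:
            m = PATTERN.search(line)
            if m:
                hits.append((int(m.group(1)), m.group(2)))
    return hits

if __name__ == "__main__":
    for seconds, command in find_sra_timeouts(sys.argv[1]):
        print("SRA command '%s' timed out after %ds; consider raising "
              "storage.commandTimeout above %d" % (command, seconds, seconds))
```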

ATS (Atomic Test and Set)

ATS-Only Mechanism

For storage devices that support T10 standard-based VAAI specifications, VMFS provides ATS locking, also called hardware-assisted locking. The ATS algorithm supports discrete locking per disk sector. All newly formatted VMFS5 and VMFS6 datastores use the ATS-only mechanism if the underlying storage supports it, and never use SCSI reservations. When you create a multi-extent datastore where ATS is used, vCenter Server filters out non-ATS devices. This filtering allows you to use only those devices that support the ATS primitive. In certain cases, you might need to turn off the ATS-only setting for a VMFS5 or VMFS6 datastore.

ATS+SCSI Mechanism

A VMFS datastore that supports the ATS+SCSI mechanism is configured to use ATS and attempts to use it when possible. If ATS fails, the VMFS datastore reverts to SCSI reservations. In contrast with ATS locking, SCSI reservations lock an entire storage device while an operation that requires metadata protection is performed. After the operation completes, VMFS releases the reservation and other operations can continue. Datastores that use the ATS+SCSI mechanism include VMFS5 datastores that were upgraded from VMFS3. In addition, new VMFS5 or VMFS6 datastores on storage devices that do not support ATS use the ATS+SCSI mechanism.

Change Locking Mechanism to ATS+SCSI

When you create a VMFS5 datastore on a device that supports atomic test and set (ATS) locking, the datastore uses the ATS-only locking mechanism. In certain...
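
To inspect or change the locking mechanism described above, the documented esxcli storage vmfs lockmode namespace covers both operations. A minimal sketch intended for the ESXi shell, which ships a Python interpreter; the Python 3.5+ assumption and the datastore1 volume label are placeholders:

```python
# Minimal sketch for the ESXi shell (assumptions: Python 3.5+ as shipped on
# recent ESXi builds, and "datastore1" as a placeholder volume label).
import subprocess

def esxcli(*args):
    """Run an esxcli command and return its stdout; raises if esxcli fails."""
    result = subprocess.run(["esxcli"] + list(args), stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True,
                            check=True)
    return result.stdout

# Show the current locking mode (ATS-only vs ATS+SCSI) of each VMFS datastore.
print(esxcli("storage", "vmfs", "lockmode", "list"))

# Switch one datastore from ATS-only to ATS+SCSI, so that VMFS falls back to
# SCSI reservations whenever an ATS operation fails.
print(esxcli("storage", "vmfs", "lockmode", "set", "--scsi",
             "--volume-label=datastore1"))
```

Check the vSphere documentation for the preconditions that apply before changing the lock mode of a mounted datastore.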

Boot from SAN

Configure SAN Components and Storage System

Before you set up your ESXi host to boot from a SAN LUN, configure the SAN components and the storage system. Because configuring the SAN components is vendor specific, refer to the product documentation for each item.

Procedure
>> Connect the network cables, referring to any cabling guide that applies to your setup. Check the switch wiring, if any.
>> Configure the storage array.
      ++ From the SAN storage array, make the ESXi host visible to the SAN. This process is often referred to as creating an object.
      ++ From the SAN storage array, set up the host to have the WWPNs of the host's adapters as port names or node names.
      ++ Create LUNs.
      ++ Assign LUNs.
      ++ Record the IP addresses of the switches and storage arrays.
      ++ Record the WWPN for each SP.

Configure Storage Adapter to Boot from SAN

When you set up your host to boot from SAN, you enable the boot adapter in the host BIOS. You then configure the boot adapter to initiate a primitive connection to the target boot LUN.

Prerequisites
>> Determine the WWPN for the storage adapter.

Procedure
>> Configure the storage adapter to boot from SAN. Because configuring boot adapters is vendor specific, refer...
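
Recording the WWPN of each adapter, a prerequisite above, is scriptable: Fibre Channel HBAs expose portWorldWideName and nodeWorldWideName through the vSphere API. A minimal pyVmomi sketch, assuming placeholder vCenter details:

```python
# Minimal pyVmomi sketch; vCenter hostname and credentials are placeholders.
# Prints the WWPN/WWNN of every Fibre Channel adapter per host, the values
# you record for array-side host objects and boot adapter configuration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wwn_str(wwn):
    """Render a WWN long as the usual aa:bb:cc:dd:ee:ff:00:11 notation."""
    hex16 = format(wwn, "016x")
    return ":".join(hex16[i:i + 2] for i in range(0, 16, 2))

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.FibreChannelHba):
                print(host.name, hba.device,
                      "WWPN", wwn_str(hba.portWorldWideName),
                      "WWNN", wwn_str(hba.nodeWorldWideName))
    view.Destroy()
finally:
    Disconnect(si)
```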
