– Identify storage adapters and devices
Supported Storage Adapters in the vSphere Storage Guide on page 20.
ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet.
ESXi accesses adapters directly through device drivers in the VMkernel.
View installed Storage Adapters in the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Adapters
View discovered Storage Devices in the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Devices
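The same information is available from the ESXi Shell; a minimal sketch (output is abbreviated in practice):
# List installed storage adapters (HBAs) and the drivers that claim them
esxcli storage core adapter list
# List all discovered storage devices
esxcli storage core device list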
– Identify storage naming conventions
Understanding Storage Device Naming in the vSphere Storage Guide on page 123.
- SCSI INQUIRY Device Identifiers
naa.XXXXXXXXX, t10.XXXXXXXXX, eui.XXXXXXXXX
Device identifiers are unique across all hosts and persistent across reboots.
- Path-based Identifiers
vmhbaV:Cx:Ty:Lz
V = HBA number, x = Channel, y = Target, z = LUN
Path-based identifiers are not unique and are not persistent; they can change after every reboot.
- Legacy Identifier
vml.XXXXXXXXXX
- Device Display Name
A friendly name the ESXi host assigns to the device based on the storage type and manufacturer.
The display name can be changed.
To display device names using the vSphere CLI:
esxcli storage core device list
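To see the runtime (path-based) names alongside the persistent device identifiers, a sketch using the ESXi Shell:
# Lists every path with its vmhbaV:Cx:Ty:Lz runtime name and the device it leads to
esxcli storage core path list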
Fibre Channel targets use World Wide Names (WWN)
- World Wide Port Names (WWPN)
- World Wide Node Names (WWNN)
iSCSI Naming Conventions in the vSphere Storage Guide on page 64.
iSCSI Naming Conventions:
- iqn.yyyy-mm.com.domain:uniquename
yyyy-mm is the year and month the naming authority was established
com.domain – internet domain name of naming authority in reverse syntax
uniquename – a unique name
- eui.16hexdigits, e.g. eui.0123456789ABCDEF
– Identify hardware/dependent hardware/software iSCSI initiator requirements
Internet SCSI (iSCSI) in the vSphere Storage Guide on page 16.
- Hardware iSCSI
Host connects to storage through an HBA capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent.
- Software iSCSI
Host uses a software-based iSCSI initiator in the VMkernel to connect to storage.
Set Up Independent Hardware iSCSI Adapters in the vSphere Storage Guide on page 71.
An independent hardware iSCSI adapter is a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI adapter handles all iSCSI and network processing and management for your ESXi system.
Network configuration for Independent Hardware iSCSI adapters is done on the HBA. No VMkernel interface is required.
About Dependent Hardware iSCSI Adapters in the vSphere Storage Guide on page 74.
A dependent hardware iSCSI adapter is a third-party adapter that depends on VMware networking, and iSCSI configuration and management interfaces provided by VMware.
About the Software iSCSI Adapter in the vSphere Storage Guide on page 81.
The software-based iSCSI implementation uses standard NICs to connect a host to a remote iSCSI target on the IP network.
Dependent hardware and software iSCSI adapters require a VMkernel network interface to be configured.
Software iSCSI Adapters and Dependent iSCSI adapters support IPv4 and IPv6.
Multipathing is not supported when you combine independent Hardware iSCSI adapters with Software or Dependent iSCSI adapters.
Bidirectional CHAP is only supported with Software and Dependent Hardware iSCSI adapters. Independent hardware iSCSI adapters do not support bidirectional CHAP.
– Compare and contrast array thin provisioning and virtual disk thin provisioning
Two models of thin provisioning:
- array-level
- virtual disk-level
Virtual Disk Thin Provisioning in the vSphere Storage Guide on page 253.
Thin provisioned vmdks report their allocated size to the guest but only consume space as data is written to the vmdk.
A thin provisioned vmdk can grow to consume its allocated size.
Thin provisioned vmdks allow for over-provisioning of datastores.
Thin provisioned vmdks save space by only consuming the space needed to store data.
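As an illustration from the ESXi Shell, an existing virtual disk can be cloned into thin format with vmkfstools (the datastore and file paths are hypothetical):
# Clone an existing vmdk into a thin-provisioned copy
vmkfstools -i /vmfs/volumes/DS1/vm1/vm1.vmdk -d thin /vmfs/volumes/DS1/vm1/vm1-thin.vmdk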
Array Thin Provisioning and VMFS Datastores in the vSphere Storage Guide on page 257.
Thin-provisioned LUNs report the LUN's logical size, which may be larger than the physical capacity of the storage backing the LUN.
Thin-provisioned LUNs can grow to consume the allocated/logical size.
Array thin provisioning allows for over-provisioning on the array.
VAAI allows the host to be aware of the underlying thin-provisioned LUN and the space usage.
VAAI allows for monitoring of the physical space to provide warnings and alerts for over-commitment thresholds and out-of-space conditions.
VAAI also informs the array about datastore space which has been freed when files are deleted or removed to allow the array to reclaim the freed blocks.
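Freed blocks can also be reclaimed manually from the ESXi Shell; a minimal sketch, assuming a VMFS datastore labeled DS1 backed by a thin-provisioned, VAAI-capable LUN:
# Issue SCSI UNMAP to the array for dead space on the datastore
esxcli storage vmfs unmap -l DS1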
– Describe zoning and LUN masking practices
Using Zoning with Fibre Channel SANs in the vSphere Storage Guide on page 36.
Zoning defines which HBAs/ports can connect to which targets. Zoning controls and isolates paths in the storage fabric.
Use single-initiator zoning or single-initiator-single-target zoning.
Single-initiator-single-target is the preferred zoning practice for ESXi.
LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.
The MASK_PATH module can be used to mask devices from a host. This can be used to prevent a host from accessing storage devices or LUNs or from using individual paths to a LUN. This is done using esxcli to create a claimrule which masks the path or device. See Mask Paths in the vSphere Storage Guide on page 192.
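A minimal sketch of masking a single path with esxcli (the rule number, adapter, and C:T:L values are hypothetical):
# Add a MASK_PATH claim rule for the path, load the rules, unclaim the path, then apply
esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba1 -C 0 -T 0 -L 20
esxcli storage core claimrule run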
– Scan/Rescan storage
Storage Refresh and Rescan Operations in the vSphere Storage Guide on page 124.
Perform a manual rescan when:
- Zone a new disk array on a SAN
- Create new LUNs on an array
- Change path masking on the host
- Change iSCSI CHAP settings
- Add or remove iSCSI Discovery addresses
If a path is unavailable when a rescan operation is performed, the host removes the path from the list of paths to the device.
- Scan for New Storage Device – Rescans HBAs for new storage devices
- Scan for New VMFS Volumes – Rescans known storage devices for VMFS volumes
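A sketch of the equivalent rescan operations from the ESXi Shell (command availability may vary by ESXi release):
# Rescan all HBAs for new storage devices
esxcli storage core adapter rescan --all
# Rescan known devices for new VMFS volumes
esxcli storage filesystem rescan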
– Configure FC/iSCSI LUNs as ESXi boot devices
Boot from SAN is supported over FC, iSCSI, and FCoE.
Boot from SAN Requirements
- Each host must have access to its own boot LUN only, not the boot LUNs of other hosts.
- Enable the boot adapter in the host BIOS
- Enable and correctly configure the HBA, so it can access the boot LUN.
Booting ESXi from Fibre Channel SAN in the vSphere Storage Guide on page 49.
Booting from iSCSI SAN in the vSphere Storage Guide on page 107.
Booting ESXi with Software FCoE in the vSphere Storage Guide on page 55.
– Create an NFS share for use with vSphere
Understanding Network File System Datastores in the vSphere Storage Guide on page 152.
NFS v4.1 support is new in vSphere 6.
NFS Server Configuration
- Export NFS volume using NFS over TCP
- Export the share as either NFS v3 or NFS v4.1. Do not provide both protocol versions to the same share.
- For NFS 3 or non-Kerberos NFS 4.1, ensure that each host has root access to the volume (no_root_squash).
An NFS client is built into ESXi. There are two NFS clients – one for NFS v3 and one for NFS v4.1.
A VMkernel port group is required for NFS storage.
vSphere 6 supports NFS v3 and NFS v4.1.
NFS v3 and NFS v4.1 (non-Kerberos) support IPv4 and IPv6.
NFS 3 and NFS 4.1 datastores can coexist on the same host.
Create an NFS Datastore in the vSphere Storage Guide on page 161.
Upgrading a mounted NFS datastore from NFS v3 to NFS 4.1 is not supported.
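NFS datastores can also be mounted from the ESXi Shell; a sketch, assuming a hypothetical server nfs01.example.com exporting /export/ds1:
# Mount the export as an NFS v3 datastore
esxcli storage nfs add --host=nfs01.example.com --share=/export/ds1 --volume-name=NFS3-DS1
# Mount the export as an NFS v4.1 datastore (--hosts accepts a comma-separated list for multipathing)
esxcli storage nfs41 add --hosts=nfs01.example.com --share=/export/ds1 --volume-name=NFS41-DS1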
NFS 4.1 Capabilities
- NFS 4.1 supports multipathing.
- NFS 4.1 supports nonroot user access when using Kerberos.
- Fault Tolerance is not supported when using NFS 4.1
- NFS 4.1 does not support hardware acceleration.
- NFS 4.1 does not support Storage DRS, Storage IO Control, Site Recovery Manager, or Virtual Volumes (VVOLS).
– Enable/Configure/Disable vCenter Server storage filters
Turn off Storage Filters in the vSphere Storage Guide on page 172.
Manage vCenter storage filters in the Web Client -> Hosts and Clusters -> vCenter Server -> Manage -> Settings -> Advanced Settings
Storage filters help prevent device corruption or performance degradation caused by unsupported use of storage devices. These filters are enabled by default.
Storage Filtering in the vSphere Storage Guide on page 173.
- config.vpxd.filter.vmfsFilter
Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server.
- config.vpxd.filter.rdmFilter
Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server.
- config.vpxd.filter.SameHostAndTransportsFilter
Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility.
- config.vpxd.filter.hostRescanFilter
Automatically rescans and updates VMFS datastores after you perform datastore management operations.
– Configure/Edit hardware/dependent hardware initiators
Independent and Dependent iSCSI Initiators can be configured in the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Adapters
The default iSCSI name and alias can be configured.
IP settings for Independent Hardware iSCSI adapters can be configured.
Dynamic (SendTargets) and Static Discovery can be configured.
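The adapter identity can also be set from the ESXi Shell; a sketch with a hypothetical adapter name, IQN, and alias (verify the option names with esxcli iscsi adapter set --help):
# Set the iSCSI name and alias on an adapter
esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.1998-01.com.vmware:esx01 --alias=esx01-hba33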
– Enable/Disable software iSCSI initiator
Add the software iSCSI initiator using the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Adapters
Once the software iSCSI initiator is added it will be listed as a Storage Adapter.
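The software iSCSI adapter can also be enabled (or disabled) from the ESXi Shell; a minimal sketch:
# Enable the software iSCSI initiator, then confirm its state
esxcli iscsi software set --enabled=true
esxcli iscsi software get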
– Configure/Edit software iSCSI initiator settings
Configure and Edit Software iSCSI initiator settings in the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Adapters
- Enable/Disable the Adapter
- Update iSCSI Name and Alias
- Configure CHAP Authentication
- View/Attach/Detach Devices from the Host
- Enable/Disable Paths
- Configure Dynamic Discovery (SendTargets) and Static Discovery
- Add Network Port Bindings to the adapter
- Configure iSCSI advanced options
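As an example of the discovery settings above, dynamic (SendTargets) discovery can also be added from the ESXi Shell; a sketch with a hypothetical adapter and target portal:
# Add a SendTargets discovery address and list the configured targets
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.20:3260
esxcli iscsi adapter discovery sendtarget list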
– Configure iSCSI port binding
Guidelines for Using iSCSI Port Binding in ESXi in the vSphere Storage Guide on page 91.
Use port binding for multipathing when:
- iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the VMkernel adapters.
- All VMkernel adapters used for iSCSI port binding must reside in the same broadcast domain and IP subnet.
- All VMkernel adapters used for iSCSI connectivity must reside in the same virtual switch.
- Port binding does not support network routing.
Do not use port binding when:
- Array target iSCSI ports are in a different broadcast domain and IP subnet.
- VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or use different virtual switches.
- Routing is required to reach the iSCSI array.
When binding a port group/VMkernel adapter to an iSCSI adapter, only the VMkernel adapters that are compatible with the iSCSI port binding requirements, and their available physical adapters, are listed.
Compliant VMkernel adapters are configured with only a single active uplink and no standby uplinks.
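Port binding can also be configured from the ESXi Shell; a sketch, assuming the software iSCSI adapter is vmhba65 and the compliant VMkernel port is vmk1:
# Bind the VMkernel port to the iSCSI adapter and verify the binding
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
esxcli iscsi networkportal list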
– Enable/Configure/Disable iSCSI CHAP
Configuring CHAP Parameters for iSCSI Adapters in the vSphere Storage Guide on page 98.
CHAP – Challenge Handshake Authentication Protocol.
Unidirectional CHAP – Target authenticates the initiator, initiator does not authenticate the target.
Bidirectional CHAP – Initiator authenticates the target and target authenticates the initiator.
Bidirectional CHAP is only supported with Software and Dependent Hardware iSCSI adapters. Independent hardware iSCSI adapters do not support bidirectional CHAP.
- None
CHAP authentication is not used.
- Use unidirectional CHAP if required by target
Host prefers a non-CHAP connection but can use CHAP if required by the target.
- Use unidirectional CHAP unless prohibited by target
Host prefers CHAP but can use non-CHAP if the target does not support CHAP.
- Use unidirectional CHAP
Requires CHAP authentication.
- Use bidirectional CHAP
Host and target support bidirectional CHAP.
CHAP name and secret are set at the iSCSI adapter level. All targets receive the same parameters from the adapter. By default all discovery addresses inherit CHAP parameters from the adapter.
CHAP name should not exceed 511 alphanumeric characters, and the CHAP secret should not exceed 255 alphanumeric characters.
CHAP provides access authentication only, it does not provide data encryption.
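Unidirectional CHAP can also be set per adapter from the ESXi Shell; a sketch with a hypothetical adapter, name, and secret (verify the exact options with esxcli iscsi adapter auth chap set --help):
# Require unidirectional CHAP on the adapter, then review the settings
esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=chapuser --secret=chapsecret
esxcli iscsi adapter auth chap get --adapter=vmhba65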
– Determine use case for hardware/dependent hardware/software iSCSI initiator
iSCSI Architecture in the Best Practices for Running VMware vSphere® on iSCSI whitepaper on page 5.
The software iSCSI adapter allows the use of iSCSI technology without purchasing specialized hardware adapters.
The Dependent Hardware iSCSI Adapter depends on the VMkernel for networking, configuration, and management but offloads iSCSI processing to the adapter. This reduces CPU overhead on the host.
Independent Hardware iSCSI Adapter implements its own networking, iSCSI configuration, and management interfaces.
– Determine use case for and configure array thin provisioning
Array thin provisioning and virtual disk thin provisioning allow allocation of more space than is physically available.
The use case is to provide more efficient use of capacity by consuming only the amount of space needed to store data.
This can be done on the datastore using thin provisioned virtual disks or on the array using thin provisioned LUNs.