25 Cards in this Set
Configure DirectPath I/O on Host
|
Networking Guide, pages 41-42
1. Select the host.
2. Click on the Configuration tab.
3. Click Advanced Settings.
4. Click Configure Passthrough.
5. Select an available passthrough device that is green in the list (orange indicates that the device state has changed and the host must be rebooted).
6. Click OK.
7. Reboot the host. |
|
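The candidate devices can also be enumerated from the host shell before touching the GUI; a minimal sketch (the device address in the comment is hypothetical):

```shell
# List every PCI device on the host; note the address
# (e.g. 0000:0b:00.0) of the device you intend to pass through.
esxcli hardware pci list
```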
Configure PCI passthrough device on a Virtual Machine
|
Virtual Machine Administration, pages 79, 109, 144-145
1. Select the virtual machine and click Edit Settings.
2. On the Hardware tab, click Add.
3. Select PCI Device and click Next.
4. Select the passthrough device to use and click Next.
5. Click Finish. |
|
Configure NPIV
|
Storage Guide, pages 43-44
1. Select the virtual machine you want to add NPIV to (the VM must be powered off and must have an RDM).
2. Click the Options tab.
3. Select Fibre Channel NPIV.
4. Uncheck "Temporarily Disable NPIV".
5. Select "Generate new WWNs".
6. Select the number of WWNNs (the number of WWPNs will change to match). NOTE: Two WWN pairs are required for NPIV redundancy.
7. Click OK. |
|
Disable Host Registration: GUI
|
Storage Guide, page 60
From Inventory -> Hosts and Clusters
1. Select Host.
2. Click on the Configuration tab.
3. Under Software, click on Advanced Settings.
4. Click on Disk in the left-hand pane.
5. In the right-hand pane, scroll down to find Disk.EnableNaviReg.
6. Set this value to 0. |
|
Disable Host Registration: CLI
|
esxcli system settings advanced set -i 0 -o /Disk/EnableNaviReg
|
|
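To confirm the change took effect, the same advanced option can be read back with the list subcommand; a quick check, assuming the setting above was applied:

```shell
# Int Value should now show 0 for the EnableNaviReg option.
esxcli system settings advanced list -o /Disk/EnableNaviReg
```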
Increase Max NFS Volumes: GUI
|
From Inventory -> Hosts and Clusters
1. Select Host.
2. Click on the Configuration tab.
3. Under Software, click on Advanced Settings.
4. Click on NFS in the left-hand pane.
5. In the right-hand pane, scroll down to find NFS.MaxVolumes.
6. Set a value.
7. Next, in the left-hand pane, click on Net.
8. In the right-hand pane, scroll down to find Net.TcpipHeapSize.
9. Set a value.
10. Click OK.
Increase the heap size to 30 MB for up to 32 mount points. For more than 32 mount points, increase the heap size to 32 and TcpipHeapMax to 128. |
|
Increase Max NFS Volumes: CLI
|
esxcli system settings advanced set -i 32 -o /NFS/MaxVolumes
esxcli system settings advanced set -i 30 -o /Net/TcpipHeapSize
Increase the heap size (TcpipHeapSize) to 30 MB for up to 32 mount points. For more than 32 mount points, increase the heap size to 32 and TcpipHeapMax to 128. |
|
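The values can be read back the same way, and TcpipHeapMax is set with the same pattern; a sketch (note that the heap-size changes require a host reboot to take effect):

```shell
# Verify the current values.
esxcli system settings advanced list -o /NFS/MaxVolumes
esxcli system settings advanced list -o /Net/TcpipHeapSize
# For more than 32 mount points, raise the heap maximum as well.
esxcli system settings advanced set -i 128 -o /Net/TcpipHeapMax
```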
Configure vCenter Server Storage Filters
|
Storage Guide, pages 125 - 126
1. Click on Administration -> vCenter Server Settings.
2. Click on Advanced Settings.
3. At the bottom of the window, add the filter key you wish to modify and set it to "false" to disable that filter. |
|
Storage Filters: vmfsFilter
|
Storage Guide, pages 125 - 126
Advanced Settings Key: config.vpxd.filter.vmfsFilter
Filters out storage devices (LUNs) that are already used by VMFS datastores on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or to be used as an RDM. |
|
Storage Filters: rdmFilter
|
Storage Guide, pages 125 - 126
Advanced Settings Key: config.vpxd.filter.rdmFilter Filters out LUNs that are already referenced by an RDM on any host managed by vCenter server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM. |
|
Storage Filters: SameHostAndTransportsFilter
|
Storage Guide, pages 125-126
Advanced Settings Key: config.vpxd.filter.SameHostAndTransportsFilter
Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUN types as extents:
- LUNs not exposed to all hosts that share the original VMFS datastore.
- LUNs that use a storage type different from the one the original VMFS datastore uses (you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device, for instance). |
|
Storage Filters: hostRescanFilter
|
Storage Guide, pages 125 - 126
Advanced Settings Key: config.vpxd.filter.hostRescanFilter
Automatically rescans and updates VMFS datastores after you perform datastore management operations. NOTE: If you present a new LUN to a host or cluster, the hosts automatically perform a rescan regardless of whether the Host Rescan Filter is on or off. |
|
Add a MASK_PATH claimrule
|
Storage Guide, pages 169-170
1. Check what the next available rule ID is:
   esxcli storage core claimrule list
2. Find the volume label and UUID for the LUN you want to mask:
   esxcli storage vmfs extent list
3. Find and record the path information for the UUID obtained in Step 2:
   vicfg-mpath -L | grep 'UUID'
   (for this example, we'll say it's vmhba33:C0:T0:L0)
4. Unmount the VMFS datastore volume:
   esxcli storage filesystem unmount -l <volume label>
5. Add a claimrule, using the next available claimrule number obtained in Step 1 (we'll use 110) and the path information obtained in Step 3:
   esxcli storage core claimrule add -r 110 -t location -A vmhba33 -C 0 -T 0 -L 0 -P MASK_PATH
6. Verify that the claimrule (110) is in the list:
   esxcli storage core claimrule list
7. If the rule is in the list, load the claimrules:
   esxcli storage core claimrule load
8. Disassociate the path from the existing plugin and associate it with the MASK_PATH plugin:
   esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
9. Run the claimrules:
   esxcli storage core claimrule run
10. Perform a rescan:
   esxcli storage core adapter rescan -A vmhba33 |
|
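To undo the mask later, remove the rule and reclaim the path; a sketch using the example rule number (110) and path (vmhba33:C0:T0:L0) from the card above:

```shell
# Remove the MASK_PATH claimrule and reload the rule set.
esxcli storage core claimrule remove -r 110
esxcli storage core claimrule load
# Release the path from MASK_PATH so NMP can reclaim it.
esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
esxcli storage core claimrule run
esxcli storage core adapter rescan -A vmhba33
```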
Analyze I/O workloads to determine storage performance requirements
|
Use vscsiStats
1. List VMs (World) and disks (Handle IDs) vscsiStats -l 2. Start stats collection (runs for 30 minutes by default): vscsiStats -s -w <World ID> -i <Handle ID> or, for all disks: vscsiStats -s -w <World ID> 3. Export histogram to be analyzed vscsiStats -p <histogram type> -w <World ID> -c > out.csv NOTE: histogram types are: - all - ioLength - seekDistance - outstandingIOs - latency - interarrival 4. Stop collection and purge output data vscsiStats -x |
|
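A typical collection session might look like this; the world ID (12345) is hypothetical and would come from the -l output:

```shell
vscsiStats -l                     # find the VM's world ID
vscsiStats -s -w 12345            # start collecting for all of its disks
# ...let the workload run, then export the I/O size histogram...
vscsiStats -p ioLength -w 12345 -c > iolength.csv
vscsiStats -x                     # stop collection and purge the data
```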
Identify and tag SSD devices
|
Storage Guide, pages 144-148
1. Identify and record the device UUID and SATP:
   esxcli storage nmp device list
2. Add a PSA rule using the UUID and SATP:
   esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -d "UUID" -o enable_ssd
3. Load the claimrules:
   esxcli storage core claimrule load
4. Unclaim the device:
   esxcli storage core claiming unclaim -t device -d "UUID"
5. Run the claimrules:
   esxcli storage core claimrule run |
|
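After running the claimrules, the tag can be verified on the device; a quick check (the naa identifier is a placeholder):

```shell
# The device details should now include the line "Is SSD: true".
esxcli storage core device list -d naa.<UUID>
```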
Administer hardware acceleration for VAAI
|
Storage Guide, pages 174-181
1. Determine the volume label and UUID of the device you want to configure hardware acceleration on:
   esxcli storage vmfs extent list
2. Display the hardware acceleration plug-ins and filter:
   esxcli storage core plugin list -N VAAI
   esxcli storage core plugin list -N Filter
3. Verify the hardware acceleration support status on the device (ensure the "VAAI Status" line shows "supported"):
   esxcli storage core device list -d "UUID"
4. Verify the hardware acceleration support details:
   esxcli storage core device vaai status get -d "UUID"
5. Add the hardware acceleration claim rules.
   Filter claimrule:
   esxcli storage core claimrule add -c Filter -P VAAI_FILTER -t vendor -V <vendor name> -u
   VAAI claimrule:
   esxcli storage core claimrule add -c VAAI -P <VAAI plugin name> -t vendor -V <vendor name> -u
6. Load the claimrules:
   esxcli storage core claimrule load -c Filter
   esxcli storage core claimrule load -c VAAI
7. Run the claimrules:
   esxcli storage core claimrule run -c Filter
   esxcli storage core claimrule run -c VAAI |
|
Configure profile-based storage: Enable VM Storage Profiles
|
Storage Guide, pages 197-203
From the Home menu in vCenter: 1. Click on VM Storage Profiles 2. Click on Enable VM Storage Profiles 3. Enable Licensed Hosts/Clusters |
|
Configure profile-based storage: Manage Storage Capabilities
|
Storage Guide, pages 197-203
From the Home menu in vCenter: 1. Click on VM Storage Profiles. 2. Click on Manage Storage Capabilities 3. Click on the Add button 4. On the Add Storage Capability screen, define a storage capability. |
|
Configure profile-based storage: Create VM Storage Profile
|
Storage Guide, pages 197-203
From the Home menu in vCenter: 1. Click on VM Storage Profiles. 2. Click on Create New VM Storage Profile 3. Give the VM Storage Profile a name and click Next. 4. On the next page, associate the VM Storage Profile with a defined storage capability. 5. Click Next and click Finish. |
|
Configure profile-based storage: Associate Storage Capability with Datastore
|
Storage Guide, pages 197-203
From Inventory -> Datastores and Datastore Clusters 1. Select a datastore. 2. Right-click on the datastore and select "Assign User-Defined Storage Capability" 3. Select the storage capability from the drop-down list and click OK. |
|
Configure profile-based storage: Apply storage profile to VM
|
Storage Guide, pages 197-203
From Inventory -> Hosts and Clusters
1. Select the VM you want to apply the storage profile to.
2. Right-click the VM and select Edit Settings.
3. Click on the Profiles tab.
4. Set the VM storage profile for home and propagate it to the disks, or set a VM storage profile for each disk.
5. Click OK. |
|
Prepare storage for maintenance (unmounting): GUI
|
Storage Guide, page 128
From Inventory -> Hosts and Clusters
1. Select a host.
2. Click on the Configuration tab.
3. Under Hardware, click on Storage.
4. Select the datastore you want to unmount.
5. Right-click and select Unmount.
6. Validate that all checks pass and click OK. |
|
Prepare storage for maintenance (unmounting): CLI
|
esxcli storage filesystem unmount -l <volume label>
|
|
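The volume label itself comes from the filesystem listing, and the volume can be remounted the same way after maintenance; a sketch:

```shell
esxcli storage filesystem list                  # find the volume label
esxcli storage filesystem unmount -l <volume label>
# ...perform maintenance, then remount...
esxcli storage filesystem mount -l <volume label>
```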
Upgrade VMware Storage Infrastructure: GUI
|
Storage Guide, pages 120-122
From Inventory -> Hosts and Clusters
1. Select Host.
2. Click on the Configuration tab.
3. Click on Storage.
4. Find and click on a VMFS-3 datastore.
5. Below the datastore details, click on "Upgrade to VMFS-5".
6. Once completed, rescan all hosts. |
|
Upgrade VMware Storage Infrastructure: CLI
|
esxcli storage vmfs upgrade -l <volume label>
|
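The result can be verified from the filesystem listing (the upgrade runs online but is one-way, so check the type before and after); a quick check:

```shell
# The Type column shows VMFS-3 before the upgrade and VMFS-5 after.
esxcli storage filesystem list
```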