VMware ESXi and Intel Optane NVMe – intelmas firmware update

How to install the intelmas tool

[~] esxcli software component apply -d /vmfs/volumes/SSD/_ISO/intel-mas-tool_2.2.18-1OEM.700.0.0.15843807_20956742.zip
Installation Result
   Components Installed: intel-mas-tool_2.2.18-1OEM.700.0.0.15843807
   Components Removed:
   Components Skipped:
   Message: Operation finished successfully.
   Reboot Required: false

General information about the disk

[~] /opt/intel/intelmas/intelmas show -intelssd 1

- 1 Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Bootloader : EB3B0416
Capacity : 260.83 GB (280,065,171,456 bytes)
DevicePath : nvmeMgmt-nvmhba5
DeviceStatus : Healthy
Firmware : E201HPS2
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 1
MaximumLBA : 547002287
ModelNumber : INTEL SSDPED1D280GAH
NamespaceId : 1
PercentOverProvisioned : 0.00
ProductFamily : Intel(R) Optane(TM) SSD 905P Series
SMARTEnabled : True
SectorDataSize : 512
SerialNumber : PHMB839000LW280IGN
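As a quick sanity check, the reported MaximumLBA and SectorDataSize are consistent with the capacity in bytes, and the "260.83 GB" figure is actually binary gigabytes (GiB). A small shell sketch using the numbers from the output above:

```shell
# MaximumLBA is the highest addressable LBA, so the sector count is MaximumLBA + 1.
sectors=$((547002287 + 1))
bytes=$((sectors * 512))    # SectorDataSize is 512
echo "$bytes"               # 280065171456 – matches the byte count reported above

# intelmas prints "260.83 GB", which is really GiB (bytes / 1024^3).
awk -v b="$bytes" 'BEGIN { printf "%.2f\n", b / (1024 * 1024 * 1024) }'
```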

S.M.A.R.T information

[~] /opt/intel/intelmas/intelmas show -nvmelog SmartHealthInfo -intelssd 1

-  PHMB839000LW280IGN -

- NVMeLog SMART and Health Information -

Volatile memory backup device has failed : False
Temperature has exceeded a critical threshold : False
Temperature - Celsius : 30
Media is in a read-only mode : False
Power On Hours : 0x0100
Power Cycles : 0x03
Number of Error Info Log Entries : 0x0
Controller Busy Time : 0x0
Available Spare Space has fallen below the threshold : False
Percentage Used : 0
Critical Warnings : 0
Data Units Read : 0x02
Available Spare Threshold Percentage : 0
Data Units Written : 0x0
Unsafe Shutdowns : 0x0
Host Write Commands : 0x0
Device reliability has degraded : False
Available Spare Normalized percentage of the remaining spare capacity available : 100
Media Errors : 0x0
Host Read Commands : 0x017F
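Several counters in this log are printed in hexadecimal. They convert to decimal with printf, which accepts 0x-prefixed input for %d; for example, the Power On Hours and Host Read Commands values above:

```shell
# Convert the hexadecimal counters from the SMART/Health log to decimal.
printf 'Power On Hours: %d\n' 0x0100      # 256 hours
printf 'Power Cycles: %d\n' 0x03          # 3
printf 'Host Read Commands: %d\n' 0x017F  # 383
```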

Show all the SMART properties for the Intel® SSD at index 1

[~] /opt/intel/intelmas/intelmas show  -intelssd 1 -smart

- SMART Attributes PHMB839000LW280IGN -

- B8 -

Action : Pass
Description : End-to-End Error Detection Count
ID : B8
Normalized : 100
Raw : 0

- C7 -

Action : Pass
Description : CRC Error Count
ID : C7
Normalized : 100
Raw : 0

- E2 -

Action : Pass
Description : Timed Workload - Media Wear
ID : E2
Normalized : 100
Raw : 0

- E3 -

Action : Pass
Description : Timed Workload - Host Read/Write Ratio
ID : E3
Normalized : 100
Raw : 0

- E4 -

Action : Pass
Description : Timed Workload Timer
ID : E4
Normalized : 100
Raw : 0

- EA -

Action : Pass
Description : Thermal Throttle Status
ID : EA
Normalized : 100
Raw : 0
ThrottleStatus : 0 %
ThrottlingEventCount : 0

- F0 -

Action : Pass
Description : Retry Buffer Overflow Count
ID : F0
Normalized : 100
Raw : 0

- F3 -

Action : Pass
Description : PLI Lock Loss Count
ID : F3
Normalized : 100
Raw : 0

- F5 -

Action : Pass
Description : Host Bytes Written
ID : F5
Normalized : 100
Raw : 0
Raw (Bytes) : 0

- F6 -

Action : Pass
Description : System Area Life Remaining
ID : F6
Normalized : 100
Raw : 0

Disk firmware update

[~] /opt/intel/intelmas/intelmas load -intelssd 1
WARNING! You have selected to update the drives firmware!
Proceed with the update? (Y|N): Y
Checking for firmware update...

- Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Status : The selected drive contains current firmware as of this tool release.

📆 Save the Date | VMware Explore 2023

We’re excited to announce the next event dates and locations for VMware Explore 2023! August 21 – 24, 2023: The Venetian Convention and Expo Center in Las Vegas, Nevada. November 6 – 9, 2023: Fira Gran Via in Barcelona, Spain. Download the calendar invite and block your schedule off now!


VMware Social Media Advocacy

How to add Maxtang's NX 6412 NUC to a vDS? Fix via script /etc/rc.local.d/local.sh

How to fix the network after adding the host to a vDS: when you add the NX6412 to a vDS and reboot ESXi, the vDS is left without an uplink. You can check it with:

# esxcfg-vswitch -l
DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vDS              2560        6           512               9000    vusb0
--cut
  DVPort ID                               In Use      Client
  468                                     0           
  469                                     0
  470                                     0
  471                                     0

Note a free DVPort ID – 468 in this example. vDS is the name of your vDS switch.

esxcfg-vswitch -P vusb0 -V 468 vDS

It is necessary to add it to /etc/rc.local.d/local.sh before exit 0. You can use a similar script from the source Persisting USB NIC Bindings:

vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
    sleep 10
    count=$(( $count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done

esxcfg-vswitch -R
esxcfg-vswitch -P vusb0 -V 468 vDS

exit 0
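To pick a free DVPort ID without reading the table by eye, you can filter the DVPort section of the esxcfg-vswitch -l output for ports with no client. A minimal sketch, run here against a captured sample of the output above (on a live host you would pipe the real command instead):

```shell
# Captured sample of the DVPort section of `esxcfg-vswitch -l`.
sample='  DVPort ID                               In Use      Client
  468                                     0
  469                                     0
  470                                     0
  471                                     0'

# Select rows where the "In Use" column is 0 and print the port ID.
printf '%s\n' "$sample" | awk 'NR > 1 && $2 == 0 { print $1 }'
# The first free ID (468 here) is the one to pass to: esxcfg-vswitch -P vusb0 -V 468 vDS
```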

What’s the story with Optane?

I am using an Intel Optane SSD 900P PCIe in my HomeLAB as a ZIL/L2ARC drive for TrueNAS, but in July 2022 Intel announced its intention to wind down the Optane business. I will try to summarize information about Intel Optane from Simon Todd's presentation.

My HomeLAB benchmark: Optane 900P as TrueNAS ZIL/L2ARC with HDDs

Optane helps a lot with IOPS for a RAID of ordinary HDDs. I reached a 2.5 GB/s peak write performance.

Writer Report – iozone -Raz -b lab.wks -g 1G – Optane 900P, TrueNAS ZIL/L2ARC with HDD (x-axis: file size in KB; z-axis: MB/s)

We can see great write performance for the 40 GB file size set: about 1.7 GB/s.

# perftests-nas ; cat iozone.txt
        Run began: Sun Dec 18 08:02:39 2022

        Record Size 128 kB
        File size set to 41943040 kB
        Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

              kB  reclen    write  rewrite    read    reread
        41943040     128  1734542  1364683  2413381  2371527

iozone test complete.
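iozone reports its figures in kBytes/sec, so the writer column above converts to roughly the 1.7–1.8 GB/s quoted earlier. A quick conversion (assuming iozone's kB means 1024 bytes):

```shell
# 1734542 kB/s from the iozone writer column, converted to GB/s.
awk 'BEGIN { printf "%.2f GB/s\n", 1734542 * 1024 / 1e9 }'
```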
# dd if=/dev/zero of=foo bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 1.517452 secs (707595169 bytes/sec) 707 MB/s

# dd if=/dev/zero of=foo bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes transferred in 0.012079 secs (42386853 bytes/sec) 42 MB/s
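The bytes/sec figure dd prints is simply bytes transferred divided by elapsed time; the large sequential write works out to about 707 MB/s and the 512-byte writes to about 42 MB/s:

```shell
# Sequential 1 GiB write: 1073741824 bytes in 1.517452 s.
awk 'BEGIN { printf "%d MB/s\n", 1073741824 / 1.517452 / 1e6 }'   # 707 MB/s

# Small-block write: 512000 bytes in 0.012079 s.
awk 'BEGIN { printf "%d MB/s\n", 512000 / 0.012079 / 1e6 }'       # 42 MB/s
```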

Intel® Optane™ Business Update: What Does This Mean for Warranty and Support

As announced in Intel’s Q2 2022 earnings, after careful consideration, Intel plans to cease future development of our Optane products. We will continue development of Crow Pass on Sapphire Rapids as we engage with key customers to determine their plans to deploy this product. While we believe Optane is a superb technology, it has become impractical to deliver products at the necessary scale as a single-source supplier of Optane technology.

We are committed to supporting Optane customers and ecosystem partners through the transition of our existing memory and storage product lines through end-of-life. We continue to sell existing Optane products, and support and the 5-year warranty terms from date of sale remain unchanged.

Get to know Intel® Optane™ technology
Source Simon Todd – vExpert – Intel Webinar Slides

What makes Optane SSD’s different?

    NAND SSD

    NAND garbage collection requires background writes. NAND SSD block erase process results in slower writes and inconsistent performance.

    Intel® Optane™ technology

    Intel® Optane™ technology does not use garbage collection
    Rapid, in-place writes enable consistently fast response times

    Intel® Optane™ SSDs are different by design
    Source Simon Todd – vExpert – Intel Webinar Slides
    Consistent performance, even under heavy write loads
    Source Simon Todd – vExpert – Intel Webinar Slides
    Model                             Dies per channel   Channels   Raw Capacity   Spare Area
    Intel Optane SSD 900p 280GB       3                  7          336 GB         56 GB
    Intel Optane SSD DC P4800X 375GB  4                  7          448 GB         73 GB
    Intel Optane SSD 900p 480GB       5                  7          560 GB         80 GB
    Intel Optane SSD DC P4800X 750GB  8                  7          896 GB         146 GB

    The Optane SSD DC P4800X and the Optane SSD 900p both use the same 7-channel controller, which leads to some unusual drive capacities. The 900p comes with either 3 or 5 memory dies per channel while the P4800X has 4 or 8. All models reserve about 1/6th of the raw capacity for internal use. Source
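The spare-area column is simply raw capacity minus usable capacity, and each model reserves about one sixth of its raw capacity. Checking with the 900p 280GB numbers from the table:

```shell
# 900p 280GB: 336 GB raw, 280 GB usable.
echo $((336 - 280))                                 # 56 GB spare area, as in the table
awk 'BEGIN { printf "%.3f\n", (336 - 280) / 336 }'  # ~0.167, i.e. about 1/6
```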

    Intel Optane SSD DC P4800X / 900P Hands-On Review

    Wow, Optane is fast …

    The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but either is far faster than the flash-based SSDs. Source

    Maxtang’s NX 6412 NUC – update ESXi 8.0a

    VMware ESXi 8.0a release was announced:

    How to prepare ESXi Custom ISO image 8U0a for NX6412 NUC?

    Download these files:

    Run this script to prepare the Custom ISO image; you should use PowerCLI version 13.0. Problems with the upgrade to PowerCLI can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0a-20842819-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0a-20842819-standard" -name "ESXi-8.0.0-20842819-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20842819-USBNIC.zip

    Upgrade to ESXi 8.0

    TPM_VERSION WARNING: Support for TPM version 1.2 is discontinued. Apply the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20842819-USBNIC.zip -p ESXi-8.0.0-20842819-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true

    vSphere 8 Lab with Cohesity and VMware vExpert gift – Maxtang’s NX 6412 NUC

    During VMware Explore 2022 Barcelona, I was given a gift as a vExpert. You can read about it in my previous article. The NX6412's onboard NICs are not supported by ESXi, so we need a Custom ISO with the USB Network Native Driver for ESXi. Because of a problem exporting the ISO with the latest PowerCLI 13 release (Nov 25, 2022), I decided to install a Custom ISO of ESXi 7U2e and then upgrade to ESXi 8.0 with the depot zip.

    Thank you, Cohesity. Power consumption is only 10 watts …

    How to prepare ESXi Custom ISO image 7U2e for NX6412 NUC?

    Download these files:

    Run this script to prepare the Custom ISO image; you can use PowerCLI 12.7 or 13.0. You could use create_custom_esxi_iso.ps1 as well.

    Add-EsxSoftwareDepot .\VMware-ESXi-7.0U2e-19290878-depot.zip
    Add-EsxSoftwareDepot .\ESXi702-VMKUSB-NIC-FLING-47140841-component-18150468.zip
    New-EsxImageProfile -CloneProfile "ESXi-7.0U2e-19290878-standard" -name "ESXi-7.0U2e-19290878-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -ExportToIso -filepath ESXi-7.0U2e-19290878-USBNIC.iso

    Create a bootable ESXi USB Flash Drive from ESXi-7.0U2e-19290878-USBNIC.iso. More info: How to create a bootable ESXi Installer USB Flash Drive

    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode
    Dialog shown only for a Custom ISO image

    Install ESXi 7U2e and fix Persisting USB NIC Bindings

    Currently there is a limitation in ESXi where USB NIC bindings are picked up much later in the boot process and to ensure settings are preserved upon a reboot, the following needs to be added to /etc/rc.local.d/local.sh based on your configurations.

    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    count=0
    while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
    do
        sleep 10
        count=$(( $count + 1 ))
        vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    done
    
    esxcfg-vswitch -R

    Prepare ESXi Custom zip depot 8.0 for NX6412 NUC

    Download these files:

    Run this script to prepare the Custom depot; you can use PowerCLI 13.0. Problems with the upgrade to PowerCLI can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0-20513097-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -name "ESXi-8.0.0-20513097-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20513097-USBNIC.zip

    Upgrade to ESXi 8.0

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC
    
    Hardware precheck of profile ESXi-8.0.0-20513097-USBNIC failed with warnings: <TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.>

    You can fix the TPM_VERSION warning ("Support for TPM version 1.2 is discontinued") by applying the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true
       VIBs Installed: VMW_bootbank_atlantic_1.0.3.0-10vmw.800.1.0.20513097, VMW_bootbank_bcm-mpi3_8.1.1.0.0.0-1vmw.800.1.0.20513097, VMW_bootbank_bfedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_bnxtnet_216.0.50.0-66vmw.800.1.0.20513097, VMW_bootbank_bnxtroce_216.0.58.0-27vmw.800.1.0.20513097, VMW_bootbank_brcmfcoe_12.0.1500.3-4vmw.800.1.0.20513097, VMW_bootbank_cndi-igc_1.2.9.0-1vmw.800.1.0.20513097, VMW_bootbank_dwi2c-esxio_0.1-2vmw.800.1.0.20513097, VMW_bootbank_dwi2c_0.1-2vmw.800.1.0.20513097, VMW_bootbank_elxiscsi_12.0.1200.0-10vmw.800.1.0.20513097, VMW_bootbank_elxnet_12.0.1250.0-8vmw.800.1.0.20513097, VMW_bootbank_i40en_1.11.2.5-1vmw.800.1.0.20513097, VMW_bootbank_iavmd_3.0.0.1010-5vmw.800.1.0.20513097, VMW_bootbank_icen_1.5.1.16-1vmw.800.1.0.20513097, VMW_bootbank_igbn_1.4.11.6-1vmw.800.1.0.20513097, VMW_bootbank_ionic-en-esxio_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_ionic-en_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_irdman_1.3.1.22-1vmw.800.1.0.20513097, VMW_bootbank_iser_1.1.0.2-1vmw.800.1.0.20513097, VMW_bootbank_ixgben_1.7.1.39-1vmw.800.1.0.20513097, VMW_bootbank_lpfc_14.0.635.3-14vmw.800.1.0.20513097, VMW_bootbank_lpnic_11.4.62.0-1vmw.800.1.0.20513097, VMW_bootbank_lsi-mr3_7.722.02.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt2_20.00.06.00-4vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt35_23.00.00.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt3_17.00.13.00-2vmw.800.1.0.20513097, VMW_bootbank_mlnx-bfbootctl-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mnet-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.800.1.0.20513097, VMW_bootbank_ne1000_0.9.0-2vmw.800.1.0.20513097, VMW_bootbank_nenic_1.0.35.0-3vmw.800.1.0.20513097, VMW_bootbank_nfnic_5.0.0.35-3vmw.800.1.0.20513097, VMW_bootbank_nhpsa_70.0051.0.100-4vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core-esxio_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-rdma-esxio_4.23.0.36-8vmw.800.1.0.20513097, 
VMW_bootbank_nmlx5-rdma_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlxbf-gige-esxio_2.1-1vmw.800.1.0.20513097, VMW_bootbank_ntg3_4.1.8.0-4vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie-esxio_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvmerdma_1.0.3.9-1vmw.800.1.0.20513097, VMW_bootbank_nvmetcp_1.0.1.2-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens-esxio_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-esxio_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_penedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pengpio-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pensandoatlas_1.46.0.E.24.1.256-2vmw.800.1.0.20293628, VMW_bootbank_penspi-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pvscsi-esxio_0.1-5vmw.800.1.0.20513097, VMW_bootbank_pvscsi_0.1-5vmw.800.1.0.20513097, VMW_bootbank_qcnic_1.0.15.0-22vmw.800.1.0.20513097, VMW_bootbank_qedentv_3.40.5.70-4vmw.800.1.0.20513097, VMW_bootbank_qedrntv_3.40.5.70-1vmw.800.1.0.20513097, VMW_bootbank_qfle3_1.0.67.0-30vmw.800.1.0.20513097, VMW_bootbank_qfle3f_1.0.51.0-28vmw.800.1.0.20513097, VMW_bootbank_qfle3i_1.0.15.0-20vmw.800.1.0.20513097, VMW_bootbank_qflge_1.1.0.11-1vmw.800.1.0.20513097, VMW_bootbank_rd1173-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_rdmahl_1.0.0-1vmw.800.1.0.20513097, VMW_bootbank_rste_2.0.2.0088-7vmw.800.1.0.20513097, VMW_bootbank_sfvmk_2.4.0.2010-13vmw.800.1.0.20513097, VMW_bootbank_smartpqi_80.4253.0.5000-2vmw.800.1.0.20513097, VMW_bootbank_spidev-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmkata_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmksdhci-esxio_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmksdhci_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmkusb-esxio_0.1-14vmw.800.1.0.20513097, VMW_bootbank_vmkusb-nic-fling_1.11-1vmw.800.1.20.61054763, VMW_bootbank_vmkusb_0.1-14vmw.800.1.0.20513097, 
VMW_bootbank_vmw-ahci_2.0.14-1vmw.800.1.0.20513097, VMware_bootbank_bmcal-esxio_8.0.0-1.0.20513097, VMware_bootbank_bmcal_8.0.0-1.0.20513097, VMware_bootbank_clusterstore_8.0.0-1.0.20513097, VMware_bootbank_cpu-microcode_8.0.0-1.0.20513097, VMware_bootbank_crx_8.0.0-1.0.20513097, VMware_bootbank_drivervm-gpu_8.0.0-1.0.20513097, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-6vmw.800.1.0.20513097, VMware_bootbank_esx-base_8.0.0-1.0.20513097, VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esx-ui_2.5.1-20374953, VMware_bootbank_esx-update_8.0.0-1.0.20513097, VMware_bootbank_esx-xserver_8.0.0-1.0.20513097, VMware_bootbank_esxio-base_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner-esxio_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner_8.0.0-1.0.20513097, VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esxio-update_8.0.0-1.0.20513097, VMware_bootbank_esxio_8.0.0-1.0.20513097, VMware_bootbank_gc-esxio_8.0.0-1.0.20513097, VMware_bootbank_gc_8.0.0-1.0.20513097, VMware_bootbank_loadesx_8.0.0-1.0.20513097, VMware_bootbank_loadesxio_8.0.0-1.0.20513097, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.800.1.0.20513097, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.7.2173-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-12vmw.800.1.0.20513097, VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-8vmw.800.1.0.20513097, VMware_bootbank_native-misc-drivers-esxio_8.0.0-1.0.20513097, VMware_bootbank_native-misc-drivers_8.0.0-1.0.20513097, VMware_bootbank_qlnativefc_5.2.46.0-3vmw.800.1.0.20513097, VMware_bootbank_trx_8.0.0-1.0.20513097, VMware_bootbank_vdfs_8.0.0-1.0.20513097, VMware_bootbank_vmware-esx-esxcli-nvme-plugin-esxio_1.2.0.52-1vmw.800.1.0.20513097, 
VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.52-1vmw.800.1.0.20513097, VMware_bootbank_vsan_8.0.0-1.0.20513097, VMware_bootbank_vsanhealth_8.0.0-1.0.20513097, VMware_locker_tools-light_12.0.6.20104755-20513097
       VIBs Removed: VMW_bootbank_atlantic_1.0.3.0-8vmw.702.0.0.17867351, VMW_bootbank_bnxtnet_216.0.50.0-34vmw.702.0.20.18426014, VMW_bootbank_bnxtroce_216.0.58.0-20vmw.702.0.20.18426014, VMW_bootbank_brcmfcoe_12.0.1500.1-2vmw.702.0.0.17867351, VMW_bootbank_brcmnvmefc_12.8.298.1-1vmw.702.0.0.17867351, VMW_bootbank_elxiscsi_12.0.1200.0-8vmw.702.0.0.17867351, VMW_bootbank_elxnet_12.0.1250.0-5vmw.702.0.0.17867351, VMW_bootbank_i40enu_1.8.1.137-1vmw.702.0.20.18426014, VMW_bootbank_iavmd_2.0.0.1152-1vmw.702.0.0.17867351, VMW_bootbank_icen_1.0.0.10-1vmw.702.0.0.17867351, VMW_bootbank_igbn_1.4.11.2-1vmw.702.0.0.17867351, VMW_bootbank_irdman_1.3.1.19-1vmw.702.0.0.17867351, VMW_bootbank_iser_1.1.0.1-1vmw.702.0.0.17867351, VMW_bootbank_ixgben_1.7.1.35-1vmw.702.0.0.17867351, VMW_bootbank_lpfc_12.8.298.3-2vmw.702.0.20.18426014, VMW_bootbank_lpnic_11.4.62.0-1vmw.702.0.0.17867351, VMW_bootbank_lsi-mr3_7.716.03.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt2_20.00.06.00-3vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt35_17.00.02.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt3_17.00.10.00-2vmw.702.0.0.17867351, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.702.0.0.17867351, VMW_bootbank_ne1000_0.8.4-11vmw.702.0.0.17867351, VMW_bootbank_nenic_1.0.33.0-1vmw.702.0.0.17867351, VMW_bootbank_nfnic_4.0.0.63-1vmw.702.0.0.17867351, VMW_bootbank_nhpsa_70.0051.0.100-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-core_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-en_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-rdma_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx5-core_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_nmlx5-rdma_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_ntg3_4.1.5.0-0vmw.702.0.0.17867351, VMW_bootbank_nvme-pcie_1.2.3.11-1vmw.702.0.0.17867351, VMW_bootbank_nvmerdma_1.0.2.1-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3-ens_2.0.0.22-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3_2.0.0.30-1vmw.702.0.0.17867351, VMW_bootbank_pvscsi_0.1-2vmw.702.0.0.17867351, 
VMW_bootbank_qcnic_1.0.15.0-11vmw.702.0.0.17867351, VMW_bootbank_qedentv_3.40.5.53-20vmw.702.0.20.18426014, VMW_bootbank_qedrntv_3.40.5.53-17vmw.702.0.20.18426014, VMW_bootbank_qfle3_1.0.67.0-14vmw.702.0.0.17867351, VMW_bootbank_qfle3f_1.0.51.0-19vmw.702.0.0.17867351, VMW_bootbank_qfle3i_1.0.15.0-12vmw.702.0.0.17867351, VMW_bootbank_qflge_1.1.0.11-1vmw.702.0.0.17867351, VMW_bootbank_rste_2.0.2.0088-7vmw.702.0.0.17867351, VMW_bootbank_sfvmk_2.4.0.2010-4vmw.702.0.0.17867351, VMW_bootbank_smartpqi_70.4000.0.100-6vmw.702.0.0.17867351, VMW_bootbank_vmkata_0.1-1vmw.702.0.0.17867351, VMW_bootbank_vmkfcoe_1.0.0.2-1vmw.702.0.0.17867351, VMW_bootbank_vmkusb-nic-fling_1.8-3vmw.702.0.20.47140841, VMW_bootbank_vmkusb_0.1-4vmw.702.0.20.18426014, VMW_bootbank_vmw-ahci_2.0.9-1vmw.702.0.0.17867351, VMware_bootbank_clusterstore_7.0.2-0.30.19290878, VMware_bootbank_cpu-microcode_7.0.2-0.30.19290878, VMware_bootbank_crx_7.0.2-0.30.19290878, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-4vmw.702.0.0.17867351, VMware_bootbank_esx-base_7.0.2-0.30.19290878, VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.2-0.30.19290878, VMware_bootbank_esx-ui_1.34.8-17417756, VMware_bootbank_esx-update_7.0.2-0.30.19290878, VMware_bootbank_esx-xserver_7.0.2-0.30.19290878, VMware_bootbank_gc_7.0.2-0.30.19290878, VMware_bootbank_loadesx_7.0.2-0.30.19290878, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.702.0.0.17867351, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.0.0-2vmw.702.0.0.17867351, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-5vmw.702.0.0.17867351, VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-hp-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-6vmw.702.0.0.17867351, VMware_bootbank_native-misc-drivers_7.0.2-0.30.19290878, 
VMware_bootbank_qlnativefc_4.1.14.0-5vmw.702.0.0.17867351, VMware_bootbank_vdfs_7.0.2-0.30.19290878, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.42-1vmw.702.0.0.17867351, VMware_bootbank_vsan_7.0.2-0.30.19290878, VMware_bootbank_vsanhealth_7.0.2-0.30.19290878, VMware_locker_tools-light_11.2.6.17901274-18295176
       VIBs Skipped:

    Reboot ESXi after the upgrade with the reboot command. Good luck!

    How to create a bootable ESXi Installer USB Flash Drive

    ESXi Image Download

    Create a bootable ESXi Installer USB Flash Drive with Windows

    • Press SELECT and open the ESXi ISO image
    • Select your flash drive
    • Check that Partition scheme is GPT and Target system is UEFI
    • Press START
    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode
    Dialog shown only for a Custom ISO image

    New ESXCLI Commands Details in vSphere 8.0

    In ESXi 8 / vSphere 8.0 the command line interface esxcli has been extended with new features.

    esxcli daemon entitlement

    [root@ESXI-8:~] esxcli daemon entitlement add --help
    Usage: esxcli daemon entitlement add [cmd options]
    
    Description:
      add                   Add Partner REST entitlements to the partner user.
    
    Cmd options:
      -p|--partner-user-name=<str>
                            Specifies the partner's user name. (required)
      -r|--read-acccess     Grant read access to the partner.
      -w|--write-acccess    Grant write access to the partner.
    [root@ESXI-8:~] esxcli daemon entitlement list --help
    Usage: esxcli daemon entitlement list [cmd options]
    
    Description:
      list                  List the installed DSDK built daemons.
    
    Cmd options:
      -p|--partner-user-name=<str>
                            Specifies the partner's user name. (required)
    [root@ESXI-8:~] esxcli daemon entitlement remove --help
    Usage: esxcli daemon entitlement remove [cmd options]
    
    Description:
      remove                Remove Partner REST entitlments from the partner user.
    
    Cmd options:
      -p|--partner-user-name=<str>
                            Specifies the partner's user name. (required)
      -r|--read-acccess     Remove read access from the partner.
      -w|--write-acccess    Remove write access from the partner.
    [root@ESXI-8:~] esxcli hardware devicecomponent list --help
    Usage: esxcli hardware devicecomponent list [cmd options]
    
    Description:
      list                  List all device components on this host.
    
    Cmd options:

    esxcli network ip hosts

    [root@ESXI-8:~] esxcli network ip hosts add --help
    Usage: esxcli network ip hosts add [cmd options]
    
    Description:
      add                   Add association of IP addresses with host names.
    
    Cmd options:
      -A|--alias=[ <str> ... ]
                            The list of aliases of the host.
      -C|--comment=<str>    Comment line of this item
      -H|--hostname=<str>   The name of the host. (required)
      -I|--ip=<str>         The IP address (v4 or v6) of the host. (required)
    [root@ESXI-8:~] esxcli network ip hosts list --help
    Usage: esxcli network ip hosts list [cmd options]
    
    Description:
      list                  List the user specified associations of IP addresses with host names.
    
    Cmd options:
    [root@ESXI-8:~] esxcli network ip hosts remove --help
    Usage: esxcli network ip hosts remove [cmd options]
    
    Description:
      remove                Remove association of IP addresses with host names.
    
    Cmd options:
      -H|--hostname=<str>   The name of the host. (required)
      -I|--ip=<str>         The IP address (v4 or v6) of the host. (required)

    esxcli nvme

    [root@ESXI-8:~] esxcli nvme device config list
    Name          Default   Current   Description
    ------------  --------  --------  -----------
    logLevel      0         0         Log level of this plugin.
    adminTimeout  60000000  60000000  Timeout in microseconds of the admin commands issued by this plugin.
    
    [root@ESXI-8:~] esxcli nvme device config set --help
    Usage: esxcli nvme device config set [cmd options]
    
    Description:
      set                   Set the plugin's parameter
    
    Cmd options:
      -p|--parameter=<str>  Parameter name (required)
      -v|--value=<str>      Parameter value (required)
    [root@ESXI-8:~] esxcli nvme device log get --help
    Usage: esxcli nvme device log get [cmd options]
    
    Description:
      get                   Get NVMe log page
    
    Cmd options:
      -A|--adapter=<str>    Adapter to operate on (required)
      -l|--length=<long>    Log page length. (required)
      -i|--lid=<str>        Log page ID. Both decimal number and hexadecimal number are accepted. Hexadecimal number should start with '0x' or '0X'. (required)
      -I|--lsi=<long>       Log specific ID. The default value is 0.
      -s|--lsp=<long>       Log specific field. The default value is 0.
      -n|--namespace=<long> Namespace ID. The default value is 0xFFFFFFFF.
      -o|--offset=<long>    Log page offset. The default value is 0.
      -p|--path=<str>       Log path. If set, the raw log data will be wrote to the specified file. If not set, the log data will be printed in hex format.
      -r|--rae=<long>       Retain asynchronous event. The default value is 0.
      -u|--uuid=<long>      UUID index. The default value is 0.
    [root@ESXI-8:~] esxcli nvme device log persistentevent get
    Error: Missing required parameter -a|--action
           Missing required parameter -A|--adapter
    
    Usage: esxcli nvme device log persistentevent get [cmd options]
    
    Description:
      get                   Get NVMe persistent event log
    
    Cmd options:
      -a|--action=<long>    Action the controller shall take during processing this command. 0: Read log data. 1: Establish context and read log data. 2: Release context. (required)
      -A|--adapter=<str>    Adapter to operate on (required)
      -p|--path=<str>       Persistent event log path. This parameter is required if the --action parameter is 0 or 1.
    [root@ESXI-8:~] esxcli nvme device log telemetry controller get --help
    Usage: esxcli nvme device log telemetry controller get [cmd options]
    
    Description:
      get                   Get NVMe telemetry controller-initiated data
    
    Cmd options:
      -A|--adapter=<str>    Adapter to operate on (required)
      -d|--data=<long>      Data area to get telemetry data, 3 is selected if not set
      -p|--path=<str>       Telemetry log path (required)
    [root@ESXI-8:~] esxcli nvme device log telemetry host get --help
    Usage: esxcli nvme device log telemetry host get [cmd options]
    
    Description:
      get                   Get NVMe telemetry host-initiated data
    
    Cmd options:
      -A|--adapter=<str>    Adapter to operate on (required)
      -d|--data=<long>      Data area to get telemetry data, 3 is selected if not set
      -p|--path=<str>       Telemetry log path (required)

    esxcli storage core

    [root@ESXI-8:~] esxcli storage core nvme device list --help
    Usage: esxcli storage core nvme device list [cmd options]
    
    Description:
      list                  List the NVMe devices currently registered with the PSA.
    
    Cmd options:
      -d|--device=<str>     Filter the output of this command to only show a single device.
      -o|--exclude-offline  If set this flag will exclude the offline devices.
      -p|--pe-only          If set this flag will list the mount points of PE type.
      --skip-slow-fields    Do not show the value of some fields that need more time to fetch. The output will show the value <skipped> for such fields.
    [root@ESXI-8:~] esxcli storage core nvme path list --help
    Usage: esxcli storage core nvme path list [cmd options]
    
    Description:
      list                  List all the NVMe paths on the system.
    
    Cmd options:
      -d|--device=<str>     Limit the output to paths to a specific device. This name can be any of the UIDs for a specific device.
      -p|--path=<str>       Limit the output to a specific path. This name can be either the UID or the runtime name of the path.
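The two commands above pair naturally: list the NVMe devices the PSA knows about, then narrow the path listing to one of them. A hedged sketch (requires an ESXi host; the device UID shown is an assumption — substitute one from your own `device list` output):

```shell
# Sketch only - must run on an ESXi host.
# List NVMe devices registered with the PSA, skipping slow-to-fetch fields:
esxcli storage core nvme device list --skip-slow-fields
# Show only the paths to a single device (UID below is an assumption):
esxcli storage core nvme path list -d t10.NVMe____INTEL_SSDPED1D280GAH____PHMB839000LW280IGN
```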
    [root@ESXI-8:~] esxcli storage core scsi device list
    t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____
       Display Name: Local ATA Disk (t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____)
       Has Settable Display Name: true
       Size: 476940
       Device Type: Direct-Access
       Multipath Plugin: HPP
       Devfs Path: /vmfs/devices/disks/t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____
       Vendor: ATA
       Model: Samsung SSD 850
       Revision: 1B6Q
       SCSI Level: 5
       Is Pseudo: false
       Status: on
       Is RDM Capable: false
       Is Local: true
       Is Removable: false
       Is SSD: true
       Is VVOL PE: false
       Is Offline: false
       Is Perennially Reserved: false
       Queue Full Sample Size: 0
       Queue Full Threshold: 0
       Thin Provisioning Status: yes
       Attached Filters:
       VAAI Status: unsupported
       Other UIDs: vml.0100000000533333444e58304a36303034383854202020202053616d73756e
       Is Shared Clusterwide: false
       Is SAS: false
       Is USB: false
       Is Boot Device: true
       Device Max Queue Depth: 31
       IOs with competing worlds: 31
       Drive Type: unknown
       RAID Level: unknown
       Number of Physical Drives: unknown
       Protection Enabled: false
       PI Activated: false
       PI Type: 0
       PI Protection Mask: NO PROTECTION
       Supported Guard Types: NO GUARD SUPPORT
       DIX Enabled: false
       DIX Guard Type: NO GUARD SUPPORT
       Emulated DIX/DIF Enabled: false
    [root@ESXI-8:~] esxcli storage core scsi path list
    sata.vmhba0-sata.0:0-t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____
       UID: sata.vmhba0-sata.0:0-t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____
       Runtime Name: vmhba0:C0:T0:L0
       Device: t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____
       Device Display Name: Local ATA Disk (t10.ATA_____Samsung_SSD_850_EVO_M.2_500GB___________S33DNX0J600488T_____)
       Adapter: vmhba0
       Controller: Not Applicable
       Channel: 0
       Target: 0
       LUN: 0
       Plugin: HPP
       State: active
       Transport: sata
       Adapter Identifier: sata.vmhba0
       Target Identifier: sata.0:0
       Adapter Transport Details: Unavailable or path is unclaimed
       Target Transport Details: Unavailable or path is unclaimed
       Maximum IO Size: 33554432

    esxcli storage osdata

    [root@ESXI-8:~] esxcli storage osdata create --help
    Usage: esxcli storage osdata create [cmd options]
    
    Description:
      create                Create an OSData partition on a disk.
    
    Cmd options:
      --clearpartitions     Erase existing partitions and force the operation.
      -d|--diskname=<str>   Target disk device on which to create the OSData partition. (required)
      -m|--mediasize=<str>  The size of the created partition.
                                default: 128 GB
                                max: Use whole device
                                min: 32 GB
                                small: 64 GB
                              (required)
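Given the size presets above, creating an OSData partition is a one-liner. A hedged sketch (requires an ESXi host; the disk name is an assumption, and `--clearpartitions` destroys existing data on it):

```shell
# Sketch only - must run on an ESXi host; the disk name is an assumption.
# Create a 64 GB ("small") OSData partition, wiping existing partitions:
esxcli storage osdata create -d mpx.vmhba2:C0:T1:L0 -m small --clearpartitions
```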

    esxcli storage vvol stats

    [root@ESXI-8:~] esxcli storage vvol stats add --help
    Usage: esxcli storage vvol stats add [cmd options]
    
    Description:
      add                   Add entity for stats tracking
    
    Cmd options:
      -e|--entity=<str>     entity Id (required)
      -n|--namespace=<str>  entity namespace (required)
    [root@ESXI-8:~] esxcli storage vvol stats disable --help
    Usage: esxcli storage vvol stats disable [cmd options]
    
    Description:
      disable               Disable stats for complete namespace
    [root@ESXI-8:~] esxcli storage vvol stats enable --help
    Usage: esxcli storage vvol stats enable [cmd options]
    
    Description:
      enable                Enable stats for complete namespace
    [root@ESXI-8:~] esxcli storage vvol stats get --help
    Usage: esxcli storage vvol stats get [cmd options]
    
    Description:
      get                   Get stats for given stats namespace
    
    Cmd options:
      -d|--dump=<str>       Dump the stats in log file with given custom message
      -e|--entity=<str>     entity Id
      -n|--namespace=<str>  node namespace expression
      -r|--raw              Enable raw format output
    [root@ESXI-8:~] esxcli storage vvol stats list --help
    Usage: esxcli storage vvol stats list [cmd options]
    
    Description:
      list                  List all supported stats
    
    Cmd options:
      -n|--namespace=<str>  node namespace expression
    [root@ESXI-8:~] esxcli storage vvol stats remove --help
    Usage: esxcli storage vvol stats remove [cmd options]
    
    Description:
      remove                Remove tracked entity
    
    Cmd options:
      -e|--entity=<str>     entity Id (required)
      -n|--namespace=<str>  entity namespace (required)
    [root@ESXI-8:~] esxcli storage vvol stats reset --help
    Usage: esxcli storage vvol stats reset [cmd options]
    
    Description:
      reset                 Reset stats for given namespace
    
    Cmd options:
      -e|--entity=<str>     entity Id
      -n|--namespace=<str>  node namespace (required)
    [root@ESXI-8:~] esxcli storage vvol vmstats get --help
    Usage: esxcli storage vvol vmstats get [cmd options]
    
    Description:
      get                   Get the VVol information and statistics for a specific virtual machine.
    
    Cmd options:
      -c|--get-config-vvol  Get config VVol stats along with data VVols.
      -v|--vm-name=<str>    Display name of the virtual machine. (required)
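A minimal way to exercise the VVol stats commands above: enable collection for the namespace, then pull per-VM VVol statistics. A hedged sketch (requires an ESXi host with VVols configured; the VM display name is an assumption):

```shell
# Sketch only - must run on an ESXi host with VVols; the VM name is an assumption.
# Enable stats collection, then fetch data and config VVol stats for one VM:
esxcli storage vvol stats enable
esxcli storage vvol vmstats get -v MyTestVM -c
```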

    esxcli system health report

    [root@ESXI-8:~] esxcli system health report get --help
    Usage: esxcli system health report get [cmd options]
    
    Description:
      get                   Displays one or more health reports
    
    Cmd options:
      --all-reports         Retrieve all the health reports. The default behavior is to retrieve only the latest health report.
      -f|--filename=<str>   The absolute path on the ESXi host where the health report(s) should be copied. If multiple reports are specified, they will be concatenated to this file.
      -r|--report-names=[ <str> ... ]
                            Specifies one or more health reports to display. The name(s) of the report can be obtained from the 'esxcli system health report list' command. (required)
    [root@ESXI-8:~] esxcli system health report list
    Name                    Time
    ----------------------  ----
    vmw.memoryHealth        2022-12-08T01:53:01+00:00
    vmw.ssdStorageHealth    2022-12-08T01:53:31+00:00
    vmw.coreServicesStatus  2022-12-08T01:54:01+00:00
    hostd-health            2022-12-08T01:55:01+00:00
    vmw.vpxaStatus          2022-12-08T01:54:31+00:00
    vmw.PSODCount           2022-12-08T01:20:01+00:00
    vmw.autoscaler          2022-12-08T01:55:01+00:00
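The report names from the `list` output above are exactly what `get` expects in `-r`. A hedged sketch (requires an ESXi host; the output path is an assumption):

```shell
# Sketch only - must run on an ESXi host; /tmp/health-report.txt is an assumption.
# Fetch two reports and concatenate them into one file:
esxcli system health report get -r vmw.memoryHealth -r vmw.ssdStorageHealth -f /tmp/health-report.txt
# Retrieve every stored copy of a report, not just the latest:
esxcli system health report get --all-reports -r hostd-health
```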

    esxcli system ntp stats get

    [root@ESXI-8:~] esxcli system ntp stats get --help
    Usage: esxcli system ntp stats get [cmd options]
    
    Description:
      get                   Report operational state of Network Time Protocol Daemon

    esxcli system security

    [root@ESXI-8:~] esxcli system security keypersistence disable --help
    Usage: esxcli system security keypersistence disable [cmd options]
    
    Description:
      disable               Disable key persistence daemon.
    
    Cmd options:
      --remove-all-stored-keys
                            Confirm deletion of all stored keys. This confirmation is required.
    [root@ESXI-8:~] esxcli system security keypersistence enable --help
    Usage: esxcli system security keypersistence enable [cmd options]
    
    Description:
      enable                Enable key persistence daemon.
    [root@ESXI-8:~] esxcli system settings encryption get --help
    Usage: esxcli system settings encryption get [cmd options]
    
    Description:
      get                   Get the encryption mode and policy.
    [root@ESXI-8:~] esxcli system settings encryption recovery list --help
    Usage: esxcli system settings encryption recovery list [cmd options]
    
    Description:
      list                  List recovery keys.
    [root@ESXI-8:~] esxcli system settings encryption recovery rotate --help
    Usage: esxcli system settings encryption recovery rotate [cmd options]
    
    Description:
      rotate                Rotate the recovery key.
    
    Cmd options:
      -k|--keyid=<str>      The ID of the new recovery key. If no value is specified, the system will generate a new key.
      -u|--uuid=<str>       The UUID of the recovery key to be rotated. (required)
    
    [root@ESXI-8:~] esxcli system settings encryption set --help
    Usage: esxcli system settings encryption set [cmd options]
    
    Description:
      set                   Set the encryption mode and policy.
    
    Cmd options:
      -m|--mode=<str>       Set the encryption mode.
      -e|--require-exec-installed-only=<bool>
                            Require executables to be loaded only from installed VIBs.
      -s|--require-secure-boot=<bool>
                            Require secure boot.
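The `get`/`set` pair above can be used to inspect and then tighten the host's execution policy. A hedged sketch (requires an ESXi host; whether these settings can be applied depends on the host's secure boot and TPM state, so verify with `get` first):

```shell
# Sketch only - must run on an ESXi host; check prerequisites with 'get' first.
esxcli system settings encryption get
# Require secure boot and allow executables only from installed VIBs:
esxcli system settings encryption set --require-secure-boot=true --require-exec-installed-only=true
```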
    [root@ESXI-8:~] esxcli system settings gueststore repository get --help
    Usage: esxcli system settings gueststore repository get [cmd options]
    
    Description:
      get                   Get GuestStore repository.
    [root@ESXI-8:~] esxcli system settings gueststore repository set --help
    Usage: esxcli system settings gueststore repository set [cmd options]
    
    Description:
      set                   Set or clear GuestStore repository.
    
    Cmd options:
      --url=<str>           URL of a repository to set; to clear GuestStore repository, set --url "" (required)
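As the help text notes, the same `set` command both configures and clears the GuestStore repository. A hedged sketch (requires an ESXi host; the repository URL is an assumption):

```shell
# Sketch only - must run on an ESXi host; the URL is an assumption.
esxcli system settings gueststore repository set --url "http://gueststore.example.com/repo"
esxcli system settings gueststore repository get
# Passing an empty URL clears the repository again:
esxcli system settings gueststore repository set --url ""
```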

    esxcli system syslog config logfilter

    [root@ESXI-8:~] esxcli system syslog config logfilter add --help
    Usage: esxcli system syslog config logfilter add [cmd options]
    
    Description:
      add                   Add a log filter.
    
    Cmd options:
      -f|--filter=<str>     The filter to be added. Format is: numLogs | ident | logRegexp. 'numLogs' sets the maximum number of log entries for the specified log messages. After reaching this number, the specified log messages are filtered and ignored. 'ident' specifies one or more
                            system components to apply the filter to the log messages that these components generate. 'logRegexp' specifies a case-sensitive phrase with Python regular expression syntax to filter the log messages by their content. (required)
    [root@ESXI-8:~] esxcli system syslog config logfilter get --help
    Usage: esxcli system syslog config logfilter get [cmd options]
    
    Description:
      get                   Show the current log filter configuration values.
    [root@ESXI-8:~] esxcli system syslog config logfilter list --help
    Usage: esxcli system syslog config logfilter list [cmd options]
    
    Description:
      list                  Show the added log filters.
    [root@ESXI-8:~] esxcli system syslog config logfilter remove --help
    Usage: esxcli system syslog config logfilter remove [cmd options]
    
    Description:
      remove                Remove a log filter.
    
    Cmd options:
      -f|--filter=<str>     The filter to be removed. (required)
    [root@ESXI-8:~] esxcli system syslog config logfilter set --help
    Usage: esxcli system syslog config logfilter set [cmd options]
    
    Description:
      set                   Set log filtering configuration options.
    
    Cmd options:
      --log-filtering-enabled=<bool>
                            Enable or disable log filtering. (required)
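The `numLogs | ident | logRegexp` filter format described above is easiest to see in a concrete invocation. A hedged sketch (requires an ESXi host; the component name and regexp are assumptions chosen for illustration):

```shell
# Sketch only - must run on an ESXi host; filter values are assumptions.
# Turn filtering on, then suppress repeated ScsiDeviceIO messages from hostd
# after the first 5 occurrences (format: numLogs | ident | logRegexp):
esxcli system syslog config logfilter set --log-filtering-enabled=true
esxcli system syslog config logfilter add -f "5 | hostd | ScsiDeviceIO"
esxcli system syslog config logfilter list
```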

    esxcli vsan hardware vcg

    [root@ESXI-8:~] esxcli vsan hardware vcg add --help
    Usage: esxcli vsan hardware vcg add [cmd options]
    
    Description:
      add                   Map unidentified vSAN hardware device with VCG ID.
    
    Cmd options:
      -d|--device-id=<str>  Unidentified Device ID. It can be seen with command "esxcli storage core device list" (e.g. nqn.2014-08.org.nvmexpress_8086_Dell_Express_Flash_NVMe_P4610_1.6TB_SFF_BTLN9443030C1P6AGN). (required)
      -v|--vcg-id=<long>    VCG ID. (required)
    [root@ESXI-8:~] esxcli vsan hardware vcg get
    Usage: esxcli vsan hardware vcg get [cmd options]
    
    Description:
      get                   Get the vSAN VCG ID for a vSAN hardware device. Output is VCG ID while "N/A" means device ID is not mapped.
    
    Cmd options:
      -d|--device-id=<str>  Unidentified Device ID. It can be seen with command "esxcli storage core device list" (e.g. nqn.2014-08.org.nvmexpress_8086_Dell_Express_Flash_NVMe_P4610_1.6TB_SFF_BTLN9443030C1P6AGN). (required)
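Combining the two commands above: map an unidentified device to its VMware Compatibility Guide entry, then confirm the mapping. A hedged sketch (requires a vSAN host; the device ID is the example from the help text and the VCG ID `12345` is an assumption):

```shell
# Sketch only - must run on a vSAN host; the VCG ID 12345 is an assumption.
esxcli vsan hardware vcg add -d nqn.2014-08.org.nvmexpress_8086_Dell_Express_Flash_NVMe_P4610_1.6TB_SFF_BTLN9443030C1P6AGN -v 12345
# Verify the mapping ("N/A" means the device ID is not mapped):
esxcli vsan hardware vcg get -d nqn.2014-08.org.nvmexpress_8086_Dell_Express_Flash_NVMe_P4610_1.6TB_SFF_BTLN9443030C1P6AGN
```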

    esxcli vsan storagepool

    [root@ESXI-8:~] esxcli vsan storagepool list --help
    Usage: esxcli vsan storagepool list [cmd options]
    
    Description:
      list                  List vSAN storage pool configuration.
    
    Cmd options:
      -d|--device=<str>     Filter the output of this command to only show a single device with specified device name.
      -u|--uuid=<str>       Filter the output of this command to only show a single device with specified UUID.
    [root@ESXI-8:~] esxcli vsan storagepool mount --help
    Usage: esxcli vsan storagepool mount [cmd options]
    
    Description:
      mount                 Mount vSAN disk from storage pool.
    
    Cmd options:
      -d|--disk=[ <str> ... ]
                            Name of disk to mount from storage pool. e.g.: mpx.vmhba2:C0:T1:L0. Multiple devices can be provided using format -d device1 -d device2 -d device3.
      -u|--uuid=[ <str> ... ]
                            The vSAN UUID of disk to mount from storage pool. e.g.: 52afa1de-4240-d5d6-17f9-8af1ec8509e5. Multiple UUIDs can be provided using format -u uuid1 -u uuid2 -u uuid3.
    [root@ESXI-8:~] esxcli vsan storagepool rebuild --help
    Usage: esxcli vsan storagepool rebuild [cmd options]
    
    Description:
      rebuild               Rebuild vSAN storage pool disks.
    
    Cmd options:
      -d|--disk=<str>       Name of disk to rebuild for use by vSAN storage pool. E.g.: mpx.vmhba2:C0:T1:L0.
      -m|--evacuation-mode=<str>
                            Action to take upon removing storage pool from vSAN (default noAction). Available modes are
                                EnsureObjectAccessibility: Evacuate data from the disk to ensure object accessibility in the vSAN cluster, before removing the disk.
                                EvacuateAllData: Evacuate all data from the disk before removing it.
                                NoAction: Do not move vSAN data out of the disk before removing it.
      -u|--uuid=<str>       The vSAN UUID of the disk to rebuild for use by vSAN storage pool. E.g.: 5291022a-ad03-df90-dd0f-b9f980cc005e.
    [root@ESXI-8:~] esxcli vsan storagepool remove --help
    Usage: esxcli vsan storagepool remove [cmd options]
    
    Description:
      remove                Remove physical disk from storage pool usage. Exactly one of --disk or --uuid param is required.
    
    Cmd options:
      -d|--disk=<str>       Specify individual vSAN disk to remove from storage pool. e.g.: mpx.vmhba2:C0:T1:L0.
      -m|--evacuation-mode=<str>
                            Action the vSAN service must take before the disk can be removed (default noAction). Allowed values are:
                            ensureObjectAccessibility: Evacuate data from the disk to ensure object accessibility in the vSAN cluster, before removing the disk.
                            evacuateAllData: Evacuate all data from the disk before removing it.
                            noAction: Do not move vSAN data out of the disk before removing it.
      -f|--force            Forcefully remove unhealthy disk that has run into permanent metadata read/write errors.
                            Use -f|--force option only if remove disk operation failed repeatedly without force option.
                            Only 'noAction' evacuation mode is supported with -f|--force option.
      -u|--uuid=<str>       Specify UUID of vSAN disk to remove from storage pool. e.g.: 52afa1de-4240-d5d6-17f9-8af1ec8509e5.
    [root@ESXI-8:~] esxcli vsan storagepool unmount --help
    Usage: esxcli vsan storagepool unmount [cmd options]
    
    Description:
      unmount               Unmount vSAN disk from storage pool.
    
    Cmd options:
      -d|--disk=<str>       Name of disk to unmount from storage pool. e.g.: mpx.vmhba2:C0:T1:L0.
      -m|--evacuation-mode=<str>
                            Action to take upon unmounting storage pool from vSAN (default noAction). Available modes are
                                EnsureObjectAccessibility: Evacuate data from the disk to ensure object accessibility in the vSAN cluster, before unmounting the disk.
                                EvacuateAllData: Evacuate all data from the disk before unmounting it.
                                NoAction: Do not move vSAN data out of the disk before unmounting it.
      -f|--force            Forcefully unmount unhealthy disk that has run into permanent metadata read/write errors.
                            Use -f|--force option only if unmount disk operation failed repeatedly without force option.
                            Only 'noAction' evacuation mode is supported with -f|--force option.
      -u|--uuid=<str>       The vSAN UUID of disk to unmount from storage pool. e.g.: 52afa1de-4240-d5d6-17f9-8af1ec8509e5.
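Tying the storage pool commands together: list pool membership first, then take a disk out of service with an explicit evacuation mode rather than the `noAction` default. A hedged sketch (requires a vSAN ESA host; the disk name is an assumption):

```shell
# Sketch only - must run on a vSAN ESA host; the disk name is an assumption.
esxcli vsan storagepool list
# Unmount a disk only after evacuating all of its data:
esxcli vsan storagepool unmount -d mpx.vmhba2:C0:T1:L0 -m EvacuateAllData
```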

    For reference, see the full ESXCLI command list for ESXi 8.0.

    For reference, see the full ESXCLI command list for ESXi 7.0.

    For reference, see the full ESXCLI command list for ESXi 6.x.