VMware ESXi and Intel Optane NVMe – intelmas firmware update

How to install the intelmas tool

[~] esxcli software component apply -d /vmfs/volumes/SSD/_ISO/intel-mas-tool_2.2.18-1OEM.700.0.0.15843807_20956742.zip
Installation Result
   Components Installed: intel-mas-tool_2.2.18-1OEM.700.0.0.15843807
   Components Removed:
   Components Skipped:
   Message: Operation finished successfully.
   Reboot Required: false
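
To verify that the component registered and to locate the tool binary (the /opt/intel/intelmas/ path is the one used by the commands below):

esxcli software component list | grep -i intel-mas
ls /opt/intel/intelmas/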

General information about the drive

[~] /opt/intel/intelmas/intelmas show -intelssd 1

- 1 Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Bootloader : EB3B0416
Capacity : 260.83 GB (280,065,171,456 bytes)
DevicePath : nvmeMgmt-nvmhba5
DeviceStatus : Healthy
Firmware : E201HPS2
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 1
MaximumLBA : 547002287
ModelNumber : INTEL SSDPED1D280GAH
NamespaceId : 1
PercentOverProvisioned : 0.00
ProductFamily : Intel(R) Optane(TM) SSD 905P Series
SMARTEnabled : True
SectorDataSize : 512
SerialNumber : PHMB839000LW280IGN
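
If you are not sure which index a drive has, running show without an index should enumerate all detected Intel SSDs (per the Intel MAS CLI documentation; verify against your tool version):

/opt/intel/intelmas/intelmas show -intelssd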

S.M.A.R.T. information

[~] /opt/intel/intelmas/intelmas show -nvmelog SmartHealthInfo -intelssd 1

-  PHMB839000LW280IGN -

- NVMeLog SMART and Health Information -

Volatile memory backup device has failed : False
Temperature has exceeded a critical threshold : False
Temperature - Celsius : 30
Media is in a read-only mode : False
Power On Hours : 0x0100
Power Cycles : 0x03
Number of Error Info Log Entries : 0x0
Controller Busy Time : 0x0
Available Spare Space has fallen below the threshold : False
Percentage Used : 0
Critical Warnings : 0
Data Units Read : 0x02
Available Spare Threshold Percentage : 0
Data Units Written : 0x0
Unsafe Shutdowns : 0x0
Host Write Commands : 0x0
Device reliability has degraded : False
Available Spare Normalized percentage of the remaining spare capacity available : 100
Media Errors : 0x0
Host Read Commands : 0x017F
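
Note that several of these counters are reported in hexadecimal. To read them as decimal in the ESXi shell (assuming the busybox printf applet, which accepts 0x-prefixed values):

printf '%d\n' 0x0100   # Power On Hours -> 256
printf '%d\n' 0x017F   # Host Read Commands -> 383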

Show all the SMART properties for the Intel® SSD at index 1

[~] /opt/intel/intelmas/intelmas show  -intelssd 1 -smart

- SMART Attributes PHMB839000LW280IGN -

- B8 -

Action : Pass
Description : End-to-End Error Detection Count
ID : B8
Normalized : 100
Raw : 0

- C7 -

Action : Pass
Description : CRC Error Count
ID : C7
Normalized : 100
Raw : 0

- E2 -

Action : Pass
Description : Timed Workload - Media Wear
ID : E2
Normalized : 100
Raw : 0

- E3 -

Action : Pass
Description : Timed Workload - Host Read/Write Ratio
ID : E3
Normalized : 100
Raw : 0

- E4 -

Action : Pass
Description : Timed Workload Timer
ID : E4
Normalized : 100
Raw : 0

- EA -

Action : Pass
Description : Thermal Throttle Status
ID : EA
Normalized : 100
Raw : 0
ThrottleStatus : 0 %
ThrottlingEventCount : 0

- F0 -

Action : Pass
Description : Retry Buffer Overflow Count
ID : F0
Normalized : 100
Raw : 0

- F3 -

Action : Pass
Description : PLI Lock Loss Count
ID : F3
Normalized : 100
Raw : 0

- F5 -

Action : Pass
Description : Host Bytes Written
ID : F5
Normalized : 100
Raw : 0
Raw (Bytes) : 0

- F6 -

Action : Pass
Description : System Area Life Remaining
ID : F6
Normalized : 100
Raw : 0

Disk firmware update

[~] /opt/intel/intelmas/intelmas load -intelssd 1
WARNING! You have selected to update the drives firmware!
Proceed with the update? (Y|N): Y
Checking for firmware update...

- Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Status : The selected drive contains current firmware as of this tool release.
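
For scripted maintenance, the Intel MAS CLI also documents a -force option that suppresses the confirmation prompt (treat this as an assumption and verify against your tool version before automating):

/opt/intel/intelmas/intelmas load -force -intelssd 1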

How to add Maxtang’s NX6412 NUC to a vDS? Fix the /etc/rc.local.d/local.sh script

How to fix the network after adding the host to a vDS: when you add the NX6412 to a vDS and reboot ESXi, the vDS comes up without its vusb0 uplink. You can check it with:

# esxcfg-vswitch -l
DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vDS              2560        6           512               9000    vusb0
--cut
  DVPort ID                               In Use      Client
  468                                     0           
  469                                     0
  470                                     0
  471                                     0

Note a free DVPort ID (468 in this example); vDS is the name of your vDS switch. The -P option names the physical NIC to bind and -V the DVPort ID to bind it to:

esxcfg-vswitch -P vusb0 -V 468 vDS

It is necessary to add the command to /etc/rc.local.d/local.sh before exit 0, so that it survives reboots. You may already have a similar script from the source Persisting USB NIC Bindings:

# Wait until the vusb0 link is up (at most 20 tries x 10 s = 200 s)
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
    sleep 10
    count=$(( $count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done

# Restore the virtual switch configuration, then re-bind vusb0 to DVPort 468 on the vDS
esxcfg-vswitch -R
esxcfg-vswitch -P vusb0 -V 468 vDS

exit 0

What’s the story with Optane?

I am using an Intel Optane SSD 900P PCIe in my HomeLAB as a ZIL/L2ARC drive for TrueNAS, but in July 2022 Intel announced its intention to wind down the Optane business. I will try to summarize information about Intel Optane from Simon Todd’s presentation.

My HomeLAB benchmark: Optane 900P as TrueNAS ZIL/L2ARC with HDDs

Optane helps a lot with IOPS for a RAID of ordinary HDDs. I reached a peak write performance of 2.5 GB/s.

Writer Report – iozone -Raz -b lab.wks -g 1G – Optane 900P as TrueNAS ZIL/L2ARC with HDDs (x-axis: file size in KB; z-axis: MB/s)

We can see great write performance of about 1.7 GB/s for the 40 GB file size set (1734542 kB/s write in the iozone output below).

# perftests-nas ; cat iozone.txt
        Run began: Sun Dec 18 08:02:39 2022

        Record Size 128 kB
        File size set to 41943040 kB
        Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

              kB  reclen    write  rewrite    read    reread
        41943040     128  1734542  1364683  2413381  2371527

iozone test complete.
# dd if=/dev/zero of=foo bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 1.517452 secs (707595169 bytes/sec) 707 MB/s

# dd if=/dev/zero of=foo bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes transferred in 0.012079 secs (42386853 bytes/sec) 42 MB/s

Intel® Optane™ Business Update: What Does This Mean for Warranty and Support

As announced in Intel’s Q2 2022 earnings, after careful consideration, Intel plans to cease future development of our Optane products. We will continue development of Crow Pass on Sapphire Rapids as we engage with key customers to determine their plans to deploy this product. While we believe Optane is a superb technology, it has become impractical to deliver products at the necessary scale as a single-source supplier of Optane technology.

We are committed to supporting Optane customers and ecosystem partners through the transition of our existing memory and storage product lines through end-of-life. We continue to sell existing Optane products, and support and the 5-year warranty terms from date of sale remain unchanged.

Get to know Intel® Optane™ technology
Source: Simon Todd – vExpert – Intel Webinar Slides

What makes Optane SSDs different?

    NAND SSD

    NAND garbage collection requires background writes, and the NAND block-erase process results in slower writes and inconsistent performance.

    Intel® Optane™ technology

    Intel® Optane™ technology does not use garbage collection
    Rapid, in-place writes enable consistently fast response times

    Intel® Optane™ SSDs are different by design
    Source: Simon Todd – vExpert – Intel Webinar Slides
    Consistent performance, even under heavy write loads
    Source: Simon Todd – vExpert – Intel Webinar Slides
    Model                              Dies per channel   Channels   Raw Capacity   Spare Area
    Intel Optane SSD 900p 280GB        3                  7          336 GB         56 GB
    Intel Optane SSD DC P4800X 375GB   4                  7          448 GB         73 GB
    Intel Optane SSD 900p 480GB        5                  7          560 GB         80 GB
    Intel Optane SSD DC P4800X 750GB   8                  7          896 GB         146 GB
    The Optane SSD DC P4800X and the Optane SSD 900p both use the same 7-channel controller, which leads to some unusual drive capacities. The 900p comes with either 3 or 5 memory dies per channel, while the P4800X has 4 or 8. All models reserve about 1/6th of the raw capacity for internal use. Source
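
    As a worked example for the 900p 280 GB model: 7 channels × 3 dies = 21 dies, so 336 GB / 21 = 16 GB per die; 336 GB raw - 280 GB usable = 56 GB reserved, which is exactly 336/6.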

    Intel Optane SSD DC P4800X / 900P Hands-On Review

    Wow, Optane is fast …

    The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but either is far faster than the flash-based SSDs. Source

    Maxtang’s NX 6412 NUC – update ESXi 8.0a

    VMware ESXi 8.0a release was announced:

    How to prepare ESXi Custom ISO image 8.0a for NX6412 NUC?

    Download these files:

    Run this script to prepare the custom image; you should use PowerCLI version 13.0. Problems with the PowerCLI upgrade can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0a-20842819-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0a-20842819-standard" -name "ESXi-8.0.0-20842819-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20842819-USBNIC.zip
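
    Before exporting, you can sanity-check the cloned profile with the ImageBuilder cmdlets already loaded above (adjust the profile name if yours differs):

    Get-EsxImageProfile -Name "ESXi-8.0.0-20842819-USBNIC" | Select-Object Name, Vendor, CreationTime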

    Upgrade to ESXi 8.0a

    You may see a TPM_VERSION WARNING: Support for TPM version 1.2 is discontinued. Apply the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20842819-USBNIC.zip -p ESXi-8.0.0-20842819-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true
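
    Tip: esxcli software profile update also accepts a --dry-run flag, which reports what would be installed or removed without changing the host – worth running first on a remote box:

    esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20842819-USBNIC.zip -p ESXi-8.0.0-20842819-USBNIC --dry-run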

    vSphere 8 Lab with Cohesity and VMware vExpert gift – Maxtang’s NX 6412 NUC

    During VMware Explore 2022 Barcelona, I was given a gift as a vExpert. You can read about it in my previous article. ESXi doesn’t support the NX6412’s onboard NICs, so we need a custom ISO with the USB Network Native Driver for ESXi. Because of a problem exporting an ISO with the latest PowerCLI 13 release (Nov 25, 2022), I decided to install the custom ISO ESXi 7U2e and then upgrade to ESXi 8.0 with the depot zip.

    Thank you, Cohesity. Power consumption is only 10 watts …

    How to prepare ESXi Custom ISO image 7U2e for NX6412 NUC?

    Download these files:

    Run this script to prepare the custom ISO image; you can use PowerCLI 12.7 or 13.0. You could use create_custom_esxi_iso.ps1 as well.

    Add-EsxSoftwareDepot .\VMware-ESXi-7.0U2e-19290878-depot.zip
    Add-EsxSoftwareDepot .\ESXi702-VMKUSB-NIC-FLING-47140841-component-18150468.zip
    New-EsxImageProfile -CloneProfile "ESXi-7.0U2e-19290878-standard" -name "ESXi-7.0U2e-19290878-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -ExportToIso -filepath ESXi-7.0U2e-19290878-USBNIC.iso
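
    To confirm the fling driver package is visible in the combined depot before adding it to a profile (a quick check using the same session):

    Get-EsxSoftwarePackage -Name "vmkusb-nic-fling"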

    Create a bootable ESXi USB flash drive from ESXi-7.0U2e-19290878-USBNIC.iso. More info: How to create a bootable ESXi Installer USB Flash Drive.

    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode
    Dialog shown only for a Custom ISO image

    Install ESXi 7U2e and fix Persisting USB NIC Bindings

    Currently there is a limitation in ESXi where USB NIC bindings are picked up much later in the boot process, and to ensure settings are preserved upon a reboot, the following needs to be added to /etc/rc.local.d/local.sh based on your configuration.

    # Wait until the vusb0 link is up (at most 20 tries x 10 s = 200 s)
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    count=0
    while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
    do
        sleep 10
        count=$(( $count + 1 ))
        vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    done
    
    # Restore the virtual switch configuration once the link is up
    esxcfg-vswitch -R

    Prepare ESXi Custom zip depot 8.0 for NX6412 NUC

    Download these files:

    Run this script to prepare the custom depot; you should use PowerCLI 13.0. Problems with the PowerCLI upgrade can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0-20513097-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -name "ESXi-8.0.0-20513097-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20513097-USBNIC.zip

    Upgrade to ESXi 8.0

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC
    
    Hardware precheck of profile ESXi-8.0.0-20513097-USBNIC failed with warnings: <TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.>

    You can fix the TPM_VERSION WARNING: Support for TPM version 1.2 is discontinued. Apply the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true
       VIBs Installed: VMW_bootbank_atlantic_1.0.3.0-10vmw.800.1.0.20513097, VMW_bootbank_bcm-mpi3_8.1.1.0.0.0-1vmw.800.1.0.20513097, VMW_bootbank_bfedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_bnxtnet_216.0.50.0-66vmw.800.1.0.20513097, VMW_bootbank_bnxtroce_216.0.58.0-27vmw.800.1.0.20513097, VMW_bootbank_brcmfcoe_12.0.1500.3-4vmw.800.1.0.20513097, VMW_bootbank_cndi-igc_1.2.9.0-1vmw.800.1.0.20513097, VMW_bootbank_dwi2c-esxio_0.1-2vmw.800.1.0.20513097, VMW_bootbank_dwi2c_0.1-2vmw.800.1.0.20513097, VMW_bootbank_elxiscsi_12.0.1200.0-10vmw.800.1.0.20513097, VMW_bootbank_elxnet_12.0.1250.0-8vmw.800.1.0.20513097, VMW_bootbank_i40en_1.11.2.5-1vmw.800.1.0.20513097, VMW_bootbank_iavmd_3.0.0.1010-5vmw.800.1.0.20513097, VMW_bootbank_icen_1.5.1.16-1vmw.800.1.0.20513097, VMW_bootbank_igbn_1.4.11.6-1vmw.800.1.0.20513097, VMW_bootbank_ionic-en-esxio_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_ionic-en_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_irdman_1.3.1.22-1vmw.800.1.0.20513097, VMW_bootbank_iser_1.1.0.2-1vmw.800.1.0.20513097, VMW_bootbank_ixgben_1.7.1.39-1vmw.800.1.0.20513097, VMW_bootbank_lpfc_14.0.635.3-14vmw.800.1.0.20513097, VMW_bootbank_lpnic_11.4.62.0-1vmw.800.1.0.20513097, VMW_bootbank_lsi-mr3_7.722.02.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt2_20.00.06.00-4vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt35_23.00.00.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt3_17.00.13.00-2vmw.800.1.0.20513097, VMW_bootbank_mlnx-bfbootctl-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mnet-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.800.1.0.20513097, VMW_bootbank_ne1000_0.9.0-2vmw.800.1.0.20513097, VMW_bootbank_nenic_1.0.35.0-3vmw.800.1.0.20513097, VMW_bootbank_nfnic_5.0.0.35-3vmw.800.1.0.20513097, VMW_bootbank_nhpsa_70.0051.0.100-4vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core-esxio_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-rdma-esxio_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-rdma_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlxbf-gige-esxio_2.1-1vmw.800.1.0.20513097, VMW_bootbank_ntg3_4.1.8.0-4vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie-esxio_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvmerdma_1.0.3.9-1vmw.800.1.0.20513097, VMW_bootbank_nvmetcp_1.0.1.2-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens-esxio_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-esxio_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_penedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pengpio-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pensandoatlas_1.46.0.E.24.1.256-2vmw.800.1.0.20293628, VMW_bootbank_penspi-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pvscsi-esxio_0.1-5vmw.800.1.0.20513097, VMW_bootbank_pvscsi_0.1-5vmw.800.1.0.20513097, VMW_bootbank_qcnic_1.0.15.0-22vmw.800.1.0.20513097, VMW_bootbank_qedentv_3.40.5.70-4vmw.800.1.0.20513097, VMW_bootbank_qedrntv_3.40.5.70-1vmw.800.1.0.20513097, VMW_bootbank_qfle3_1.0.67.0-30vmw.800.1.0.20513097, VMW_bootbank_qfle3f_1.0.51.0-28vmw.800.1.0.20513097, VMW_bootbank_qfle3i_1.0.15.0-20vmw.800.1.0.20513097, VMW_bootbank_qflge_1.1.0.11-1vmw.800.1.0.20513097, VMW_bootbank_rd1173-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_rdmahl_1.0.0-1vmw.800.1.0.20513097, VMW_bootbank_rste_2.0.2.0088-7vmw.800.1.0.20513097, VMW_bootbank_sfvmk_2.4.0.2010-13vmw.800.1.0.20513097, 
VMW_bootbank_smartpqi_80.4253.0.5000-2vmw.800.1.0.20513097, VMW_bootbank_spidev-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmkata_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmksdhci-esxio_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmksdhci_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmkusb-esxio_0.1-14vmw.800.1.0.20513097, VMW_bootbank_vmkusb-nic-fling_1.11-1vmw.800.1.20.61054763, VMW_bootbank_vmkusb_0.1-14vmw.800.1.0.20513097, VMW_bootbank_vmw-ahci_2.0.14-1vmw.800.1.0.20513097, VMware_bootbank_bmcal-esxio_8.0.0-1.0.20513097, VMware_bootbank_bmcal_8.0.0-1.0.20513097, VMware_bootbank_clusterstore_8.0.0-1.0.20513097, VMware_bootbank_cpu-microcode_8.0.0-1.0.20513097, VMware_bootbank_crx_8.0.0-1.0.20513097, VMware_bootbank_drivervm-gpu_8.0.0-1.0.20513097, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-6vmw.800.1.0.20513097, VMware_bootbank_esx-base_8.0.0-1.0.20513097, VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esx-ui_2.5.1-20374953, VMware_bootbank_esx-update_8.0.0-1.0.20513097, VMware_bootbank_esx-xserver_8.0.0-1.0.20513097, VMware_bootbank_esxio-base_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner-esxio_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner_8.0.0-1.0.20513097, VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esxio-update_8.0.0-1.0.20513097, VMware_bootbank_esxio_8.0.0-1.0.20513097, VMware_bootbank_gc-esxio_8.0.0-1.0.20513097, VMware_bootbank_gc_8.0.0-1.0.20513097, VMware_bootbank_loadesx_8.0.0-1.0.20513097, VMware_bootbank_loadesxio_8.0.0-1.0.20513097, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.800.1.0.20513097, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.7.2173-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-12vmw.800.1.0.20513097, VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-8vmw.800.1.0.20513097, VMware_bootbank_native-misc-drivers-esxio_8.0.0-1.0.20513097, VMware_bootbank_native-misc-drivers_8.0.0-1.0.20513097, VMware_bootbank_qlnativefc_5.2.46.0-3vmw.800.1.0.20513097, VMware_bootbank_trx_8.0.0-1.0.20513097, VMware_bootbank_vdfs_8.0.0-1.0.20513097, VMware_bootbank_vmware-esx-esxcli-nvme-plugin-esxio_1.2.0.52-1vmw.800.1.0.20513097, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.52-1vmw.800.1.0.20513097, VMware_bootbank_vsan_8.0.0-1.0.20513097, VMware_bootbank_vsanhealth_8.0.0-1.0.20513097, VMware_locker_tools-light_12.0.6.20104755-20513097
       VIBs Removed: VMW_bootbank_atlantic_1.0.3.0-8vmw.702.0.0.17867351, VMW_bootbank_bnxtnet_216.0.50.0-34vmw.702.0.20.18426014, VMW_bootbank_bnxtroce_216.0.58.0-20vmw.702.0.20.18426014, VMW_bootbank_brcmfcoe_12.0.1500.1-2vmw.702.0.0.17867351, VMW_bootbank_brcmnvmefc_12.8.298.1-1vmw.702.0.0.17867351, VMW_bootbank_elxiscsi_12.0.1200.0-8vmw.702.0.0.17867351, VMW_bootbank_elxnet_12.0.1250.0-5vmw.702.0.0.17867351, VMW_bootbank_i40enu_1.8.1.137-1vmw.702.0.20.18426014, VMW_bootbank_iavmd_2.0.0.1152-1vmw.702.0.0.17867351, VMW_bootbank_icen_1.0.0.10-1vmw.702.0.0.17867351, VMW_bootbank_igbn_1.4.11.2-1vmw.702.0.0.17867351, VMW_bootbank_irdman_1.3.1.19-1vmw.702.0.0.17867351, VMW_bootbank_iser_1.1.0.1-1vmw.702.0.0.17867351, VMW_bootbank_ixgben_1.7.1.35-1vmw.702.0.0.17867351, VMW_bootbank_lpfc_12.8.298.3-2vmw.702.0.20.18426014, VMW_bootbank_lpnic_11.4.62.0-1vmw.702.0.0.17867351, VMW_bootbank_lsi-mr3_7.716.03.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt2_20.00.06.00-3vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt35_17.00.02.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt3_17.00.10.00-2vmw.702.0.0.17867351, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.702.0.0.17867351, VMW_bootbank_ne1000_0.8.4-11vmw.702.0.0.17867351, VMW_bootbank_nenic_1.0.33.0-1vmw.702.0.0.17867351, VMW_bootbank_nfnic_4.0.0.63-1vmw.702.0.0.17867351, VMW_bootbank_nhpsa_70.0051.0.100-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-core_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-en_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-rdma_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx5-core_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_nmlx5-rdma_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_ntg3_4.1.5.0-0vmw.702.0.0.17867351, VMW_bootbank_nvme-pcie_1.2.3.11-1vmw.702.0.0.17867351, VMW_bootbank_nvmerdma_1.0.2.1-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3-ens_2.0.0.22-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3_2.0.0.30-1vmw.702.0.0.17867351, VMW_bootbank_pvscsi_0.1-2vmw.702.0.0.17867351, VMW_bootbank_qcnic_1.0.15.0-11vmw.702.0.0.17867351, VMW_bootbank_qedentv_3.40.5.53-20vmw.702.0.20.18426014, VMW_bootbank_qedrntv_3.40.5.53-17vmw.702.0.20.18426014, VMW_bootbank_qfle3_1.0.67.0-14vmw.702.0.0.17867351, VMW_bootbank_qfle3f_1.0.51.0-19vmw.702.0.0.17867351, VMW_bootbank_qfle3i_1.0.15.0-12vmw.702.0.0.17867351, VMW_bootbank_qflge_1.1.0.11-1vmw.702.0.0.17867351, VMW_bootbank_rste_2.0.2.0088-7vmw.702.0.0.17867351, VMW_bootbank_sfvmk_2.4.0.2010-4vmw.702.0.0.17867351, VMW_bootbank_smartpqi_70.4000.0.100-6vmw.702.0.0.17867351, VMW_bootbank_vmkata_0.1-1vmw.702.0.0.17867351, VMW_bootbank_vmkfcoe_1.0.0.2-1vmw.702.0.0.17867351, VMW_bootbank_vmkusb-nic-fling_1.8-3vmw.702.0.20.47140841, VMW_bootbank_vmkusb_0.1-4vmw.702.0.20.18426014, VMW_bootbank_vmw-ahci_2.0.9-1vmw.702.0.0.17867351, VMware_bootbank_clusterstore_7.0.2-0.30.19290878, VMware_bootbank_cpu-microcode_7.0.2-0.30.19290878, VMware_bootbank_crx_7.0.2-0.30.19290878, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-4vmw.702.0.0.17867351, VMware_bootbank_esx-base_7.0.2-0.30.19290878, VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.2-0.30.19290878, VMware_bootbank_esx-ui_1.34.8-17417756, VMware_bootbank_esx-update_7.0.2-0.30.19290878, VMware_bootbank_esx-xserver_7.0.2-0.30.19290878, VMware_bootbank_gc_7.0.2-0.30.19290878, VMware_bootbank_loadesx_7.0.2-0.30.19290878, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.702.0.0.17867351, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.0.0-2vmw.702.0.0.17867351, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-5vmw.702.0.0.17867351, 
VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-hp-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-6vmw.702.0.0.17867351, VMware_bootbank_native-misc-drivers_7.0.2-0.30.19290878, VMware_bootbank_qlnativefc_4.1.14.0-5vmw.702.0.0.17867351, VMware_bootbank_vdfs_7.0.2-0.30.19290878, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.42-1vmw.702.0.0.17867351, VMware_bootbank_vsan_7.0.2-0.30.19290878, VMware_bootbank_vsanhealth_7.0.2-0.30.19290878, VMware_locker_tools-light_11.2.6.17901274-18295176
       VIBs Skipped:

    And reboot ESXi after the upgrade with the reboot command. Good luck!

    How to create a bootable ESXi Installer USB Flash Drive

    ESXi Image Download

    Create a bootable ESXi Installer USB Flash Drive with Windows

    • Press SELECT and open the ESXi ISO image
    • Select your flash drive
    • Check Partition scheme: GPT and Target system: UEFI
    • Press START
    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode
    Dialog shown only for a Custom ISO image

    VMware Cohesity vExpert Gift VMware EXPLORE 2022 Barcelona

    During VMware Explore 2022 Barcelona, I was given a gift as a vExpert.

    We could start a popcorn party with the NX6412 …

    A huge shout out to the vExpert program and to Cohesity for supporting vExperts with such an amazing gift – a small but powerful quad-core Intel NUC. It’s fanless, so it will be quiet too. Thank you!

    NX6412 Specification:

    • CPU: Intel Elkhart Lake J6412 Processor
    • Memory: Dual-channel SO-DIMM DDR4, up to 32GB – 64GB might work – I will have to confirm it later …
    • Display via: Intel Integrated Graphics display via 2xHDMI2.0
    • I/O Ports: 2xLAN, 2xUSB3.2, 2xUSB2.0, Type-C, SIM
    • Ethernet: 10/100/1000Mbps
    • Storage: 1x M.2 2242/2280 SSD, SATA optional
    • Power: 12V DC-in
    Hardware: Maxtang NX6412, 32 GB memory, 512 GB SSD, quad-core CPU, dual Gigabit Ethernet, dual HDMI 2.0

    Based on a small form factor, the compact design at 127 mm x 127 mm x 37 mm makes it great for saving space.

    Intel Elkhart Lake J6412 Processor

    Powered by the Intel Elkhart Lake Celeron J6412 processor, the NX6412 provides excellent performance with a long life expectancy. The processor has 4 cores / 4 threads, 1.5 MB L2 cache, and boosts up to 2.60 GHz at a 10 W TDP. It brings a 1.7x improvement in single-thread performance and a 1.5x improvement in multi-thread performance generation over generation, and a 2x performance improvement in graphics over the previous generation.

    CODE2769US Intel NUC Home Lab with Smart Sensors & Tanzu

    Links & information

    ESXi Arm Edition – fix /bin/netdbg does not work

    While testing ESXi Arm Edition, I found a bug: /bin/netdbg does not work. It can be fixed with:

    export LANG=en_US.UTF-8
    [root@localhost:~] netdbg
    Traceback (most recent call last):
      File "/bin/netdbg", line 32, in <module>
        RootCommandGroup()
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
    FileNotFoundError: [Errno 2] No such file or directory: 'locale'
    
    [root@localhost:~] export LANG=en_US.UTF-8
    
    [root@localhost:~] netdbg
    Usage: netdbg [OPTIONS] COMMAND [ARGS]...
    
      Command line interface to access settings on ESX datapath
    
    Options:
      --help  Show this message and exit.
    
    Commands:
      vswitch
    

    Question is support for:

    netdbg vswitch mac-table …
    netdbg vswitch mac-learning …

    I suppose that Native MAC Learning is NOT important on ARM, but it could be useful in the future for SmartNICs. Testing is here:

    [root@localhost:~] netdbg vswitch instance list
    DvsPortset-0 (vDS-LAB)           50 1b 4b 22 14 35 b5 ed-ec 99 d0 13 d2 ca 70 48
    Total Ports:2560 Available:2552
      Client                         PortID          DVPortID                             MAC                  Uplink
      Management                     67108867                                             00:00:00:00:00:00    n/a
      vmnic128                       2214592516      26                                   00:00:00:00:00:00
      Shadow of vmnic128             67108869                                             00:50:56:xx:xx:17    n/a
      vmk0                           67108870        14                                   dc:a6:32:xx:xx:4f    vmnic128
      vmk1                           67108871        33                                   00:50:56:xx:xx:df    vmnic128
      vmk2                           67108872        58                                   00:50:56:xx:xx:fc    vmnic128
      ubuntu-01.eth0                 67108874        266                                  00:0c:29:xx:xx:ed    vmnic128
    
    [root@localhost:~] netdbg vswitch mac-learning port get -p 266 -dvs _vmnet_ESXLAB1
    Traceback (most recent call last):
      File "/bin/netdbg", line 32, in <module>
        RootCommandGroup()
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 722, in __call__
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 697, in main
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 1071, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 1071, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 1071, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 1071, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 898, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/core.py", line 535, in invoke
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/click/decorators.py", line 17, in new_func
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys-boot/lib64/python3.5/site-packages/netdbg/vswitch/mac_learning.py", line 49, in MACLearningPortGetCommand
      File "/lib/python3.5/site-packages/net/lib/libvswitch.py", line 5188, in GetPortMACLearning
        raise DVPortFailure('Get MAC learning config', dvs_alias, dvport, status)
    net.lib.exceptions.DVPortFailure: _vmnet_ESXLAB1:266:195887107::fail to Get MAC learning config failed
    
    And similar error for:
    [root@localhost:~] netdbg vswitch mac-table port get -p 266 -dvs _vmnet_ESXLAB1
    -- cut
      File "/lib/python3.5/site-packages/net/lib/libvswitch.py", line 5452, in GetPortMACTable
        raise DVPortFailure('Get MAC table', dvs_alias, dvport, result[0])
    net.lib.exceptions.DVPortFailure: _vmnet_ESXLAB1:266:195887107::fail to Get MAC table failed
    

    vSphere Clustering Service (vCLS) Workaround for ESXi-Arm in vSphere 7.0 Update 1

    The vSphere Clustering Service (vCLS) is a new capability introduced in the vSphere 7 Update 1 release. The issue is that the vCLS VMs are x86 and cannot be deployed to an ESXi-Arm cluster, as the CPU architecture is not supported. But we can disable vCLS according to the documentation:

    Putting a Cluster in Retreat Mode

    This task explains how to put a cluster in retreat mode.

    Procedure

    • Login to the vSphere Client.
    • Navigate to the cluster on which vCLS must be disabled.
    • Copy the cluster domain ID from the URL of the browser. It should be similar to domain-c(number).
    • Navigate to the vCenter Server Configure tab.
    • Under Advanced Settings, click the Edit Settings button.
    • Add a new key “config.vcls.clusters.domain-c841.enabled”, using the domain ID copied in the previous step.
    • For the cluster with that domain ID, set the Value to False (see the PowerCLI sketch after this procedure).
    Check the new key “config.vcls.clusters.domain-c841.enabled” – False
    • Click Save.
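
    The same setting can be applied with PowerCLI – a minimal sketch, assuming an existing Connect-VIServer session to the vCenter and the example domain ID domain-c841 (replace it with yours):

    New-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vcls.clusters.domain-c841.enabled" -Value $false -Confirm:$false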

    ESXi on ARM

    ESXi on ARM was released. It is possible to download it from Flings: ESXi Arm Edition.

    Installation of the Fling on Raspberry Pi 4

    Preparation

    Raspberry Pi EEPROM update

    Download Raspberry Pi OS from https://www.raspberrypi.org/downloads/ and update the EEPROM:

    sudo rpi-eeprom-update -a
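
    You can check the bootloader status before and after (standard Raspberry Pi OS tools; rpi-eeprom-update without options only reports whether an update is pending):

    vcgencmd bootloader_version
    sudo rpi-eeprom-update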

    Create and set up UEFI on the SD card

    :: Create a staging directory and extract the Raspberry Pi boot firmware into it
    md SDcard
    7z.exe x firmware-master.zip firmware-master\boot\*
    xcopy firmware-master\boot\* SDcard\*  /E /H /C /I /Y
    
    :: Remove the Linux kernel images – ESXi-Arm boots through UEFI instead
    del SDcard\kernel*.img
    
    :: Overlay the Raspberry Pi 4 UEFI firmware files
    7z.exe x RPi4_UEFI_Firmware_v1.20.zip -oSDcard\* -y
    
    • For the Raspberry Pi 4GB model only: edit the config.txt file on the SD card and append gpu_mem=16
    • Copy directory SDcard to root directory on SD card
    • Boot Raspberry Pi from SD card

    UEFI firmware configuration

    • Disable 3GiB memory limit
      • Device Manager / Raspberry Pi Config / Advanced Config / Limit RAM to 3 GB
      • The Raspberry Pi 4 UEFI is configured with a default limit of 3GiB of memory for OS compatibility purposes. It is necessary to disable this limit.
    • Console Preference Selection
      • Device Manager / Raspberry Pi Config / Device Manager / Console Preference Selection / Display Configuration
        • Virtual 800×600 – enable it
        • Virtual 1024×768 – enable it

    Install ESXi-Arm

    • Basic ESXi installation
      • Enable ssh
      • Set up NTP – IMPORTANT: make sure the host clock is synchronized (see the check after this list)
    • Add ESXi ARM host to vCenter
    • Enabling vMotion
    • Create a VM – example Ubuntu for ARM
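
    A quick way to confirm time synchronization from the ESXi shell (esxcli namespaces as in ESXi 7.x; verify they behave the same on the Arm build):

    esxcli system ntp get
    esxcli hardware clock get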

    Known issues

    • JUMBO frame MTU – The ESX driver for Pi4’s NIC on the SoC (genet) currently does not support jumbo frames.
    • Native MAC Learning is not supported (unconfirmed – see the netdbg tests earlier in this article)
    • /bin/netdbg does not work – Could be fixed with
      • https://flings.vmware.com/esxi-arm-edition/bugs/1113
    [root@localhost:~] netdbg
    Traceback (most recent call last):
      File "/bin/netdbg", line 32, in <module>
        RootCommandGroup()
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
      File "/build/mts/release/bora-16966451/bora/build/esxarm64/release/vmvisor/sys
    FileNotFoundError: [Errno 2] No such file or directory: 'locale'
    
    [root@localhost:~] export LANG=en_US.UTF-8
    [root@localhost:~] netdbg
    Usage: netdbg [OPTIONS] COMMAND [ARGS]...
    
      Command line interface to access settings on ESX datapath
    
    Options:
      --help  Show this message and exit.
    
    Commands:
      vswitch