Simplified Guide: How to Convert VM Snapshots into Memory Dumps Using vmss2core

Introduction

In the complex world of virtualization, developers often face the challenge of debugging guest operating systems and applications. A practical solution lies in converting virtual machine snapshots to memory dumps. This blog post delves into how you can efficiently use the vmss2core tool to transform a VM checkpoint, be it a snapshot or suspend file, into a core dump file, compatible with standard debuggers.

Step-by-Step Guide


Step 1: Create and Download the Virtual Machine Snapshot Files (.vmsn and .vmem)
  1. Select the Problematic Virtual Machine
    • In your VMware environment, identify and select the virtual machine experiencing issues.
  2. Replicate the Issue
    • Attempt to replicate the problem within the virtual machine to ensure the snapshot captures the relevant state.
  3. Take a Snapshot
    • Right-click on the virtual machine.
    • Navigate to Snapshots → Take snapshot
    • Enter a name for the snapshot.
    • Ensure “Snapshot the Virtual Machine’s memory” is checked
    • Click ‘CREATE’ to proceed.
  4. Access VM Settings
    • Right-click on the virtual machine again.
    • Select ‘Edit Settings’
  5. Navigate to Datastores
    • Choose the virtual machine and click on ‘Datastores’.
    • Click on the datastore name
  6. Download the Snapshot
    • Locate the .vmsn and .vmem files (the VMware snapshot files).
    • Select the file, click ‘Download’, and save it locally.
Step 2: Locate Your vmss2core Installation
  • For Windows (32bit): Navigate to C:\Program Files\VMware\VMware Workstation\
  • For Windows (64bit): Go to C:\Program Files (x86)\VMware\VMware Workstation\
  • For Linux: Access /usr/bin/
  • For Mac OS: Find it in /Library/Application Support/VMware Fusion/

Note: If vmss2core isn’t in these directories, download it from New Flings Link (use at your own risk).
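To quickly confirm the binary is present before proceeding, you can simply list it from a shell; a minimal check, assuming the default install paths above:

Linux:   ls -l /usr/bin/vmss2core*
macOS:   ls -l "/Library/Application Support/VMware Fusion/vmss2core"*
Windows: dir "C:\Program Files (x86)\VMware\VMware Workstation\vmss2core*"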

Step 3: Run the vmss2core Tool
vmss2core.exe -N VM-Snapshot1.vmsn VM-Snapshot1.vmem
vmss2core version 20800274 Copyright (C) 1998-2022 VMware, Inc. All rights reserved.
Started core writing.
Writing note section header.
Writing 1 memory section headers.
Writing notes.
... 100 MBs written.
... 200 MBs written.
... 300 MBs written.
... 400 MBs written.
... 500 MBs written.
... 600 MBs written.
... 700 MBs written.
... 800 MBs written.
... 900 MBs written.
... 1000 MBs written.
... 1100 MBs written.
... 1200 MBs written.
... 1300 MBs written.
... 1400 MBs written.
... 1500 MBs written.
... 1600 MBs written.
... 1700 MBs written.
... 1800 MBs written.
... 1900 MBs written.
... 2000 MBs written.
Finished writing core.
  • For general use: vmss2core.exe -W [VM_name].vmsn [VM_name].vmem
  • For Windows 8/8.1, Server 2012, 2016, 2019: vmss2core.exe -W8 [VM_name].vmsn [VM_name].vmem
  • For Linux: ./vmss2core-Linux64 -N [VM_name].vmsn [VM_name].vmem

Note: Replace [VM_name] with your virtual machine’s name. The flag -W, -W8, or -N corresponds to the guest OS.
# vmss2core.exe
vmss2core version 20800274 Copyright (C) 1998-2022 VMware, Inc. All rights reserved.
A tool to convert VMware checkpoint state files into formats
that third party debugger tools understand. It can handle both
suspend (.vmss) and snapshot (.vmsn) checkpoint state files
(hereafter referred to as a 'vmss file') as well as both
monolithic and non-monolithic (separate .vmem file) encapsulation
of checkpoint state data.

Usage:

GENERAL:  vmss2core [[options] | [-l linuxoffsets options]] \
              <vmss file> [<vmem file>]

The "-l" option specifies offsets (a stringset) within the
Linux kernel data structures, which is used by -P and -N modes.
It is ignored with other modes. Please use "getlinuxoffsets"
to automatically generate the correct stringset value for your
kernel, see README.txt for additional information.

Without options one vmss.core<N> per vCPU with linear view of
memory is generated. Other types of core files and output can
be produced with these options:

-q      Quiet(er) operation.
-M      Create core file with physical memory view (vmss.core).
-l str  Offset stringset expressed as 0xHEXNUM,0xHEXNUM,... .
-N      Red Hat crash core file for arbitrary Linux version
        described by the "-l" option (vmss.core).
-N4     Red Hat crash core file for Linux 2.4 (vmss.core).
-N6     Red Hat crash core file for Linux 2.6 (vmss.core).
-O <x>  Use <x> as the offset of the entrypoint.
-U <i>  Create linear core file for vCPU <i> only.
-P      Print list of processes in Linux VM.
-P<pid> Create core file for Linux process <pid> (core.<pid>).
-S      Create core for 64-bit Solaris (vmcore.0, unix.0).
        Optionally specify the version: -S112 -S64SYM112
        for 11.2.
-S32    Create core for 32-bit Solaris (vmcore.0, unix.0).
-S64SYM Create text symbols for 64-bit Solaris (solaris.text).
-S32SYM Create text symbols for 32-bit Solaris (solaris.text).
-W      Create WinDbg file (memory.dmp) with commonly used
        build numbers ("2195" for Win32, "6000" for Win64).
-W<num> Create WinDbg file (memory.dmp), with <num> as the
        build number (for example: "-W2600").
-WK     Create a Windows kernel memory only dump file (memory.dmp).
-WDDB<num> or -W8DDB<num>
        Create WinDbg file (memory.dmp), with <num> as the
        debugger data block address in hex (for example: "-W12ac34de").
-WSCAN  Scan all of memory for Windows debugger data blocks
        (instead of just low 256 MB).
-W8     Generate a memory dump file from a suspended Windows 8 VM.
-X32    <mach_kernel> Create core for 32-bit Mac OS.
-X64    <mach_kernel> Create core for 64-bit Mac OS.
-F      Create core for an EFI firmware exception.
-F<adr> Create core for an EFI firmware exception with system context
        at the given guest virtual address.
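Once the core file is written, you can open it in a debugger that matches the guest OS. A brief sketch (assuming WinDbg is installed for a Windows guest, and the crash utility plus a vmlinux with debug symbols matching the guest kernel for a Linux guest):

For a Windows guest, open the dump in WinDbg and then run !analyze -v in the command window:
windbg -z memory.dmp

For a Linux guest, feed the core to crash together with the matching kernel image:
crash <path-to-guest-vmlinux> vmss.core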


Enhancing the Raspberry Pi 5 with PCIe Gen 3.0 Speeds

The Raspberry Pi 5, a remarkable addition to the Raspberry Pi series, boasts an advanced configuration with five active PCI Express lanes. These lanes are ingeniously distributed with four dedicated to the innovative RP1 chip, which supports a variety of I/O functionalities such as USB, Ethernet, MIPI Camera and Display, and GPIO. Additionally, there’s a fifth lane that interfaces with a novel external PCIe connector.

In its default setup, the Raspberry Pi 5 operates all PCIe lanes at Gen 2.0 speeds, a transfer rate of approximately 5 GT/s per lane. This setting is fixed for the internal lanes connected to the RP1 chip. However, for users seeking enhanced performance, there’s a simple tweak available for the external PCIe connector. By adding a couple of lines to the /boot/config.txt file and rebooting your device, you can elevate the external connector to Gen 3.0 speeds. This upgrade boosts the transfer rate to 8 GT/s, nearly doubling the default speed.

To achieve this, insert the following commands in your /boot/config.txt file:

dtparam=pciex1
dtparam=pciex1_gen=3

After these adjustments and a system reboot, your Raspberry Pi 5 will operate the external PCIe lane at the faster Gen 3.0 speed, unlocking new potential for your projects and applications.
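To confirm the faster link is actually negotiated after the reboot, you can check the link status with lspci. A quick sketch; the device address 0000:01:00.0 is only an example and depends on what is attached to the external connector:

sudo lspci -vv -s 0000:01:00.0 | grep -i LnkSta

A line such as "LnkSta: Speed 8GT/s, Width x1" confirms the connector is running at Gen 3.0.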


Private AI in HomeLAB: Affordable GPU Solution with NVIDIA Tesla P40

For Private AI in HomeLAB, I was searching for budget-friendly GPUs with a minimum of 24GB RAM. Recently, I came across the refurbished NVIDIA Tesla P40 on eBay, which boasts some intriguing specifications:

  • GPU Chip: GP102
  • Cores: 3840
  • TMUs: 240
  • ROPs: 96
  • Memory Size: 24 GB
  • Memory Type: GDDR5
  • Bus Width: 384 bit

Since the NVIDIA Tesla P40 comes in a full-height form factor, I needed to acquire a PCIe riser card.

A PCIe riser card, commonly known as a “riser card,” is a hardware component essential in computer systems for facilitating the connection of expansion cards to the motherboard. Its primary role comes into play when space limitations or specific orientation requirements prevent the direct installation of expansion cards into the PCIe slots on the motherboard.

Furthermore, I needed to ensure adequate cooling, but this posed no issue. I utilized a 3D model created by MiHu_Works for a Tesla P100 blower fan adapter, which you can find at this link: Tesla P100 Blower Fan Adapter.

As for the fan, the Titan TFD-B7530M12C served the purpose effectively. You can find it on Amazon: Titan TFD-B7530M12C.

Currently, I am using a single VM with PCIe pass-through. However, it was necessary to implement specific Advanced VM parameters:

  • pciPassthru.use64bitMMIO = true
  • pciPassthru.64bitMMIOSizeGB = 64
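For reference, these correspond to two entries in the VM’s .vmx file (they can also be set as Advanced Parameters in the vSphere UI); a minimal sketch of just the relevant lines:

pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"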

Now, you might wonder about the performance. It’s outstanding, up to 16x-26x faster than the CPU. To give you an idea of the performance, I ran a llama-bench test:

| pp 512                | CPU t/s | GPU t/s | Acceleration |
| --------------------- | ------: | ------: | -----------: |
| llama 7B mostly Q4_0  |    9.50 |  155.37 |          16x |
| llama 13B mostly Q4_0 |    5.18 |  134.74 |          26x |
./llama-bench -t 8
| model                          |       size |     params | backend    | threads | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ---------- | ---------------: |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CPU        |       8 | pp 512     |      9.50 ± 0.07 |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CPU        |       8 | tg 128     |      8.74 ± 0.12 |
./llama-bench -ngl 3800
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1
| model                          |       size |     params | backend    |  ngl | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---: | ---------- | ---------------: |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CUDA       | 3800 | pp 512     |    155.37 ± 1.26 |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CUDA       | 3800 | tg 128     |      9.31 ± 0.19 |
./llama-bench -t 8 -m ./models/13B/ggml-model-q4_0.gguf
| model                          |       size |     params | backend    | threads | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ---------- | ---------------: |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CPU        |       8 | pp 512     |      5.18 ± 0.00 |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CPU        |       8 | tg 128     |      4.63 ± 0.14 |
./llama-bench -ngl 3800 -m ./models/13B/ggml-model-q4_0.gguf
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1
| model                          |       size |     params | backend    |  ngl | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---: | ---------- | ---------------: |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CUDA       | 3800 | pp 512     |    134.74 ± 1.29 |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CUDA       | 3800 | tg 128     |      8.42 ± 0.10 |

Feel free to explore this setup for your Private AI in HomeLAB.

DELL R610 or R710: How to Convert an H200A to H200I for Dedicated Slot Use

For my project involving the AI tool llama.cpp, I needed to free up a PCI slot for an NVIDIA Tesla P40 GPU. I found an excellent guide and a useful video from ArtOfServer.

Based on this helpful video from ArtOfServer:

ArtOfServer wrote a small tutorial on how to modify an H200A (external) into an H200I (internal) so it can be used in the dedicated slot (e.g. instead of a Perc6i).

Install the compiler and build tools (these can be removed later):

# apt install build-essential unzip

Compile and install lsirec and lsiutil

# mkdir lsi
# cd lsi
# wget https://github.com/marcan/lsirec/archive/master.zip
# wget https://github.com/exactassembly/meta-xa-stm/raw/master/recipes-support/lsiutil/files/lsiutil-1.72.tar.gz
# tar -zxvvf lsiutil-1.72.tar.gz
# unzip master.zip
# cd lsirec-master
# make
# chmod +x sbrtool.py
# cp -p lsirec /usr/bin/
# cp -p sbrtool.py /usr/bin/
# cd ../lsiutil
# make -f Makefile_Linux

Modify SBR to match an internal H200I

Get bus address:

# lspci -Dmmnn | grep LSI
0000:05:00.0 "Serial Attached SCSI controller [0107]" "LSI Logic / Symbios Logic [1000]" "SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [0072]" -r03 "Dell [1028]" "6Gbps SAS HBA Adapter [1f1c]"

Bus address 0000:05:00.0

We are going to change the subsystem device ID from 0x1f1c to 0x1f1e.

Unbind and halt card:

# lsirec 0000:05:00.0 unbind
Trying unlock in MPT mode...
Device in MPT mode
Kernel driver unbound from device
# lsirec 0000:05:00.0 halt
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET

Read sbr:

# lsirec 0000:05:00.0 readsbr h200.sbr
Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Reading SBR...
SBR saved to h200.sbr

Transform binary sbr to text file:

# sbrtool.py parse h200.sbr h200.cfg

Modify the SubsysPID in line 9 (e.g. using vi or vim):
from this:
SubsysPID = 0x1f1c
to this:
SubsysPID = 0x1f1e

Important: if the cfg file contains a line with:
SASAddr = 0xfffffffffffff
remove it!
Then save and close the file.
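If you prefer a non-interactive edit, the same changes can be scripted with sed; a minimal sketch, assuming the cfg layout shown above (the first command flips the subsystem PID, the second drops a SASAddr line if present):

# sed -i 's/SubsysPID = 0x1f1c/SubsysPID = 0x1f1e/' h200.cfg
# sed -i '/^SASAddr/d' h200.cfg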

Build new sbr file:

# sbrtool.py build h200.cfg h200-int.sbr

Write it back to card:

# lsirec 0000:05:00.0 writesbr h200-int.sbr
Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Writing SBR...
SBR written from h200-int.sbr

Reset the card and rescan the bus:

# lsirec 0000:05:00.0 reset
Device in MPT mode
Resetting adapter...
IOC is RESET
IOC is READY
# lsirec 0000:05:00.0 info
Trying unlock in MPT mode...
Device in MPT mode
Registers:
DOORBELL: 0x10000000
DIAG: 0x000000b0
DCR_I2C_SELECT: 0x80030a0c
DCR_SBR_SELECT: 0x2100001b
CHIP_I2C_PINS: 0x00000003
IOC is READY
# lsirec 0000:05:00.0 rescan
Device in MPT mode
Removing PCI device...
Rescanning PCI bus...
PCI bus rescan complete.

Verify the new ID (H200I):

# lspci -Dmmnn | grep LSI
0000:05:00.0 "Serial Attached SCSI controller [0107]" "LSI Logic / Symbios Logic [1000]" "SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [0072]" -r03 "Dell [1028]" "PERC H200 Integrated [1f1e]"

You can now move the card to the dedicated slot 🙂

Thanks to ArtOfServer for a great video.

VMware ESXi and Intel Optane NVMe – intelmas firmware update

How to install the intelmas tool

[~] esxcli software component apply -d /vmfs/volumes/SSD/_ISO/intel-mas-tool_2.2.18-1OEM.700.0.0.15843807_20956742.zip
Installation Result
   Components Installed: intel-mas-tool_2.2.18-1OEM.700.0.0.15843807
   Components Removed:
   Components Skipped:
   Message: Operation finished successfully.
   Reboot Required: false

General information about the disk

[~] /opt/intel/intelmas/intelmas show -intelssd 1

- 1 Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Bootloader : EB3B0416
Capacity : 260.83 GB (280,065,171,456 bytes)
DevicePath : nvmeMgmt-nvmhba5
DeviceStatus : Healthy
Firmware : E201HPS2
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 1
MaximumLBA : 547002287
ModelNumber : INTEL SSDPED1D280GAH
NamespaceId : 1
PercentOverProvisioned : 0.00
ProductFamily : Intel(R) Optane(TM) SSD 905P Series
SMARTEnabled : True
SectorDataSize : 512
SerialNumber : PHMB839000LW280IGN

S.M.A.R.T. information

[~] /opt/intel/intelmas/intelmas show -nvmelog SmartHealthInfo -intelssd 1

-  PHMB839000LW280IGN -

- NVMeLog SMART and Health Information -

Volatile memory backup device has failed : False
Temperature has exceeded a critical threshold : False
Temperature - Celsius : 30
Media is in a read-only mode : False
Power On Hours : 0x0100
Power Cycles : 0x03
Number of Error Info Log Entries : 0x0
Controller Busy Time : 0x0
Available Spare Space has fallen below the threshold : False
Percentage Used : 0
Critical Warnings : 0
Data Units Read : 0x02
Available Spare Threshold Percentage : 0
Data Units Written : 0x0
Unsafe Shutdowns : 0x0
Host Write Commands : 0x0
Device reliability has degraded : False
Available Spare Normalized percentage of the remaining spare capacity available : 100
Media Errors : 0x0
Host Read Commands : 0x017F

Show all the SMART properties for the Intel® SSD at index 1

[~] /opt/intel/intelmas/intelmas show  -intelssd 1 -smart

- SMART Attributes PHMB839000LW280IGN -

- B8 -

Action : Pass
Description : End-to-End Error Detection Count
ID : B8
Normalized : 100
Raw : 0

- C7 -

Action : Pass
Description : CRC Error Count
ID : C7
Normalized : 100
Raw : 0

- E2 -

Action : Pass
Description : Timed Workload - Media Wear
ID : E2
Normalized : 100
Raw : 0

- E3 -

Action : Pass
Description : Timed Workload - Host Read/Write Ratio
ID : E3
Normalized : 100
Raw : 0

- E4 -

Action : Pass
Description : Timed Workload Timer
ID : E4
Normalized : 100
Raw : 0

- EA -

Action : Pass
Description : Thermal Throttle Status
ID : EA
Normalized : 100
Raw : 0
ThrottleStatus : 0 %
ThrottlingEventCount : 0

- F0 -

Action : Pass
Description : Retry Buffer Overflow Count
ID : F0
Normalized : 100
Raw : 0

- F3 -

Action : Pass
Description : PLI Lock Loss Count
ID : F3
Normalized : 100
Raw : 0

- F5 -

Action : Pass
Description : Host Bytes Written
ID : F5
Normalized : 100
Raw : 0
Raw (Bytes) : 0

- F6 -

Action : Pass
Description : System Area Life Remaining
ID : F6
Normalized : 100
Raw : 0

Disk firmware update

[~] /opt/intel/intelmas/intelmas load -intelssd 1
WARNING! You have selected to update the drives firmware!
Proceed with the update? (Y|N): Y
Checking for firmware update...

- Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -

Status : The selected drive contains current firmware as of this tool release.

How to add Maxtang’s NX 6412 NUC to a vDS? Fix script /etc/rc.local.d/local.sh

How to fix the network after adding the host to a vDS: when you add the NX6412 to a vDS and reboot ESXi, the vDS comes up without its uplink. You can check it with:

# esxcfg-vswitch -l
DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vDS              2560        6           512               9000    vusb0
--cut
  DVPort ID                               In Use      Client
  468                                     0           
  469                                     0
  470                                     0
  471                                     0

Note the DVPort ID (468 in this example). vDS is the name of your vDS switch.

esxcfg-vswitch -P vusb0 -V 468 vDS

It is necessary to add it to /etc/rc.local.d/local.sh before exit 0. You may already have a similar script from the source Persisting USB NIC Bindings:

vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
    sleep 10
    count=$(( $count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done

esxcfg-vswitch -R
esxcfg-vswitch -P vusb0 -V 468 vDS

exit 0

What’s the story with Optane?

I am using an Intel Optane SSD 900P PCIe in my HomeLAB as a ZIL/L2ARC drive for TrueNAS, but in July of 2022 Intel announced their intention to wind down the Optane business. I will try to summarize information about Intel Optane from Simon Todd’s presentation.
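For context, this is roughly how an Optane device ends up attached to a ZFS pool as SLOG (ZIL) and L2ARC; a minimal sketch from the shell, assuming a pool named tank and the Optane exposed as nvd0 with two partitions (on TrueNAS you would normally do this through the UI):

zpool add tank log nvd0p1
zpool add tank cache nvd0p2

The first command adds a dedicated SLOG (ZIL) device, the second adds an L2ARC read cache.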

My HomeLAB benchmark: Optane 900P – TrueNAS ZIL/L2ARC with HDD

Optane helps a lot with IOPS for a RAID of ordinary HDDs. I reached 2.5 GB/s peak write performance.

[Chart] Writer Report – iozone -Raz -b lab.wks -g 1G – Optane 900P, TrueNAS ZIL/L2ARC with HDD (x-axis: file size in KB; z-axis: MB/s)

We can see great write performance of about 1.7 GB/s for a 40 GB file size set.

# perftests-nas ; cat iozone.txt
        Run began: Sun Dec 18 08:02:39 2022

        Record Size 128 kB
        File size set to 41943040 kB
        Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

              kB  reclen    write  rewrite    read    reread
        41943040     128  1734542  1364683  2413381  2371527

iozone test complete.
# dd if=/dev/zero of=foo bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 1.517452 secs (707595169 bytes/sec) 707 MB/s

# dd if=/dev/zero of=foo bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes transferred in 0.012079 secs (42386853 bytes/sec) 42 MB/s

Intel® Optane™ Business Update: What Does This Mean for Warranty and Support

As announced in Intel’s Q2 2022 earnings, after careful consideration, Intel plans to cease future development of our Optane products. We will continue development of Crow Pass on Sapphire Rapids as we engage with key customers to determine their plans to deploy this product. While we believe Optane is a superb technology, it has become impractical to deliver products at the necessary scale as a single-source supplier of Optane technology.

We are committed to supporting Optane customers and ecosystem partners through the transition of our existing memory and storage product lines through end-of-life. We continue to sell existing Optane products, and support and the 5-year warranty terms from date of sale remain unchanged.

Get to know Intel® Optane™ technology
Source Simon Todd – vExpert – Intel Webinar Slides

What makes Optane SSD’s different?

    NAND SSD

    NAND garbage collection requires background writes. NAND SSD block erase process results in slower writes and inconsistent performance.

    Intel® Optane™ technology

    Intel® Optane™ technology does not use garbage collection
    Rapid, in-place writes enable consistently fast response times

    Intel® Optane™ SSDs are different by design
    Source Simon Todd – vExpert – Intel Webinar Slides
    Consistent performance, even under heavy write loads
    Source Simon Todd – vExpert – Intel Webinar Slides
    | Model                            | Dies per channel | Channels | Raw Capacity | Spare Area |
    | -------------------------------- | ---------------: | -------: | -----------: | ---------: |
    | Intel Optane SSD 900p 280GB      |                3 |        7 |       336 GB |      56 GB |
    | Intel Optane SSD DC P4800X 375GB |                4 |        7 |       448 GB |      73 GB |
    | Intel Optane SSD 900p 480GB      |                5 |        7 |       560 GB |      80 GB |
    | Intel Optane SSD DC P4800X 750GB |                8 |        7 |       896 GB |     146 GB |
    The Optane SSD DC P4800X and the Optane SSD 900p both use the same 7-channel controller, which leads to some unusual drive capacities. The 900p comes with either 3 or 5 memory dies per channel, while the P4800X has 4 or 8. All models reserve about 1/6th of the raw capacity for internal use. Source

    Intel Optane SSD DC P4800X / 900P Hands-On Review

    Wow, Optane is fast …

    The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but either is far faster than the flash-based SSDs. Source

    Maxtang’s NX 6412 NUC – update ESXi 8.0a

    VMware ESXi 8.0a release was announced:

    How to prepare ESXi Custom ISO image 8.0a for NX6412 NUC?

    Download these files:

    Run the following script to prepare the Custom ISO image; you should use PowerCLI version 13.0. Problems with the PowerCLI upgrade can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0a-20842819-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0a-20842819-standard" -name "ESXi-8.0.0-20842819-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20842819-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20842819-USBNIC.zip

    Upgrade to ESXi 8.0

    You may see TPM_VERSION WARNING: Support for TPM version 1.2 is discontinued. Apply the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20842819-USBNIC.zip -p ESXi-8.0.0-20842819-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true

    vSphere 8 Lab with Cohesity and VMware vExpert gift – Maxtang’s NX 6412 NUC

    During VMware Explore 2022 Barcelona, I was given a gift as a vExpert; you can read about it in my previous article. The NX6412’s onboard NICs are not supported by ESXi, so we need a Custom ISO with the USB Network Native Driver for ESXi. Because of a problem exporting an ISO with the latest PowerCLI 13 release (Nov 25, 2022), I decided to install a Custom ISO of ESXi 7U2e and then upgrade to ESXi 8.0 with a depot zip.

    Thank you, Cohesity. Power consumption is only 10 watts …

    How to prepare ESXi Custom ISO image 7U2e for NX6412 NUC?

    Download these files:

    Run the following script to prepare the Custom ISO image; you can use PowerCLI 12.7 or 13.0. You could use create_custom_esxi_iso.ps1 as well.

    Add-EsxSoftwareDepot .\VMware-ESXi-7.0U2e-19290878-depot.zip
    Add-EsxSoftwareDepot .\ESXi702-VMKUSB-NIC-FLING-47140841-component-18150468.zip
    New-EsxImageProfile -CloneProfile "ESXi-7.0U2e-19290878-standard" -name "ESXi-7.0U2e-19290878-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-7.0U2e-19290878-USBNIC" -ExportToIso -filepath ESXi-7.0U2e-19290878-USBNIC.iso

    Create a bootable ESXi USB flash drive from ESXi-7.0U2e-19290878-USBNIC.iso. More info: How to create a bootable ESXi Installer USB Flash Drive.

    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode (this dialog appears only for a Custom ISO image).

    Install ESXi 7U2e and fix Persisting USB NIC Bindings

    Currently there is a limitation in ESXi where USB NIC bindings are picked up much later in the boot process; to ensure the settings are preserved across a reboot, the following needs to be added to /etc/rc.local.d/local.sh, adjusted to your configuration.

    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    count=0
    while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
    do
        sleep 10
        count=$(( $count + 1 ))
        vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
    done
    
    esxcfg-vswitch -R

    Prepare ESXi Custom zip depot 8.0 for NX6412 NUC

    Download these files:

    Run the following script to prepare the Custom zip depot; you can use PowerCLI 13.0. Problems with the PowerCLI upgrade can be fixed with the blog post PowerCLI 13 update and installation hurdles on Windows:

    Add-EsxSoftwareDepot .\VMware-ESXi-8.0-20513097-depot.zip
    Add-EsxSoftwareDepot .\ESXi800-VMKUSB-NIC-FLING-61054763-component-20826251.zip
    New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -name "ESXi-8.0.0-20513097-USBNIC" -Vendor "vdan.cz"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -SoftwarePackage "vmkusb-nic-fling"
    Export-ESXImageProfile -ImageProfile "ESXi-8.0.0-20513097-USBNIC" -ExportToBundle -filepath ESXi-8.0.0-20513097-USBNIC.zip

    Upgrade to ESXi 8.0

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC
    
    Hardware precheck of profile ESXi-8.0.0-20513097-USBNIC failed with warnings: <TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.>

    You can fix the TPM_VERSION WARNING (Support for TPM version 1.2 is discontinued) by applying the --no-hardware-warning option to ignore the warning and proceed with the transaction.

    esxcli software profile update -d  /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC --no-hardware-warning
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true
       VIBs Installed: VMW_bootbank_atlantic_1.0.3.0-10vmw.800.1.0.20513097, VMW_bootbank_bcm-mpi3_8.1.1.0.0.0-1vmw.800.1.0.20513097, VMW_bootbank_bfedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_bnxtnet_216.0.50.0-66vmw.800.1.0.20513097, VMW_bootbank_bnxtroce_216.0.58.0-27vmw.800.1.0.20513097, VMW_bootbank_brcmfcoe_12.0.1500.3-4vmw.800.1.0.20513097, VMW_bootbank_cndi-igc_1.2.9.0-1vmw.800.1.0.20513097, VMW_bootbank_dwi2c-esxio_0.1-2vmw.800.1.0.20513097, VMW_bootbank_dwi2c_0.1-2vmw.800.1.0.20513097, VMW_bootbank_elxiscsi_12.0.1200.0-10vmw.800.1.0.20513097, VMW_bootbank_elxnet_12.0.1250.0-8vmw.800.1.0.20513097, VMW_bootbank_i40en_1.11.2.5-1vmw.800.1.0.20513097, VMW_bootbank_iavmd_3.0.0.1010-5vmw.800.1.0.20513097, VMW_bootbank_icen_1.5.1.16-1vmw.800.1.0.20513097, VMW_bootbank_igbn_1.4.11.6-1vmw.800.1.0.20513097, VMW_bootbank_ionic-en-esxio_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_ionic-en_20.0.0-29vmw.800.1.0.20513097, VMW_bootbank_irdman_1.3.1.22-1vmw.800.1.0.20513097, VMW_bootbank_iser_1.1.0.2-1vmw.800.1.0.20513097, VMW_bootbank_ixgben_1.7.1.39-1vmw.800.1.0.20513097, VMW_bootbank_lpfc_14.0.635.3-14vmw.800.1.0.20513097, VMW_bootbank_lpnic_11.4.62.0-1vmw.800.1.0.20513097, VMW_bootbank_lsi-mr3_7.722.02.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt2_20.00.06.00-4vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt35_23.00.00.00-1vmw.800.1.0.20513097, VMW_bootbank_lsi-msgpt3_17.00.13.00-2vmw.800.1.0.20513097, VMW_bootbank_mlnx-bfbootctl-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mnet-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.800.1.0.20513097, VMW_bootbank_ne1000_0.9.0-2vmw.800.1.0.20513097, VMW_bootbank_nenic_1.0.35.0-3vmw.800.1.0.20513097, VMW_bootbank_nfnic_5.0.0.35-3vmw.800.1.0.20513097, VMW_bootbank_nhpsa_70.0051.0.100-4vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core-esxio_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-core_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-rdma-esxio_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlx5-rdma_4.23.0.36-8vmw.800.1.0.20513097, VMW_bootbank_nmlxbf-gige-esxio_2.1-1vmw.800.1.0.20513097, VMW_bootbank_ntg3_4.1.8.0-4vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie-esxio_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvme-pcie_1.2.4.1-1vmw.800.1.0.20513097, VMW_bootbank_nvmerdma_1.0.3.9-1vmw.800.1.0.20513097, VMW_bootbank_nvmetcp_1.0.1.2-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens-esxio_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-ens_2.0.0.23-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3-esxio_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_nvmxnet3_2.0.0.31-1vmw.800.1.0.20513097, VMW_bootbank_penedac-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pengpio-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pensandoatlas_1.46.0.E.24.1.256-2vmw.800.1.0.20293628, VMW_bootbank_penspi-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_pvscsi-esxio_0.1-5vmw.800.1.0.20513097, VMW_bootbank_pvscsi_0.1-5vmw.800.1.0.20513097, VMW_bootbank_qcnic_1.0.15.0-22vmw.800.1.0.20513097, VMW_bootbank_qedentv_3.40.5.70-4vmw.800.1.0.20513097, VMW_bootbank_qedrntv_3.40.5.70-1vmw.800.1.0.20513097, VMW_bootbank_qfle3_1.0.67.0-30vmw.800.1.0.20513097, VMW_bootbank_qfle3f_1.0.51.0-28vmw.800.1.0.20513097, VMW_bootbank_qfle3i_1.0.15.0-20vmw.800.1.0.20513097, VMW_bootbank_qflge_1.1.0.11-1vmw.800.1.0.20513097, VMW_bootbank_rd1173-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_rdmahl_1.0.0-1vmw.800.1.0.20513097, VMW_bootbank_rste_2.0.2.0088-7vmw.800.1.0.20513097, VMW_bootbank_sfvmk_2.4.0.2010-13vmw.800.1.0.20513097, 
VMW_bootbank_smartpqi_80.4253.0.5000-2vmw.800.1.0.20513097, VMW_bootbank_spidev-esxio_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmkata_0.1-1vmw.800.1.0.20513097, VMW_bootbank_vmksdhci-esxio_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmksdhci_1.0.2-2vmw.800.1.0.20513097, VMW_bootbank_vmkusb-esxio_0.1-14vmw.800.1.0.20513097, VMW_bootbank_vmkusb-nic-fling_1.11-1vmw.800.1.20.61054763, VMW_bootbank_vmkusb_0.1-14vmw.800.1.0.20513097, VMW_bootbank_vmw-ahci_2.0.14-1vmw.800.1.0.20513097, VMware_bootbank_bmcal-esxio_8.0.0-1.0.20513097, VMware_bootbank_bmcal_8.0.0-1.0.20513097, VMware_bootbank_clusterstore_8.0.0-1.0.20513097, VMware_bootbank_cpu-microcode_8.0.0-1.0.20513097, VMware_bootbank_crx_8.0.0-1.0.20513097, VMware_bootbank_drivervm-gpu_8.0.0-1.0.20513097, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-6vmw.800.1.0.20513097, VMware_bootbank_esx-base_8.0.0-1.0.20513097, VMware_bootbank_esx-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esx-ui_2.5.1-20374953, VMware_bootbank_esx-update_8.0.0-1.0.20513097, VMware_bootbank_esx-xserver_8.0.0-1.0.20513097, VMware_bootbank_esxio-base_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner-esxio_8.0.0-1.0.20513097, VMware_bootbank_esxio-combiner_8.0.0-1.0.20513097, VMware_bootbank_esxio-dvfilter-generic-fastpath_8.0.0-1.0.20513097, VMware_bootbank_esxio-update_8.0.0-1.0.20513097, VMware_bootbank_esxio_8.0.0-1.0.20513097, VMware_bootbank_gc-esxio_8.0.0-1.0.20513097, VMware_bootbank_gc_8.0.0-1.0.20513097, VMware_bootbank_loadesx_8.0.0-1.0.20513097, VMware_bootbank_loadesxio_8.0.0-1.0.20513097, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.800.1.0.20513097, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.7.2173-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-12vmw.800.1.0.20513097, VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-2vmw.800.1.0.20513097, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-8vmw.800.1.0.20513097, VMware_bootbank_native-misc-drivers-esxio_8.0.0-1.0.20513097, VMware_bootbank_native-misc-drivers_8.0.0-1.0.20513097, VMware_bootbank_qlnativefc_5.2.46.0-3vmw.800.1.0.20513097, VMware_bootbank_trx_8.0.0-1.0.20513097, VMware_bootbank_vdfs_8.0.0-1.0.20513097, VMware_bootbank_vmware-esx-esxcli-nvme-plugin-esxio_1.2.0.52-1vmw.800.1.0.20513097, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.52-1vmw.800.1.0.20513097, VMware_bootbank_vsan_8.0.0-1.0.20513097, VMware_bootbank_vsanhealth_8.0.0-1.0.20513097, VMware_locker_tools-light_12.0.6.20104755-20513097
       VIBs Removed: VMW_bootbank_atlantic_1.0.3.0-8vmw.702.0.0.17867351, VMW_bootbank_bnxtnet_216.0.50.0-34vmw.702.0.20.18426014, VMW_bootbank_bnxtroce_216.0.58.0-20vmw.702.0.20.18426014, VMW_bootbank_brcmfcoe_12.0.1500.1-2vmw.702.0.0.17867351, VMW_bootbank_brcmnvmefc_12.8.298.1-1vmw.702.0.0.17867351, VMW_bootbank_elxiscsi_12.0.1200.0-8vmw.702.0.0.17867351, VMW_bootbank_elxnet_12.0.1250.0-5vmw.702.0.0.17867351, VMW_bootbank_i40enu_1.8.1.137-1vmw.702.0.20.18426014, VMW_bootbank_iavmd_2.0.0.1152-1vmw.702.0.0.17867351, VMW_bootbank_icen_1.0.0.10-1vmw.702.0.0.17867351, VMW_bootbank_igbn_1.4.11.2-1vmw.702.0.0.17867351, VMW_bootbank_irdman_1.3.1.19-1vmw.702.0.0.17867351, VMW_bootbank_iser_1.1.0.1-1vmw.702.0.0.17867351, VMW_bootbank_ixgben_1.7.1.35-1vmw.702.0.0.17867351, VMW_bootbank_lpfc_12.8.298.3-2vmw.702.0.20.18426014, VMW_bootbank_lpnic_11.4.62.0-1vmw.702.0.0.17867351, VMW_bootbank_lsi-mr3_7.716.03.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt2_20.00.06.00-3vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt35_17.00.02.00-1vmw.702.0.0.17867351, VMW_bootbank_lsi-msgpt3_17.00.10.00-2vmw.702.0.0.17867351, VMW_bootbank_mtip32xx-native_3.9.8-1vmw.702.0.0.17867351, VMW_bootbank_ne1000_0.8.4-11vmw.702.0.0.17867351, VMW_bootbank_nenic_1.0.33.0-1vmw.702.0.0.17867351, VMW_bootbank_nfnic_4.0.0.63-1vmw.702.0.0.17867351, VMW_bootbank_nhpsa_70.0051.0.100-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-core_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-en_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx4-rdma_3.19.16.8-2vmw.702.0.0.17867351, VMW_bootbank_nmlx5-core_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_nmlx5-rdma_4.19.16.10-1vmw.702.0.0.17867351, VMW_bootbank_ntg3_4.1.5.0-0vmw.702.0.0.17867351, VMW_bootbank_nvme-pcie_1.2.3.11-1vmw.702.0.0.17867351, VMW_bootbank_nvmerdma_1.0.2.1-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3-ens_2.0.0.22-1vmw.702.0.0.17867351, VMW_bootbank_nvmxnet3_2.0.0.30-1vmw.702.0.0.17867351, VMW_bootbank_pvscsi_0.1-2vmw.702.0.0.17867351, VMW_bootbank_qcnic_1.0.15.0-11vmw.702.0.0.17867351, VMW_bootbank_qedentv_3.40.5.53-20vmw.702.0.20.18426014, VMW_bootbank_qedrntv_3.40.5.53-17vmw.702.0.20.18426014, VMW_bootbank_qfle3_1.0.67.0-14vmw.702.0.0.17867351, VMW_bootbank_qfle3f_1.0.51.0-19vmw.702.0.0.17867351, VMW_bootbank_qfle3i_1.0.15.0-12vmw.702.0.0.17867351, VMW_bootbank_qflge_1.1.0.11-1vmw.702.0.0.17867351, VMW_bootbank_rste_2.0.2.0088-7vmw.702.0.0.17867351, VMW_bootbank_sfvmk_2.4.0.2010-4vmw.702.0.0.17867351, VMW_bootbank_smartpqi_70.4000.0.100-6vmw.702.0.0.17867351, VMW_bootbank_vmkata_0.1-1vmw.702.0.0.17867351, VMW_bootbank_vmkfcoe_1.0.0.2-1vmw.702.0.0.17867351, VMW_bootbank_vmkusb-nic-fling_1.8-3vmw.702.0.20.47140841, VMW_bootbank_vmkusb_0.1-4vmw.702.0.20.18426014, VMW_bootbank_vmw-ahci_2.0.9-1vmw.702.0.0.17867351, VMware_bootbank_clusterstore_7.0.2-0.30.19290878, VMware_bootbank_cpu-microcode_7.0.2-0.30.19290878, VMware_bootbank_crx_7.0.2-0.30.19290878, VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-4vmw.702.0.0.17867351, VMware_bootbank_esx-base_7.0.2-0.30.19290878, VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.2-0.30.19290878, VMware_bootbank_esx-ui_1.34.8-17417756, VMware_bootbank_esx-update_7.0.2-0.30.19290878, VMware_bootbank_esx-xserver_7.0.2-0.30.19290878, VMware_bootbank_gc_7.0.2-0.30.19290878, VMware_bootbank_loadesx_7.0.2-0.30.19290878, VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.702.0.0.17867351, VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_2.0.0-2vmw.702.0.0.17867351, VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-5vmw.702.0.0.17867351, 
VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-hp-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-1vmw.702.0.0.17867351, VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-6vmw.702.0.0.17867351, VMware_bootbank_native-misc-drivers_7.0.2-0.30.19290878, VMware_bootbank_qlnativefc_4.1.14.0-5vmw.702.0.0.17867351, VMware_bootbank_vdfs_7.0.2-0.30.19290878, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.42-1vmw.702.0.0.17867351, VMware_bootbank_vsan_7.0.2-0.30.19290878, VMware_bootbank_vsanhealth_7.0.2-0.30.19290878, VMware_locker_tools-light_11.2.6.17901274-18295176
       VIBs Skipped:

    Reboot ESXi after the upgrade with the reboot command. Good luck!

    How to create a bootable ESXi Installer USB Flash Drive

    ESXi Image Download

    Create a bootable ESXi Installer USB Flash Drive with Windows

    • Press SELECT and open the ESXi ISO image
    • Select your flash drive
    • Check that Partition scheme is GPT and Target system is UEFI
    • Press START
    • For a Custom ISO image it is necessary to select Write in ISO -> ESP mode (this dialog appears only for a Custom ISO image).