Daniel Micanek virtual Blog – Like normal Dan, but virtual.
Category: HomeLab
The “HomeLAB” blog category is focused on providing practical knowledge and experiences in using VMware technologies in a home lab setting. This category is ideal for tech enthusiasts, IT professionals, and students who want to experiment with the latest VMware products and features in their own home environment.
With the release of VMware ESXi 8.0 Update 1, VMware quietly introduced nConnect support in the NFS client — a long-awaited enhancement for environments using NFSv3 datastores.
The nConnect feature allows administrators to establish multiple parallel TCP connections per NFS datastore, significantly improving throughput, resiliency, and traffic isolation.
This guide walks you through how to configure nConnect, supported versions, and key operational tips.
🆕 What’s New in ESXi 8.0U1
From ESXi 8.0U1 onwards, a new parameter -c (number of connections) is available in esxcli when mounting NFSv3 datastores.
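For example, a minimal sketch of mounting an NFSv3 datastore with four parallel connections (the server address, export path, and datastore name below are placeholders):
esxcli storage nfs add -H 192.168.1.50 -s /export/datastore01 -v NFS-DS01 -c 4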
⚠️ The total number of connections across all mounted NFSv3 datastores is capped at 256.
Runtime Scaling in ESXi 8.0U2
The nConnect feature got a useful upgrade in VMware ESXi 8.0 Update 2. You can now increase or decrease the number of TCP connections on an existing NFSv3 mount without unmounting the datastore:
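For example, a sketch of raising an existing mount to 8 connections. I am assuming the esxcli storage nfs param set sub-command here, so verify the exact syntax with esxcli storage nfs --help on your 8.0U2 build; the datastore name is a placeholder:
esxcli storage nfs param set -v NFS-DS01 -c 8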
This flexibility helps admins fine-tune NFS performance on the fly — for example, ramping up connections during heavy backup windows or scaling down during low I/O periods.
Note: In both 8.0U1 and 8.0U2, the default number of connections remains 1.
How to Check Active NFS Connections
To view the number of RPC clients currently used by mounted NFSv3 datastores, run:
vsish -e get /vmkModules/nfsclient/info
Sample output:
NFS Connections Info {
mountedVolumes:10
totalConnections 40
}
This command is particularly handy for performance troubleshooting and capacity planning.
🧪 Experimental: Host Profile Integration
Although official Host Profile support for nConnect is not yet released, VMware added an experimental field named “Number of TCP connections” in:
Host Profile → Storage Configuration → NFS
What it does: When you set a value here, ESXi will use that number of parallel TCP sessions when mounting the NFS datastore during profile remediation.
This is ideal for consistent deployment across multiple hosts in larger clusters.
Best Practices & Tips
Start with 4 connections and scale up as needed after performance testing (see the sketch after this list).
Keep the 256 connection limit in mind when mounting multiple datastores.
Use consistent configuration across hosts to avoid mount mismatches.
Monitor performance metrics via VMware vSphere or CLI.
If you’re using Host Profiles, experiment carefully with the TCP connection parameter.
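As a minimal sketch of these tips in practice (assuming the esxcli syntax shown earlier; the server address, export paths, and datastore names are placeholders), mounting three datastores with 4 connections each uses only 12 of the 256 available connections:
for i in 01 02 03; do
  esxcli storage nfs add -H 192.168.1.50 -s /export/datastore$i -v NFS-DS$i -c 4
done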
Summary
The nConnect feature in ESXi’s NFS client marks a major step forward in improving NFSv3 performance and scalability in enterprise environments.
Whether you’re running backup workloads, large VMs, or latency-sensitive applications, multiple TCP connections per NFS datastore can make a measurable difference.
If you’re upgrading to VMware ESXi 8.0U1 or later, nConnect is worth testing and adopting in your storage strategy.
In the complex world of virtualization, developers often face the challenge of debugging guest operating systems and applications. A practical solution lies in converting virtual machine snapshots to memory dumps. This blog post delves into how you can efficiently use the vmss2core tool to transform a VM checkpoint, be it a snapshot or suspend file, into a core dump file, compatible with standard debuggers.
Step-by-Step Guide
Step 1: Create and download the virtual machine snapshot files (.vmsn and .vmem)
Select the Problematic Virtual Machine
In your VMware environment, identify and select the virtual machine experiencing issues.
Replicate the Issue
Attempt to replicate the problem within the virtual machine to ensure the snapshot captures the relevant state.
Take a Snapshot
Right-click on the virtual machine.
Navigate to Snapshots → Take snapshot
Enter a name for the snapshot.
Ensure “Snapshot the Virtual Machine’s memory” is checked
Click ‘CREATE’ to proceed.
Access VM Settings
Right-click on the virtual machine again.
Select ‘Edit Settings’
Navigate to Datastores
Choose the virtual machine and click on ‘Datastores’.
Click on the datastore name
Download the Snapshot
Locate the .vmsn and .vmem files (VMware snapshot files).
Select the file, click ‘Download’, and save it locally.
Step 2: Locate Your vmss2core Installation
For Windows (32-bit): Navigate to C:\Program Files\VMware\VMware Workstation\
For Windows (64-bit): Go to C:\Program Files (x86)\VMware\VMware Workstation\
For Linux: Access /usr/bin/
For Mac OS: Find it in /Library/Application Support/VMware Fusion/
Note: If vmss2core isn’t in these directories, download it from New Flings Link (use at your own risk).
Step 3: Convert the checkpoint files with vmss2core
For general use: vmss2core.exe -W [VM_name].vmsn [VM_name].vmem
For Windows 8/8.1, Server 2012, 2016, 2019: vmss2core.exe -W8 [VM_name].vmsn [VM_name].vmem
For Linux: ./vmss2core-Linux64 -N [VM_name].vmsn [VM_name].vmem
Note: Replace [VM_name] with your virtual machine's name. The flag -W, -W8, or -N corresponds to the guest OS.
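As a concrete illustration (the VM name is hypothetical), converting the snapshot of a Windows Server 2019 VM named Win2019-DB would look like this and produce a memory.dmp file that WinDbg can open:
vmss2core.exe -W8 Win2019-DB.vmsn Win2019-DB.vmem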
# vmss2core.exe
vmss2core version 20800274 Copyright (C) 1998-2022 VMware, Inc. All rights reserved.
A tool to convert VMware checkpoint state files into formats
that third party debugger tools understand. It can handle both
suspend (.vmss) and snapshot (.vmsn) checkpoint state files
(hereafter referred to as a 'vmss file') as well as both
monolithic and non-monolithic (separate .vmem file) encapsulation
of checkpoint state data.
Usage:
GENERAL:  vmss2core [[options] | [-l linuxoffsets options]] \
              <vmss file> [<vmem file>]
The "-l" option specifies offsets (a stringset) within the Linux kernel
data structures, which is used by -P and -N modes. It is ignored with
other modes. Please use "getlinuxoffsets" to automatically generate the
correct stringset value for your kernel, see README.txt for additional
information.
Without options one vmss.core<N> per vCPU with linear view of memory is
generated. Other types of core files and output can be produced with
these options:
 -q       Quiet(er) operation.
 -M       Create core file with physical memory view (vmss.core).
 -l str   Offset stringset expressed as 0xHEXNUM,0xHEXNUM,... .
 -N       Red Hat crash core file for arbitrary Linux version described
          by the "-l" option (vmss.core).
 -N4      Red Hat crash core file for Linux 2.4 (vmss.core).
 -N6      Red Hat crash core file for Linux 2.6 (vmss.core).
 -O <x>   Use <x> as the offset of the entrypoint.
 -U <i>   Create linear core file for vCPU <i> only.
 -P       Print list of processes in Linux VM.
 -P<pid>  Create core file for Linux process <pid> (core.<pid>).
 -S       Create core for 64-bit Solaris (vmcore.0, unix.0).
          Optionally specify the version: -S112 -S64SYM112 for 11.2.
 -S32     Create core for 32-bit Solaris (vmcore.0, unix.0).
 -S64SYM  Create text symbols for 64-bit Solaris (solaris.text).
 -S32SYM  Create text symbols for 32-bit Solaris (solaris.text).
 -W       Create WinDbg file (memory.dmp) with commonly used build
          numbers ("2195" for Win32, "6000" for Win64).
 -W<num>  Create WinDbg file (memory.dmp), with <num> as the build
          number (for example: "-W2600").
 -WK      Create a Windows kernel memory only dump file (memory.dmp).
 -WDDB<num> or -W8DDB<num>
          Create WinDbg file (memory.dmp), with <num> as the debugger
          data block address in hex (for example: "-W12ac34de").
 -WSCAN   Scan all of memory for Windows debugger data blocks
          (instead of just low 256 MB).
 -W8      Generate a memory dump file from a suspended Windows 8 VM.
 -X32 <mach_kernel>  Create core for 32-bit Mac OS.
 -X64 <mach_kernel>  Create core for 64-bit Mac OS.
 -F       Create core for an EFI firmware exception.
 -F<adr>  Create core for an EFI firmware exception with system
          context at the given guest virtual address.
The Raspberry Pi 5, a remarkable addition to the Raspberry Pi series, boasts an advanced configuration with five active PCI Express lanes. These lanes are ingeniously distributed with four dedicated to the innovative RP1 chip, which supports a variety of I/O functionalities such as USB, Ethernet, MIPI Camera and Display, and GPIO. Additionally, there’s a fifth lane that interfaces with a novel external PCIe connector.
In its default setup, the Raspberry Pi 5 operates all PCIe lanes at Gen 2.0 speeds, offering a throughput of approximately 5 GT/sec per lane. This standard setting is fixed for the internal lanes connected to the RP1 chip. However, for users seeking enhanced performance, there’s an exciting tweak available for the external PCIe connector. By simply adding a couple of lines to the /boot/config.txt file and rebooting your device, you can elevate the external connector to Gen 3.0 speeds. This upgrade boosts the data transfer rate to 8 GT/sec, nearly doubling the default speed.
To achieve this, insert the following commands in your /boot/config.txt file:
dtparam=pciex1
dtparam=pciex1_gen=3
After these adjustments and a system reboot, your Raspberry Pi 5 will operate the external PCIe lane at the faster Gen 3.0 speed, unlocking new potential for your projects and applications.
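To confirm the link actually renegotiated at Gen 3.0, you can inspect the PCIe link status (a hedged check; it assumes a device is attached to the external connector and that the pciutils package providing lspci is installed):
sudo lspci -vv | grep -i 'LnkSta:'
A 'Speed 8GT/s' value in the output indicates the external lane is running at Gen 3.0.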
For Private AI in HomeLAB, I was searching for budget-friendly GPUs with a minimum of 24GB RAM. Recently, I came across the refurbished NVIDIA Tesla P40 on eBay, which boasts some intriguing specifications:
GPU Chip: GP102
Cores: 3840
TMUs: 240
ROPs: 96
Memory Size: 24 GB
Memory Type: GDDR5
Bus Width: 384 bit
Since the NVIDIA Tesla P40 comes in a full-profile form factor, we needed to acquire a PCIe riser card.
A PCIe riser card, commonly known as a “riser card,” is a hardware component essential in computer systems for facilitating the connection of expansion cards to the motherboard. Its primary role comes into play when space limitations or specific orientation requirements prevent the direct installation of expansion cards into the PCIe slots on the motherboard.
Furthermore, I needed to ensure adequate cooling, but this posed no issue. I utilized a 3D model created by MiHu_Works for a Tesla P100 blower fan adapter, which you can find at this link: Tesla P100 Blower Fan Adapter.
As for the fan, the Titan TFD-B7530M12C served the purpose effectively. You can find it on Amazon: Titan TFD-B7530M12C.
Currently, I am using a single VM with PCIe pass-through. However, it was necessary to implement specific Advanced VM parameters:
pciPassthru.use64bitMMIO = true
pciPassthru.64bitMMIOSizeGB = 64
Now, you might wonder about the performance. It's outstanding, delivering speeds up to 16x-26x faster than the CPU. To give you an idea of the performance, I conducted a llama-bench test.
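For reference, a minimal sketch of how such a run can be launched with llama.cpp's bundled llama-bench tool (the model file and layer count below are placeholders, not the exact values from my test):
./llama-bench -m ./models/llama-2-13b.Q4_K_M.gguf -ngl 99
The -ngl option offloads model layers to the Tesla P40; omitting it gives the CPU-only baseline that the 16x-26x comparison refers to.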
For my project involving the AI tool llama.cpp, I needed to free up a PCI slot for an NVIDIA Tesla P40 GPU. I found an excellent guide and a useful video from ArtOfServer.
Based on this helpful video from ArtOfServer:
ArtOfServer wrote a small tutorial on how to modify an H200A (external) into an H200I (internal) so it can be used in the dedicated storage slot (e.g., instead of a Perc6i).
Install compiler and build tools (those can be removed later)
# apt install build-essential unzip
Compile and install lsirec and lsiutil
# mkdir lsi
# cd lsi
# wget https://github.com/marcan/lsirec/archive/master.zip
# wget https://github.com/exactassembly/meta-xa-stm/raw/master/recipes-support/lsiutil/files/lsiutil-1.72.tar.gz
# tar -zxvvf lsiutil-1.72.tar.gz
# unzip master.zip
# cd lsirec-master
# make
# chmod +x sbrtool.py
# cp -p lsirec /usr/bin/
# cp -p sbrtool.py /usr/bin/
# cd ../lsiutil
# make -f Makefile_Linux
Unbind the kernel driver from the adapter (here at PCI address 0000:05:00.0) and halt the controller:
# lsirec 0000:05:00.0 unbind
Trying unlock in MPT mode...
Device in MPT mode
Kernel driver unbound from device
# lsirec 0000:05:00.0 halt
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET
Read sbr:
# lsirec 0000:05:00.0 readsbr h200.sbr
Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Reading SBR...
SBR saved to h200.sbr
Transform binary sbr to text file:
# sbrtool.py parse h200.sbr h200.cfg
Modify the SubsysPID in line 9 (e.g., using vi or vim):
from this:
SubsysPID = 0x1f1c
to this:
SubsysPID = 0x1f1e
Important: if in the cfg file you find a line with:
SASAddr = 0xfffffffffffff
remove it!
Save and close file.
Build new sbr file:
# sbrtool.py build h200.cfg h200-int.sbr
Write it back to card:
# lsirec 0000:05:00.0 writesbr h200-int.sbr
Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Writing SBR...
SBR written from h200-int.sbr
Reset the card and rescan the bus:
# lsirec 0000:05:00.0 reset
Device in MPT mode
Resetting adapter...
IOC is RESET
IOC is READY
# lsirec 0000:05:00.0 info
Trying unlock in MPT mode...
Device in MPT mode
Registers:
DOORBELL: 0x10000000
DIAG: 0x000000b0
DCR_I2C_SELECT: 0x80030a0c
DCR_SBR_SELECT: 0x2100001b
CHIP_I2C_PINS: 0x00000003
IOC is READY
# lsirec 0000:05:00.0 rescan
Device in MPT mode
Removing PCI device...
Rescanning PCI bus...
PCI bus rescan complete.
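As an optional sanity check (a hedged extra step, not part of ArtOfServer's tutorial), lspci can confirm that the card now reports the internal subsystem ID:
# lspci -vnn -s 05:00.0 | grep -i subsystem
The subsystem device ID should now read 1f1e instead of the original 1f1c.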
[~] /opt/intel/intelmas/intelmas show -intelssd 1
- 1 Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -
Bootloader : EB3B0416
Capacity : 260.83 GB (280,065,171,456 bytes)
DevicePath : nvmeMgmt-nvmhba5
DeviceStatus : Healthy
Firmware : E201HPS2
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 1
MaximumLBA : 547002287
ModelNumber : INTEL SSDPED1D280GAH
NamespaceId : 1
PercentOverProvisioned : 0.00
ProductFamily : Intel(R) Optane(TM) SSD 905P Series
SMARTEnabled : True
SectorDataSize : 512
SerialNumber : PHMB839000LW280IGN
S.M.A.R.T information
[~] /opt/intel/intelmas/intelmas show -nvmelog SmartHealthInfo -intelssd 1
- PHMB839000LW280IGN -
- NVMeLog SMART and Health Information -
Volatile memory backup device has failed : False
Temperature has exceeded a critical threshold : False
Temperature - Celsius : 30
Media is in a read-only mode : False
Power On Hours : 0x0100
Power Cycles : 0x03
Number of Error Info Log Entries : 0x0
Controller Busy Time : 0x0
Available Spare Space has fallen below the threshold : False
Percentage Used : 0
Critical Warnings : 0
Data Units Read : 0x02
Available Spare Threshold Percentage : 0
Data Units Written : 0x0
Unsafe Shutdowns : 0x0
Host Write Commands : 0x0
Device reliability has degraded : False
Available Spare Normalized percentage of the remaining spare capacity available : 100
Media Errors : 0x0
Host Read Commands : 0x017F
Show all the SMART properties for the Intel® SSD at index 1
[~] /opt/intel/intelmas/intelmas show -intelssd 1 -smart
- SMART Attributes PHMB839000LW280IGN -
- B8 -
Action : Pass
Description : End-to-End Error Detection Count
ID : B8
Normalized : 100
Raw : 0
- C7 -
Action : Pass
Description : CRC Error Count
ID : C7
Normalized : 100
Raw : 0
- E2 -
Action : Pass
Description : Timed Workload - Media Wear
ID : E2
Normalized : 100
Raw : 0
- E3 -
Action : Pass
Description : Timed Workload - Host Read/Write Ratio
ID : E3
Normalized : 100
Raw : 0
- E4 -
Action : Pass
Description : Timed Workload Timer
ID : E4
Normalized : 100
Raw : 0
- EA -
Action : Pass
Description : Thermal Throttle Status
ID : EA
Normalized : 100
Raw : 0
ThrottleStatus : 0 %
ThrottlingEventCount : 0
- F0 -
Action : Pass
Description : Retry Buffer Overflow Count
ID : F0
Normalized : 100
Raw : 0
- F3 -
Action : Pass
Description : PLI Lock Loss Count
ID : F3
Normalized : 100
Raw : 0
- F5 -
Action : Pass
Description : Host Bytes Written
ID : F5
Normalized : 100
Raw : 0
Raw (Bytes) : 0
- F6 -
Action : Pass
Description : System Area Life Remaining
ID : F6
Normalized : 100
Raw : 0
Disk firmware update
[~] /opt/intel/intelmas/intelmas load -intelssd 1
WARNING! You have selected to update the drives firmware!
Proceed with the update? (Y|N): Y
Checking for firmware update...
- Intel(R) Optane(TM) SSD 905P Series PHMB839000LW280IGN -
Status : The selected drive contains current firmware as of this tool release.
How to fix the network after adding the host to a vDS: when you add the NX6412 to a vDS and reboot ESXi, the vDS loses its uplink. You can check it with:
# esxcfg-vswitch -l
DVS Name Num Ports Used Ports Configured Ports MTU Uplinks
vDS 2560 6 512 9000 vusb0
--cut
DVPort ID In Use Client
468 0
469 0
470 0
471 0
We will have to note a free DVPort ID (468 in this example); 'vDS' is the name of your vDS switch.
esxcfg-vswitch -P vusb0 -V 468 vDS
It is necessary to add the command to /etc/rc.local.d/local.sh before the exit 0 line. You can use a similar script from the source Persisting USB NIC Bindings:
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
sleep 10
count=$(( $count + 1 ))
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done
esxcfg-vswitch -R
esxcfg-vswitch -P vusb0 -V 468 vDS
exit 0
I am using an Intel Optane SSD 900P PCIe drive in my HomeLAB as a ZIL/L2ARC device for TrueNAS, but in July 2022 Intel announced their intention to wind down the Optane business. I will try to summarize information about Intel Optane from Simon Todd's presentation.
My HomeLAB benchmark: Optane 900P – TrueNAS ZIL/L2ARC with HDD
Optane helps a lot with IOPS for a RAID array of ordinary HDDs; I reached a peak write performance of 2.5 GB/s. We can see great write performance of about 1.7 GB/s for the 40 GB file size set.
# perftests-nas ; cat iozone.txt
Run began: Sun Dec 18 08:02:39 2022
Record Size 128 kB
File size set to 41943040 kB
Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
kB reclen write rewrite read reread
41943040 128 1734542 1364683 2413381 2371527
iozone test complete.
# dd if=/dev/zero of=foo bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 1.517452 secs (707595169 bytes/sec) 707 MB/s
# dd if=/dev/zero of=foo bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes transferred in 0.012079 secs (42386853 bytes/sec) 42 MB/s
As announced in Intel’s Q2 2022 earnings, after careful consideration, Intel plans to cease future development of our Optane products. We will continue development of Crow Pass on Sapphire Rapids as we engage with key customers to determine their plans to deploy this product. While we believe Optane is a superb technology, it has become impractical to deliver products at the necessary scale as a single-source supplier of Optane technology.
We are committed to supporting Optane customers and ecosystem partners through the transition of our existing memory and storage product lines through end-of-life. We continue to sell existing Optane products, and support and the 5-year warranty terms from date of sale remain unchanged.
Get to know Intel® Optane™ technology (Source: Simon Todd – vExpert – Intel Webinar Slides)
What makes Optane SSD’s different?
NAND SSD
NAND garbage collection requires background writes. NAND SSD block erase process results in slower writes and inconsistent performance.
Intel® Optane™ technology
Intel® Optane™ technology does not use garbage collection. Rapid, in-place writes enable consistently fast response times.
Intel® Optane™ SSDs are different by design (Source: Simon Todd – vExpert – Intel Webinar Slides)
Consistent performance, even under heavy write loads (Source: Simon Todd – vExpert – Intel Webinar Slides)
Model                              Dies per channel   Channels   Raw Capacity   Spare Area
Intel Optane SSD 900p 280GB        3                  7          336 GB         56 GB
Intel Optane SSD DC P4800X 375GB   4                  7          448 GB         73 GB
Intel Optane SSD 900p 480GB        5                  7          560 GB         80 GB
Intel Optane SSD DC P4800X 750GB   8                  7          896 GB         146 GB
The Optane SSD DC P4800X and the Optane SSD 900p both use the same 7-channel controller, which leads to some unusual drive capacities. The 900p comes with either 3 or 5 memory dies per channel while the P4800X has 4 or 8. All models reserve about 1/6th of the raw capacity for internal use Source
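As a quick check against the table above: for the 280 GB 900p, 336 GB raw - 56 GB spare = 280 GB of user capacity, and 56 / 336 ≈ 1/6; the same ratio holds for the 750 GB P4800X (146 / 896 ≈ 1/6).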
The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but either is far faster than the flash-based SSDs. Source
TPM_VERSION WARNING: Support for TPM version 1.2 is discontinued. Apply the --no-hardware-warning option to ignore the warning and proceed with the transaction.
esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.1-20842819-USBNIC.zip -p ESXi-8.0.1-20842819-USBNIC --no-hardware-warning
Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
During VMware Explore 2022 Barcelona, I was given a gift as a vExpert; you can read about it in my previous article. The NX6412's onboard NICs are not supported by ESXi, so we need a custom ISO with the USB Network Native Driver for ESXi. Because of a problem exporting the ISO with the latest PowerCLI 13 release (Nov 25, 2022), I decided to install a custom ESXi 7.0 U2e ISO and then upgrade to ESXi 8.0 with a depot ZIP.
Thank you, Cohesity. Power consumption is only 10 Watts …
How to prepare ESXi Custom ISO image 7U2e for NX6412 NUC?
Currently there is a limitation in ESXi where USB NIC bindings are picked up much later in the boot process and to ensure settings are preserved upon a reboot, the following needs to be added to /etc/rc.local.d/local.sh based on your configurations.
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
sleep 10
count=$(( $count + 1 ))
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done
esxcfg-vswitch -R
Upgrade to ESXi 8.0 with the depot ZIP:
esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC
Hardware precheck of profile ESXi-8.0.0-20513097-USBNIC failed with warnings: <TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.>
You can fix the TPM_VERSION warning (Support for TPM version 1.2 is discontinued) by applying the --no-hardware-warning option to ignore the warning and proceed with the transaction:
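For reference, a sketch of the same profile update with the flag appended (the depot path and profile name are the ones used above; substitute your own):
esxcli software profile update -d /vmfs/volumes/datastore1/_ISO/ESXi-8.0.0-20513097-USBNIC.zip -p ESXi-8.0.0-20513097-USBNIC --no-hardware-warning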