New ESXCLI Commands in vSphere 8.0

In ESXi 8 / vSphere 8.0 the esxcli command-line interface has been extended with new features.

Here is a list of the new and extended namespaces:

NEW ESXi 8.0 ESXCLI Command Reference

The ESXCLI command set allows you to run common system administration commands against vSphere systems from an administration server of your choice. The actual list of commands depends on the system that you are running on. Run esxcli --help for a list of commands on your system.
Namespace | Command | Description
daemon entitlement | add | Add Partner REST entitlements to the partner user.
daemon entitlement | list | List the installed DSDK built daemons.
daemon entitlement | remove | Remove Partner REST entitlements from the partner user.
hardware device component | list | List all device components on this host.
network ip hosts | add | Add association of IP addresses with host names.
network ip hosts | list | List the user specified associations of IP addresses with host names.
network ip hosts | remove | Remove association of IP addresses with host names.
nvme device config | list | List the configurable parameters for this plugin
nvme device config | set | Set the plugin's parameter
nvme device log | get | Get NVMe log page
nvme device log persistentevent | get | Get NVMe persistent event log
nvme device log telemetry controller | get | Get NVMe telemetry controller-initiated data
nvme device log telemetry host | get | Get NVMe telemetry host-initiated data
storage core nvme device | list | List the NVMe devices currently registered with the PSA.
storage core nvme path | list | List all the NVMe paths on the system.
storage core scsi device | list | List the SCSI devices currently registered with the PSA.
storage core scsi path | list | List all the SCSI paths on the system.
storage osdata | create | Create an OSData partition on a disk.
storage vvol stats | add | Add entity for stats tracking
storage vvol stats | disable | Disable stats for complete namespace
storage vvol stats | enable | Enable stats for complete namespace
storage vvol stats | get | Get stats for given stats namespace
storage vvol stats | list | List all supported stats
storage vvol stats | remove | Remove tracked entity
storage vvol stats | reset | Reset stats for given namespace
storage vvol vmstats | get | Get the VVol information and statistics for a specific virtual machine.
system health report | get | Displays one or more health reports
system health report | list | List all the health reports currently generated.
system ntp stats | get | Report operational state of Network Time Protocol Daemon
system security keypersistence | disable | Disable key persistence daemon.
system security keypersistence | enable | Enable key persistence daemon.
system settings encryption | get | Get the encryption mode and policy.
system settings encryption recovery | list | List recovery keys.
system settings encryption recovery | rotate | Rotate the recovery key.
system settings encryption | set | Set the encryption mode and policy.
system settings gueststore repository | get | Get GuestStore repository.
system settings gueststore repository | set | Set or clear GuestStore repository.
system syslog config logfilter | add | Add a log filter.
system syslog config logfilter | get | Show the current log filter configuration values.
system syslog config logfilter | list | Show the added log filters.
system syslog config logfilter | remove | Remove a log filter.
system syslog config logfilter | set | Set log filtering configuration options.
vsan hardware vcg | add | Map unidentified vSAN hardware device with VCG ID.
vsan hardware vcg | get | Get the vSAN VCG ID for a vSAN hardware device. Output is VCG ID while "N/A" means device ID is not mapped.
vsan storagepool | add | Add physical disk for vSAN usage.
vsan storagepool | list | List vSAN storage pool configuration.
vsan storagepool | mount | Mount vSAN disk from storage pool.
vsan storagepool | rebuild | Rebuild vSAN storage pool disks.
vsan storagepool | remove | Remove physical disk from storage pool usage. Exactly one of --disk or --uuid param is required.
vsan storagepool | unmount | Unmount vSAN disk from storage pool.
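
A few of the new namespaces can be tried straight away from an SSH session without any parameters, for example:

esxcli system ntp stats get
esxcli system health report list
esxcli system settings encryption get
esxcli storage vvol stats list

For commands that take parameters (for example esxcli network ip hosts add), check the built-in help with --help for the exact option names.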

VMware Cohesity vExpert Gift VMware EXPLORE 2022 Barcelona

During VMware Explore 2022 Barcelona, I’ve been given a gift as a vExpert.

We could start a popcorn party with the NX6412 …

A huge shout out to the vExpert program and to Cohesity for supporting us with such an amazing gift – a small but powerful quad-core, NUC-style mini PC. It's fanless, so it will be quiet too. Thank you!

NX6412 Specification:

  • CPU: Intel Elkhart Lake J6412 Processor
  • Memory: Dual Channel SO-DIMM DDR4 up to 32GB – 64GB might work as well – I will have to confirm it later …
  • Display: Intel Integrated Graphics via 2x HDMI 2.0
  • I/O Ports: 2xLAN, 2xUSB3.2, 2xUSB2.0, Type-C, SIM
  • Ethernet: 10/100/1000Mbps
  • Storage: 1x M.2 2242/2280 SSD, SATA optional
  • Power: 12V DC-in
Hardware: MaxTang NX6412, 32 GB memory, 512 GB SSD, quad-core CPU, dual Gigabit Ethernet, dual HDMI 2.0

With its small form factor – a compact 127 mm x 127 mm x 37 mm design – it is great for saving space.

Intel Elkhart Lake J6412 Processor

Powered by the Intel Elkhart Lake Celeron J6412 processor, the NX6412 provides excellent performance with a long life expectancy. The processor has 4 cores / 4 threads, 1.5 MB L2 cache and boosts up to 2.60 GHz at a 10 W TDP. It delivers a 1.7x improvement in single-thread performance and a 1.5x improvement in multi-thread performance generation over generation, plus a 2x improvement in graphics performance over the previous generation.

CODE2769US Intel NUC Home Lab with Smart Sensors & Tanzu

Links & information

How to Boot ESXi 7.0 on UCS-M2-HWRAID Boot-Optimized M.2 RAID Controller

VMware strongly advises that you move away completely from using SD card/USB as a boot device option on any future server hardware.

SD cards can continue to be used for the bootbank partition provided that a separate persistent local device to store the OSDATA partition (32GB min., 128GB recommended) is available in the host.
Preferably, the SD cards should be replaced with an M.2 or another local persistent device as the standalone boot option.
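
On a host that is already installed, you can quickly check where the system storage ended up; a minimal check (the ESX-OSData volume usually shows up with a VMFS-L filesystem type, and /bootbank is a symlink to the active boot volume):

esxcli storage filesystem list
ls -l /bootbank /altbootbank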

vSphere 7 – ESXi System Storage Changes

Please refer to the following blog:
https://core.vmware.com/resource/esxi-system-storage-changes

How to set up ESXi boot on UCS-M2-HWRAID?

Create Disk Group Policies – Storage / Storage Policies / root / Disk Group Policies / M.2-RAID1

Create Storage Profile – Storage / Storage Profiles / root / Storage Profile M.2-RAID1

Create Local LUNs – Storage / Storage Profiles / root / Storage Profile M.2-RAID1

Modify Storage Profile inside Service Profile

Change Boot Order to Local Disk

Links

Fastest workaround instructions to address CVE-2021-44228 (log4j) in vCenter Server

https://logging.apache.org/log4j/2.x/

The Apache Log4j open-source component has a security bug (CVE-2021-44228 – VMSA-2021-0028). It is necessary to fix vCenter Server 7.0.x, vCenter 6.7.x and vCenter 6.5.x.

The fastest and recommended workaround is the KB 87081 script (vc_log4j_mitigator.py).

Connect via SSH and create the script with vim:
Connected to service

    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Launch BASH: "shell"

Command> shell
Shell access is granted to root
root@localhost [ ~ ]# cd /tmp
root@localhost [ /tmp ]# vim vc_log4j_mitigator.py
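
Alternatively, instead of pasting the content into vim, the script can be copied to the appliance with scp – the host name below is just an example, and the root login shell may need to be switched to bash first for scp to work:

scp vc_log4j_mitigator.py root@vcenter.lab.local:/tmp/
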
Run the script: python vc_log4j_mitigator.py
root@localhost [ /tmp ]# python vc_log4j_mitigator.py
2021-12-21T10:38:20 INFO main: Script version: 1.6.0
2021-12-21T10:38:20 INFO main: vCenter type: Version: 7.0.2.00500; Build: 18455184; Deployment type: embedded; Gateway: False; VCHA: False; Windows: False;
A service stop and start is required to complete this operation.  Continue?[y]y
2021-12-21T10:38:23 INFO stop: stopping services
2021-12-21T10:38:46 INFO process_jar: Found a VULNERABLE FILE: /opt/vmware/lib64/log4j-core-2.13.0.jar
2021-12-21T10:38:46 INFO backup_file: VULNERABLE FILE: /opt/vmware/lib64/log4j-core-2.13.0.jar backed up to /tmp/tmpxi89fco8/opt/vmware/lib64/log4j-core-2.13.0.jar.bak
2021-12-21T10:38:47 INFO process_jar: VULNERABLE FILE: /opt/vmware/lib64/log4j-core-2.13.0.jar backed up to /tmp/tmpxi89fco8/opt/vmware/lib64/log4j-core-2.13.0.jar.bak
2021-12-21T10:39:03 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.13.1.jar
2021-12-21T10:39:03 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:04 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:04 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.8.2.jar
2021-12-21T10:39:04 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.8.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.8.2.jar.bak
2021-12-21T10:39:04 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.8.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.8.2.jar.bak
2021-12-21T10:39:06 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.0.jar
2021-12-21T10:39:06 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.0.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.11.0.jar.bak
2021-12-21T10:39:06 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.0.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.11.0.jar.bak
2021-12-21T10:39:07 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.2.jar
2021-12-21T10:39:07 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.11.2.jar.bak
2021-12-21T10:39:07 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware/common-jars/log4j-core-2.11.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/common-jars/log4j-core-2.11.2.jar.bak
2021-12-21T10:39:08 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar
2021-12-21T10:39:08 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:08 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:14 INFO process_jar: Found a VULNERABLE FILE: /tmp/tmpn2a_0ql2/WEB-INF/lib/log4j-core-2.13.3.jar
2021-12-21T10:39:14 INFO backup_file: VULNERABLE FILE: /tmp/tmpn2a_0ql2/WEB-INF/lib/log4j-core-2.13.3.jar backed up to /tmp/tmpxi89fco8/tmp/tmpn2a_0ql2/WEB-INF/lib/log4j-core-2.13.3.jar.bak
2021-12-21T10:39:15 INFO process_war: Found a VULNERABLE WAR file with: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war
2021-12-21T10:39:15 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war.bak
2021-12-21T10:39:15 INFO process_war: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war.bak
2021-12-21T10:39:15 INFO process_jar: Found a VULNERABLE FILE: /tmp/tmpxn5_4ah_/WEB-INF/lib/log4j-core-2.13.3.jar
2021-12-21T10:39:15 INFO backup_file: VULNERABLE FILE: /tmp/tmpxn5_4ah_/WEB-INF/lib/log4j-core-2.13.3.jar backed up to /tmp/tmpxi89fco8/tmp/tmpxn5_4ah_/WEB-INF/lib/log4j-core-2.13.3.jar.bak
2021-12-21T10:39:16 INFO process_war: Found a VULNERABLE WAR file with: /usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war
2021-12-21T10:39:16 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war.bak
2021-12-21T10:39:16 INFO process_war: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war.bak
2021-12-21T10:39:16 INFO process_jar: Found a VULNERABLE FILE: /tmp/tmpa4w275ot/WEB-INF/lib/log4j-core-2.13.3.jar
2021-12-21T10:39:16 INFO backup_file: VULNERABLE FILE: /tmp/tmpa4w275ot/WEB-INF/lib/log4j-core-2.13.3.jar backed up to /tmp/tmpxi89fco8/tmp/tmpa4w275ot/WEB-INF/lib/log4j-core-2.13.3.jar.bak
2021-12-21T10:39:17 INFO process_war: Found a VULNERABLE WAR file with: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war
2021-12-21T10:39:17 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war.bak
2021-12-21T10:39:18 INFO process_war: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war.bak
2021-12-21T10:39:21 INFO process_jar: Found a VULNERABLE FILE: /tmp/tmpxv_znca3/WEB-INF/lib/log4j-core-2.13.1.jar
2021-12-21T10:39:21 INFO backup_file: VULNERABLE FILE: /tmp/tmpxv_znca3/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/tmp/tmpxv_znca3/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:22 INFO process_war: Found a VULNERABLE WAR file with: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war
2021-12-21T10:39:22 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war.bak
2021-12-21T10:39:24 INFO process_war: VULNERABLE FILE: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war.bak
2021-12-21T10:39:25 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar
2021-12-21T10:39:25 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:26 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:28 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar
2021-12-21T10:39:28 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar.bak
2021-12-21T10:39:29 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar.bak
2021-12-21T10:39:32 INFO process_jar: Found a VULNERABLE FILE: /tmp/tmprq0yfnd1/WEB-INF/lib/log4j-core-2.13.1.jar
2021-12-21T10:39:32 INFO backup_file: VULNERABLE FILE: /tmp/tmprq0yfnd1/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/tmp/tmprq0yfnd1/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:33 INFO process_war: Found a VULNERABLE WAR file with: /usr/lib/vmware-lookupsvc/webapps/ROOT.war
2021-12-21T10:39:33 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-lookupsvc/webapps/ROOT.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-lookupsvc/webapps/ROOT.war.bak
2021-12-21T10:39:34 INFO process_war: VULNERABLE FILE: /usr/lib/vmware-lookupsvc/webapps/ROOT.war backed up to /tmp/tmpxi89fco8/usr/lib/vmware-lookupsvc/webapps/ROOT.war.bak
2021-12-21T10:39:34 INFO process_jar: Found a VULNERABLE FILE: /usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar
2021-12-21T10:39:35 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:35 INFO process_jar: VULNERABLE FILE: /usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar backed up to /tmp/tmpxi89fco8/usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar.bak
2021-12-21T10:39:37 INFO _patch_file: Found VULNERABLE FILE: /usr/lib/vmware-vmon/java-wrapper-vmon
2021-12-21T10:39:37 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-vmon/java-wrapper-vmon backed up to /tmp/tmpxi89fco8/usr/lib/vmware-vmon/java-wrapper-vmon.bak
2021-12-21T10:39:37 INFO patch_vum: Found a VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/start.ini
2021-12-21T10:39:37 INFO backup_file: VULNERABLE FILE: /usr/lib/vmware-updatemgr/bin/jetty/start.ini backed up to /tmp/tmpxi89fco8/usr/lib/vmware-updatemgr/bin/jetty/start.ini.bak
2021-12-21T10:39:37 INFO print_summary:
=====     Summary     =====
Backup Directory: /tmp/tmpxi89fco8
List of processed java archive files:

/opt/vmware/lib64/log4j-core-2.13.0.jar
/usr/lib/vmware/common-jars/log4j-core-2.13.1.jar
/usr/lib/vmware/common-jars/log4j-core-2.8.2.jar
/usr/lib/vmware/common-jars/log4j-core-2.11.0.jar
/usr/lib/vmware/common-jars/log4j-core-2.11.2.jar
/usr/lib/vmware/cis_upgrade_runner/payload/component-scripts/sso/lstool/lib/log4j-core-2.13.1.jar
/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-fileupload.war
/usr/lib/vmware-updatemgr/bin/jetty/webapps/root.war
/usr/lib/vmware-updatemgr/bin/jetty/webapps/vum-filedownload.war
/usr/lib/vmware-sso/vmware-sts/webapps/ROOT.war
/usr/lib/vmware-sso/vmware-sts/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar
/usr/lib/vmware-dbcc/lib/log4j-core-2.8.2.jar
/usr/lib/vmware-lookupsvc/webapps/ROOT.war
/usr/lib/vmware-lookupsvc/webapps/ROOT/WEB-INF/lib/log4j-core-2.13.1.jar

List of processed configuration files:

/usr/lib/vmware-vmon/java-wrapper-vmon
/usr/lib/vmware-updatemgr/bin/jetty/start.ini

Total fixed: 16

    NOTE: Running this script again with the --dryrun
    flag should now yield 0 vulnerable files.

Log file: /var/log/vmsa-2021-0028_2021_12_21_10_38_20.log
===========================
2021-12-21T10:39:37 INFO start: starting services
2021-12-21T10:52:47 INFO main: Done.
Verify with the dry-run option: python vc_log4j_mitigator.py -r
root@localhost [ /tmp ]# python vc_log4j_mitigator.py -r
2021-12-21T11:10:01 INFO main: Script version: 1.6.0
2021-12-21T11:10:01 INFO main: vCenter type: Version: 7.0.2.00500; Build: 18455184; Deployment type: embedded; Gateway: False; VCHA: False; Windows: False;
2021-12-21T11:10:01 INFO main: Running in dryrun mode.
2021-12-21T11:11:01 INFO print_summary:
=====     Summary     =====

No vulnerable files found!

Total found: 0
Log file: /var/log/vmsa-2021-0028_2021_12_21_11_10_01.log
===========================
2021-12-21T11:11:01 INFO main: Done.

vc_log4j_mitigator.py [-h] – help and further options

root@localhost [ /tmp ]# python vc_log4j_mitigator.py -h
usage: vc_log4j_mitigator.py [-h] [-d dirnames [dirnames ...]] [-a] [-r] [-b BACKUP_DIR] [-l LOG_DIR]

VMSA-2021-0028 vCenter tool; Version: 1.6.0 This tool deletes the JndiLookup.class file from *.jar and *.war files. On Windows systems the tool will by default traverse the folders identified by the VMWARE_CIS_HOME, VMWARE_CFG_DIR, VMWARE_DATA_DIR and VMWARE_RUNTIME_DATA_DIR
variables. On vCenter Appliances the tool will search by default from the root of the filesystem. All modified files are backed up if the process needs to be reversed due to an error.

optional arguments:
  -h, --help            show this help message and exit
  -d dirnames [dirnames ...], --directories dirnames [dirnames ...]
                        space separated list of directories to check recursively for CVE-2021-44228 vulnerable java archive files.
  -a, --accept-services-restart
                        accept the restart of the services without having manual prompt confirmation for the same
  -r, --dryrun          Run the script and log vulnerable files without mitigating them. The vCenter services are not restarted with this option.
  -b BACKUP_DIR, --backup-dir BACKUP_DIR
                        Specify a backup directory to store original files.
  -l LOG_DIR, --log-dir LOG_DIR
                        Specify a directory to store log files.

Links:

Cisco UCS Manager Plugin for VMware vSphere HTML Client (Version 3.0(6))

Cisco has released the 3.0(6) version of the Cisco UCS Manager VMware vSphere HTML client plugin. The UCS Manager vSphere HTML client plugin enables a virtualization administrator to view, manage, and monitor the Cisco UCS physical infrastructure. The plugin provides a physical view of the UCS hardware inventory on the HTML client.

I reported the bug "Host not going into monitoring state after vCenter restart". Thank you for the fix.

Release 3.0(6)

Here are the new features in Release 3.0(6):

  • Custom fault addition for proactive HA monitoring
  • Resolved: host not going into monitoring state after vCenter restart
  • Included defect fixes

VMware vSphere HTML Client Releases

Cisco UCS Manager plug-in is compatible with the following vSphere HTML Client releases:

VMware vSphere HTML Client Version | Cisco UCS Manager Plugin for VMware vSphere Version
6.7 | 3.0(1), 3.0(2), 3.0(3), 3.0(4), 3.0(5), 3.0(6)
7.0 | 3.0(4), 3.0(5), 3.0(6)
7.0u1, 7.0u2 | 3.0(5), 3.0(6)

Note: VMware vSphere HTML Client Version 7.0u3 is not supported.
More info here.

Updated Plan for CPU Support Discontinuation In Future Major vSphere Releases after 7.0 (82794)

During installation of vSphere 7.0 Update 2 and later we can see a new message for Haswell and Broadwell CPUs.

This is the warning message shown by the ESXi installer:

CPU_SUPPORT_WARNING: The CPUs in this host may not be supported in future ESXi releases. Please plan accordingly.
Summary of warnings
  • vSphere 7.0
    • onwards for Intel Sandy Bridge, Intel Ivy Bridge-DT CPUs and AMD Bulldozer CPUs
  • vSphere 7.0 Update 2 and later
    • onwards for Intel Haswell, Broadwell, Avoton CPUs and AMD Piledriver CPUs

I hope we will still be able to work around this in a future major release the same way as today – How to fix "The CPU in this host is not supported by ESXi 7.0.0"? -> allowLegacyCPU=True
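
For the record, the current workaround for unsupported CPUs looks like this – press Shift+O in the ESXi installer boot menu and append the boot option (lab use only, not supported for production):

allowLegacyCPU=true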

Table 1: VMware vSphere Planned Intel CPU Support Discontinuation List

VCG CPU Series Name | CPU Info | Raw CPUIDs | Code Name
Intel Xeon E3-1200 Series | 6.2A | 0x000206A0 | Intel Xeon E3 (Sandy Bridge)
Intel Xeon E3-1100 Series | 6.2A | 0x000206A0 | Intel Xeon E3 (Sandy Bridge)
Intel Xeon E5-1600 Series | 6.2D | 0x000206D0 | SandyBridge-EP WS
Intel Xeon E5-2600 Series | 6.2D | 0x000206D0 | SandyBridge-EP 2S
Intel Xeon E5-1400 Series | 6.2D | 0x000206D0 | SandyBridge-EN
Intel Xeon E5-4600 Series | 6.2D | 0x000206D0 | SandyBridge-EP 4S
Intel Xeon E5-2400 Series | 6.2D | 0x000206D0 | SandyBridge-EN

Intel i7-3600-QE | 6.3A | 0x000306A0 | IvyBridge-DT
Intel Xeon E3-1200-v2 Series | 6.3A | 0x000306A0 | IvyBridge-DT
Intel i7-3500-LE/UE | 6.3A | 0x000306A0 | IvyBridge-DT
Intel i3-3200 Series | 6.3A | 0x000306A0 | IvyBridge-DT
Intel Pentium B925C | 6.3A | 0x000306A0 | IvyBridge-DT-Gladden
Intel Xeon E3-1100-C-v2 Series | 6.3A | 0x000306A0 | IvyBridge-DT-Gladden
Intel Xeon E5-1600-v2 Series | 6.3E | 0x000306E0 | IvyBridge EP
Intel Xeon E5-2600-v2 Series | 6.3E | 0x000306E0 | IvyBridge EP
Intel Xeon E5-2400-v2 Series | 6.3E | 0x000306E0 | IvyBridge-EN
Intel Xeon E5-1400-v2 Series | 6.3E | 0x000306E0 | IvyBridge-EN
Intel Xeon E5-4600-v2 Series | 6.3E | 0x000306E0 | IvyBridge-EP 4S
Intel Xeon E7-8800/4800/2800-v2 | 6.3E | 0x000306E0 | IvyBridge-EX

Intel i7-4700-EQ Series | 6.3C | 0x000306C0 | Haswell-DT
Intel Xeon E3-1200-v3 Series | 6.3C | 0x000306C0 | Haswell-DT
Intel i5-4500-TE Series | 6.3C | 0x000306C0 | Haswell-DT
Intel i3-4300 Series | 6.3C | 0x000306C0 | Haswell-DT

Intel Xeon E5-1600-v3 Series | 6.3F | 0x000306F0 | Haswell-EP
Intel Xeon E5-2600-v3 Series | 6.3F | 0x000306F0 | Haswell-EP
Intel Xeon E5-1400-v3 Series | 6.3F | 0x000306F0 | Haswell-EN
Intel Xeon E5-2400-v3 Series | 6.3F | 0x000306F0 | Haswell-EN
Intel Xeon E5-4600-v3 Series | 6.3F | 0x000306F0 | Haswell-EP
Intel Xeon E7-8800/4800-v3 Series | 6.3F | 0x000306F0 | Intel Haswell-EX

Intel Xeon E3-1200-v4 Series | 6.47 | 0x00040670 | Broadwell-DT
Intel Core i7-5700EQ | 6.47 | 0x00040670 | Broadwell-H

Intel Atom C2700 Series | 6.4D | 0x000406D0 | Intel Avoton 8c
Intel Atom C2300 Series | 6.4D | 0x000406D0 | Intel Avoton 2c
Intel Atom C2500 Series | 6.4D | 0x000406D0 | Intel Avoton 4c

Table 2: VMware vSphere Planned AMD CPU Support Discontinuation List

VCG CPU Series Name | CPU Info | Raw CPUIDs | Code Name
AMD Opteron 6200 Series | 15.01 | 0x00600F10 | Interlagos BullDozer G34
AMD Opteron 4200 Series | 15.01 | 0x00600F10 | Valencia BullDozer C32
AMD Opteron 3200 Series | 15.01 | 0x00600F10 | Zurich AM3
AMD Opteron 4300 Series | 15.02 | 0x00600F20 | Piledriver-C32 (Seoul)
AMD Opteron 3300 Series | 15.02 | 0x00600F20 | Piledriver-AM3 (Dehli)
AMD Opteron 6300 Series | 15.02 | 0x00600F20 | PileDriver-G34 (Abu Dhabi)
AMD Opteron X2250 Series | 15.30 | 0x00630F00 | Steamroller-Berlin
AMD Opteron X1250 Series | 15.30 | 0x00630F00 | Steamroller-Berlin
AMD Opteron X1100 Series | 16.00 | 0x00700F00 | Kyoto
AMD Opteron X2100 Series | 16.00 | 0x00700F00 | Kyoto

Links:

Fault Resilient Memory (FRM) for Cisco UCS

We can see that the annual incidence of uncorrectable memory errors is rising. Here is one possible way to address it – Fault Resilient Memory (FRM).

ESXi supports reliable memory.

Some systems have reliable memory, which is a part of memory that is less likely to have hardware memory errors than other parts of the memory in the system. If the hardware exposes information about the different levels of reliability, ESXi might be able to achieve higher system reliability.

How to enable in Cisco UCS

Configuration is in BIOS policy / Advanced / RAS Memory

8 GB could be enough for the ESXi hypervisor …

This forces the Hypervisor and some core kernel processes to be mirrored between DIMMs so ESXi itself can survive the complete and total failure of a memory DIMM.

# esxcli hardware memory get
    Physical Memory: 540800864256 Bytes
    Reliable Memory: 8589934592 Bytes
    NUMA Node Count: 2 
#  esxcli system settings kernel list | grep useReliableMem
 useReliableMem Bool TRUE TRUE TRUE System is aware of reliable memory. 

Configuring Reliable Memory on a per-virtual machine basis (2146595)

I can also decide to configure Reliable Memory for individual VMs – not only the 8 GB for the hypervisor.

To turn on the feature per VM (a quick check example follows the steps):

  1. Edit the .vmx file using a text editor
  2. Add the parameter:
    sched.mem.reliable = "True"
  3. Save and close the file
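
A quick way to confirm the parameter is in place after editing – the datastore and VM name below are just examples:

grep -i sched.mem.reliable /vmfs/volumes/datastore1/myVM/myVM.vmx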

Conclusion:

  • To enable Fault Resilient Memory (FRM) I had to disable ADDDC Sparing in BIOS policy / Advanced / RAS Memory / Memory RAS configuration
  • With ADDDC and Proactive HA I can avoid about 95% of failures – personally I prefer to use ADDDC
  • The best option would be to have both available in a future firmware …

Interesting links:

Field Notice: FN – 70432 – Improved Memory RAS Features for UCS M5 Platforms – Software Upgrade Recommended

Memory Errors and Dell EMC PowerEdge YX4X Server Memory RAS Features

How to check CPU microcode revision in ESXi

Occasionally ESXi users want to check the CPU microcode. You can check the CPU microcode revision on a running ESXi host in this easy way.

[root@tanzu-esxi-1:~] vsish -e cat /hardware/cpu/cpuList/0 | grep -i -E 'family|model|stepping|microcode|revision'
   Family:0x06
   Model:0x25
   Stepping:0x01
   Number of microcode updates:0
   Original Revision:0x0000001f
   Current Revision:0x0000001f
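
If you want to see the microcode revision for every pCPU at once, a small shell loop like this should work (a sketch, adjust as needed):

for CPU in $(vsish -e ls /hardware/cpu/cpuList); do
   echo "pcpu ${CPU%/}:"
   vsish -e cat /hardware/cpu/cpuList/${CPU%/} | grep -i -E 'microcode|revision'
done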

More info: Using the ESXi 6.0 CPU Microcode Loading Feature

New ESXCLI Commands in vSphere 7.0

In ESXi 7 / vSphere 7.0 the esxcli command-line interface has been extended with new features.

For reference, see the full ESXCLI command list for ESXi 7.0.

Here is a list of the new and extended namespaces:

NEW ESXi 7.0 ESXCLI Command Reference

Command group | Command | Description
daemon control | restart | Restart the daemons for the specified solution ID.
daemon control | start | Start the daemons for the specified solution ID.
daemon control | stop | Stop the daemons for the specified DSDK built solution.
daemon info | get | Get running daemon status for the specified solution ID.
daemon info | list | List the installed DSDK built daemons.
hardware pci pcipassthru | list | Display PCI device passthru configuration.
hardware pci pcipassthru | set | Configure PCI device for passthrough.
network nic attachment | add | Attach one uplink as a branch to a trunk uplink with specified VLAN ID.
network nic attachment | list | Show uplink attachment information.
network nic attachment | remove | Detach a branch uplink from its trunk.
network nic dcb status | get | Get the DCB information for a NIC.
network nic hwCap activated | list | List activated hardware capabilities of physical NICs.
network nic hwCap supported | list | List supported hardware capabilities of physical NICs.
nvme adapter | list | List all NVMe adapters.
nvme controller | identify | Get NVMe Identify Controller data.
nvme controller | list | List all NVMe controllers.
nvme fabrics | connect | Connect to an NVMe controller on a specified target through an adapter.
nvme fabrics connection | delete | Delete persistent NVMe over Fabrics connection entries. Reboot required for settings to take effect.
nvme fabrics connection | list | List all persistent NVMe over Fabrics connection entries.
nvme fabrics | disable | Disable NVMe over Fabrics for a transport protocol.
nvme fabrics | disconnect | Disconnect a specified NVMe controller on the specified NVMe adapter.
nvme fabrics | discover | Discover NVMe controllers on the specified target port through the specified NVMe adapter and list all of them.
nvme fabrics | enable | Enable NVMe over Fabrics for a transport protocol.
nvme info | get | Get NVMe host information.
nvme namespace | identify | Get NVMe Identify Namespace data.
nvme namespace | list | List all NVMe namespaces.
rdma iser params | set | Change iSER kernel driver settings.
software addon | get | Display the installed Addon on the host.
software | apply | Applies a complete image with a software spec that specifies base image, addon and components to install on the host.
software baseimage | get | Display the installed baseimage on the host.
software component | apply | Installs Component packages from a depot. Components may be installed, upgraded. WARNING: If your installation requires a reboot, you need to disable HA first.
software component | get | Displays detailed information about one or more installed Components
software component | list | Lists the installed Component packages
software component | remove | Removes components from the host. WARNING: If your installation requires a reboot, you need to disable HA first.
software component signature | verify | Verifies the signatures of installed Components and displays the name, version, vendor, acceptance level and the result of signature verification for each of them.
software component vib | list | List VIBs in an installed Component.
software sources addon | get | Display details about Addons in the depots.
software sources addon | list | List all Addons in the depots.
software sources baseimage | get | Display details about a Base Image from the depot.
software sources baseimage | list | List all the Base Images in a depot.
software sources component | get | Displays detailed information about one or more Components in the depot
software sources component | list | List all the Components from depots.
software sources component vib | list | List VIB packages in the specified Component in a depot.
storage core device smart daemon | start | Enable smartd.
storage core device smart daemon status | get | Get status of smartd.
storage core device smart daemon | stop | Disable smartd.
storage core device smart status | get | Get status of SMART stats on a device.
storage core device smart status | set | Enable or disable SMART stats gathering on a device.
system ntp config | get | Display Network Time Protocol configuration.
system ntp | get | Display Network Time Protocol configuration
system ntp | set | Configures the ESX Network Time Protocol agent.
system ptp | get | Display Precision Time Protocol configuration
system ptp | set | Configures the ESX Precision Time Protocol agent.
system ptp stats | get | Report operational state of Precision Time Protocol Daemon
vm appinfo | get | Get the state of appinfo component on the ESXi host.
vm appinfo | set | Modify the appinfo component on the ESXi host.
vsan network security | get | Get vSAN network security configurations.
vsan network security | set | Configure vSAN network security settings.
The ESXCLI command set allows you to run common system administration commands against vSphere systems from an administration server of your choice. The actual list of commands depends on the system that you are running on. Run esxcli --help for a list of commands on your system.
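
A few of the new 7.0 namespaces can be tried without any parameters, for example:

esxcli software baseimage get
esxcli software addon get
esxcli system ntp get
esxcli nvme adapter list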

For reference, see the full ESXCLI command list for ESXi 6.x.

ESXi 7.0 and Mellanox ConnectX 2 – support fix patch

I upgraded vCenter to version 7 successfully, but failed when it came to updating my hosts from 6.7 to 7.

I got a warning stating that some PCI devices were incompatible but tried anyway. It turned out that this did not end well: my Mellanox ConnectX-2 wasn't showing up as an available physical NIC.

First, it was necessary to find the VID/DID device code for the MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s].

Partner | Product | Driver | VID | DID
Mellanox | MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s] | mlx4_core | 15b3 | 6750

The whole table can be checked here, or search for mlx to see the full list of Mellanox cards.

Deprecated devices supported by VMKlinux drivers

These devices were only supported in 6.7 or earlier by a VMKlinux inbox driver. They are no longer supported because all support for VMKlinux drivers and their devices has been completely removed in 7.0.
Partner | Product | Driver | VID
Mellanox | MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s] | mlx4_core | 15b3
Mellanox | MT26488 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE Virtualization+] | mlx4_core | 15b3
Mellanox | MT26438 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE Virtualization+] | mlx4_core | 15b3
Mellanox | MT25408 [ConnectX VPI - 10GigE / IB SDR] | mlx4_core | 15b3
Mellanox | MT27560 Family | mlx4_core | 15b3
Mellanox | MT27551 Family | mlx4_core | 15b3
Mellanox | MT27550 Family | mlx4_core | 15b3
Mellanox | MT27541 Family | mlx4_core | 15b3
Mellanox | MT27540 Family | mlx4_core | 15b3
Mellanox | MT27531 Family | mlx4_core | 15b3
Mellanox | MT25448 [ConnectX EN 10GigE, PCIe 2.0 2.5GT/s] | mlx4_core | 15b3
Mellanox | MT25408 [ConnectX EN 10GigE 10GBaseT, PCIe Gen2 5GT/s] | mlx4_core | 15b3
Mellanox | MT27561 Family | mlx4_core | 15b3
Mellanox | MT26468 [ConnectX EN 10GigE, PCIe 2.0 5GT/s Virtualization+] | mlx4_core | 15b3
Mellanox | MT26418 [ConnectX VPI - 10GigE / IB DDR, PCIe 2.0 5GT/s] | mlx4_core | 15b3
Mellanox | MT27510 Family | mlx4_core | 15b3
Mellanox | MT26488 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE Virtualization+] | mlx4_core | 15b3
Mellanox | MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s] | mlx4_core | 15b3
Mellanox | MT25418 [ConnectX VPI - 10GigE / IB DDR, PCIe 2.0 2.5GT/s] | mlx4_core | 15b3
Mellanox | MT27530 Family | mlx4_core | 15b3
Mellanox | MT27521 Family | mlx4_core | 15b3
Mellanox | MT27511 Family | mlx4_core | 15b3
Mellanox | MT25408 [ConnectX EN 10GigE 10BASE-T, PCIe 2.0 2.5GT/s] | mlx4_core | 15b3
Mellanox | MT25408 [ConnectX IB SDR Flash Recovery] | mlx4_core | 15b3
Mellanox | MT25400 Family [ConnectX-2 Virtual Function] | mlx4_core | 15b3

Deprecated devices supported by VMKlinux drivers – full table list

How to fix it? I put together a small script, ESXi7-enable-nmlx4_co.v00.sh, to do it. Notes:

  • edit the path to your datastore – the example uses /vmfs/volumes/ISO
  • nmlx4_co.v00.orig is a backup of the original nmlx4_co.v00
  • the new VIB is unsigned – an ALERT message will appear in the log during reboot:
    • ALERT: Failed to verify signatures of the following vib
  • an ESXi reboot is needed to load the new driver
# back up the original nmlx4_co.v00 and make a working copy
cp /bootbank/nmlx4_co.v00 /vmfs/volumes/ISO/nmlx4_co.v00.orig
cp /bootbank/nmlx4_co.v00 /vmfs/volumes/ISO/n.tar
cd /vmfs/volumes/ISO/
# unpack the vmtar archive into a plain tar and extract it
vmtar -x n.tar -o output.tar
rm -f n.tar
mkdir tmp-network
mv output.tar tmp-network/output.tar
cd tmp-network
tar xf output.tar
rm output.tar
# append the ConnectX-2 PCI ID (15b3:6750) to the nmlx4_core driver map
echo '' >> /vmfs/volumes/ISO/tmp-network/etc/vmware/default.map.d/nmlx4_core.map
echo 'regtype=native,bus=pci,id=15b36750..............,driver=nmlx4_core' >> /vmfs/volumes/ISO/tmp-network/etc/vmware/default.map.d/nmlx4_core.map
cat /vmfs/volumes/ISO/tmp-network/etc/vmware/default.map.d/nmlx4_core.map
# add the device description to the PCI ID list
echo '        6750  Mellanox ConnectX-2 Dual Port 10GbE '                 >> /vmfs/volumes/ISO/tmp-network/usr/share/hwdata/default.pciids.d/nmlx4_core.ids 
cat /vmfs/volumes/ISO/tmp-network/usr/share/hwdata/default.pciids.d/nmlx4_core.ids
# repack everything into a vmtar archive and replace the module in the bootbank
tar -cf /vmfs/volumes/ISO/FILE.tar *
cd /vmfs/volumes/ISO/
vmtar -c FILE.tar -o output.vtar
gzip output.vtar
mv output.vtar.gz nmlx4_co.v00
rm FILE.tar
cp /vmfs/volumes/ISO/nmlx4_co.v00 /bootbank/nmlx4_co.v00
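
After the reboot you can confirm that the card is claimed by the native driver – the vmnic number below is just an example from my host:

esxcli network nic list
esxcli network nic get -n vmnic2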

The script adds HW ID support in the file nmlx4_core.map:

*********************************************************************
/vmfs/volumes/ISO/tmp-network/etc/vmware/default.map.d/nmlx4_core.map
*********************************************************************
regtype=native,bus=pci,id=15b301f6..............,driver=nmlx4_core
regtype=native,bus=pci,id=15b301f8..............,driver=nmlx4_core
regtype=native,bus=pci,id=15b31003..............,driver=nmlx4_core
regtype=native,bus=pci,id=15b31004..............,driver=nmlx4_core
regtype=native,bus=pci,id=15b31007..............,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b30003......,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b30006......,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b30007......,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b30008......,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b3000c......,driver=nmlx4_core
regtype=native,bus=pci,id=15b3100715b3000d......,driver=nmlx4_core
regtype=native,bus=pci,id=15b36750..............,driver=nmlx4_core
------------------------->Last Line is FIX

And adds HW ID support in the file nmlx4_core.ids:

**************************************************************************************
/vmfs/volumes/FreeNAS/ISO/tmp-network/usr/share/hwdata/default.pciids.d/nmlx4_core.ids 
**************************************************************************************
#
# This file is mechanically generated.  Any changes you make
# manually will be lost at the next build.
#
# Please edit <driver>_devices.py file for permanent changes.
#
# Vendors, devices and subsystems.
#
# Syntax (initial indentation must be done with TAB characters):
#
# vendor  vendor_name
#       device  device_name                            <-- single TAB
#               subvendor subdevice  subsystem_name    <-- two TABs

15b3  Mellanox Technologies
        01f6  MT27500 [ConnectX-3 Flash Recovery]
        01f8  MT27520 [ConnectX-3 Pro Flash Recovery]
        1003  MT27500 Family [ConnectX-3]
        1004  MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
        1007  MT27520 Family [ConnectX-3 Pro]
                15b3 0003  ConnectX-3 Pro VPI adapter card; dual-port QSFP; FDR IB (56Gb/s) and 40GigE (MCX354A-FCC)
                15b3 0006  ConnectX-3 Pro EN network interface card 40/56GbE dual-port QSFP(MCX314A-BCCT )
                15b3 0007  ConnectX-3 Pro EN NIC; 40GigE; dual-port QSFP (MCX314A-BCC)
                15b3 0008  ConnectX-3 Pro VPI adapter card; single-port QSFP; FDR IB (56Gb/s) and 40GigE (MCX353A-FCC)
                15b3 000c  ConnectX-3 Pro EN NIC; 10GigE; dual-port SFP+ (MCX312B-XCC)
                15b3 000d  ConnectX-3 Pro EN network interface card; 10GigE; single-port SFP+ (MCX311A-XCC)
        6750  Mellanox ConnectX-2 Dual Port 10GbE
-------->Last Line is FIX

After reboot I could see support for the MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s].

The only remaining issue is the ALERT: Failed to verify signatures of the following vib(s): [nmlx4-core].

2020-XX-XXTXX:XX:44.473Z cpu0:2097509)ALERT: Failed to verify signatures of the following vib(s): [nmlx4-core]. All tardisks validated
2020-XX-XXTXX:XX:47.909Z cpu1:2097754)Loading module nmlx4_core ...
2020-XX-XXTXX:XX:47.912Z cpu1:2097754)Elf: 2052: module nmlx4_core has license BSD
2020-XX-XXTXX:XX:47.921Z cpu1:2097754)<NMLX_INF> nmlx4_core: init_module called
2020-XX-XXTXX:XX:47.921Z cpu1:2097754)Device: 194: Registered driver 'nmlx4_core' from 42
2020-XX-XXTXX:XX:47.921Z cpu1:2097754)Mod: 4845: Initialization of nmlx4_core succeeded with module ID 42.
2020-XX-XXTXX:XX:47.921Z cpu1:2097754)nmlx4_core loaded successfully.
2020-XX-XXTXX:XX:47.951Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_core_Attach - (nmlx4_core_main.c:2476) running
2020-XX-XXTXX:XX:47.951Z cpu1:2097754)DMA: 688: DMA Engine 'nmlx4_core' created using mapper 'DMANull'.
2020-XX-XXTXX:XX:47.951Z cpu1:2097754)DMA: 688: DMA Engine 'nmlx4_core' created using mapper 'DMANull'.
2020-XX-XXTXX:XX:47.951Z cpu1:2097754)DMA: 688: DMA Engine 'nmlx4_core' created using mapper 'DMANull'.
2020-XX-XXTXX:XX:49.724Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_ChooseRoceMode - (nmlx4_core_main.c:382) Requested RoCE mode RoCEv1
2020-XX-XXTXX:XX:49.724Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_ChooseRoceMode - (nmlx4_core_main.c:422) Requested RoCE mode is supported - choosing RoCEv1
2020-XX-XXTXX:XX:49.934Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_CmdInitHca - (nmlx4_core_fw.c:1408) Initializing device with B0 steering support
2020-XX-XXTXX:XX:50.561Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_InterruptsAlloc - (nmlx4_core_main.c:1744) Granted 38 MSIX vectors
2020-XX-XXTXX:XX:50.561Z cpu1:2097754)<NMLX_INF> nmlx4_core: 0000:05:00.0: nmlx4_InterruptsAlloc - (nmlx4_core_main.c:1766) Using MSIX
2020-XX-XXTXX:XX:50.781Z cpu1:2097754)Device: 330: Found driver nmlx4_core for device 0xxxxxxxxxxxxxxxxxxxxxxx

Some 10 Gbps tuning tests look great between 2x ESXi 7.0 hosts with 2x MT26448:

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-120.00 sec   131 GBytes  9380 Mbits/sec    0             sender
[  4]   0.00-120.00 sec   131 GBytes  9380 Mbits/sec                  receiver
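
The output above comes from an iperf3-style run; a test along these lines between the two hosts (either inside test VMs or with the iperf3 binary bundled with ESXi) should give comparable results – the IP address is only a placeholder:

iperf3 -s                          # on the first host
iperf3 -c 192.168.10.11 -t 120     # on the second host, 120-second run
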
RDMA support for RoCEv1

Only RoCEv1 is supported, because:

  • RoCEv2 support starts one generation up, with the Mellanox ConnectX-3 Pro
  • We can see RoCEv2 options in the nmlx4_core driver, but when I enabled enable_rocev2 it did NOT work
[root@esxi~] esxcli system module parameters list -m nmlx4_core
Name                    Type  Value  Description
----------------------  ----  -----  -----------
enable_64b_cqe_eqe      int          Enable 64 byte CQEs/EQEs when the the FW supports this
enable_dmfs             int          Enable Device Managed Flow Steering
enable_qos              int          Enable Quality of Service support in the HCA
enable_rocev2           int          Enable RoCEv2 mode for all devices
enable_vxlan_offloads   int          Enable VXLAN offloads when supported by NIC
log_mtts_per_seg        int          Log2 number of MTT entries per segment
log_num_mgm_entry_size  int          Log2 MGM entry size, that defines the number of QPs per MCG, for example: value 10 results in 248 QP per MGM entry
msi_x                   int          Enable MSI-X
mst_recovery            int          Enable recovery mode(only NMST module is loaded)
rocev2_udp_port         int          Destination port for RoCEv2
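
For completeness, a module parameter such as enable_rocev2 is set with the usual esxcli syntax and needs a reboot (or module reload) to take effect:

esxcli system module parameters set -m nmlx4_core -p "enable_rocev2=1"
esxcli system module parameters list -m nmlx4_core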

It is officially NOT supported. Use it only in your home lab. But we can save some money on new 10 Gbps network cards.