Add nConnect Support to NFS v4.1

Starting with vSphere 8.0 Update 3, nConnect support has been added for NFS v4.1 datastores. nConnect enables multiple connections to a single IP address within a session, extending session trunking functionality to that IP. Because multipathing and nConnect coexist, network configurations become more flexible and efficient.

Benefits of nConnect

Traditionally, vSphere NFSv4.1 implementations create a single TCP/IP connection from each host to each datastore. This setup can become a bottleneck in scenarios requiring high performance. By enabling multiple connections per IP, nConnect significantly enhances data throughput and performance. Here’s a quick overview of the benefits:

  • Increased Performance: Multiple connections can be established per session, reducing congestion and improving data transfer speeds.
  • Flexibility: Customers can configure datastores with multiple IPs to the same server, as well as multiple connections per IP.
  • Scalability: Supports up to 8 connections per IP, enhancing scalability for demanding workloads.

Configuring nConnect

Adding a New NFSv4.1 Datastore

When adding a new NFSv4.1 datastore, you can specify the number of connections at the time of the mount using the following command:

esxcli storage nfs41 add -H <host> -v <volume-label> -s <remote_share> -c <number_of_connections>
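For illustration, mounting a hypothetical share with four connections might look like this (the IP address, volume label, and export path are placeholders, not values from a real environment):

esxcli storage nfs41 add -H 192.168.100.10 -v NFS41-DS01 -s /export/ds01 -c 4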

By default, the maximum number of connections per session is set to 4. However, this can be increased to 8 using advanced NFS options. Here’s how you can configure it:

  1. Set the maximum number of connections to 8: esxcfg-advcfg -s 8 /NFS41/MaxNConnectConns
  2. Verify the configuration: esxcfg-advcfg -g /NFS41/MaxNConnectConns

The total number of connections used across all mounted NFSv4.1 datastores is limited to 256.
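For perspective, at the maximum of 8 connections per session, the 256-connection budget would be exhausted by 32 datastores (32 × 8 = 256), so plan per-datastore connection counts accordingly.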

Modifying Connections for an Existing NFSv4.1 Datastore

For an existing NFSv4.1 datastore, the number of connections can be increased or decreased at runtime using the following command:

esxcli storage nfs41 param set -v <volume-label> -c <number_of_connections>
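For example, to raise the hypothetical datastore from the earlier example to eight connections and then confirm the result (the volume label is a placeholder; esxcli storage nfs41 list shows all mounted NFSv4.1 volumes):

esxcli storage nfs41 param set -v NFS41-DS01 -c 8
esxcli storage nfs41 list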

Multipathing and nConnect Coexistence

There is no impact on multipathing when using nConnect. Both NFSv4.1 nConnect and multipaths can coexist seamlessly. Connections are created for each of the multipathing IPs, allowing for enhanced redundancy and performance.

Example Configuration with Multiple IPs

To add a datastore with multiple IPs and specify the number of connections, use:

esxcli storage nfs41 add -H <IP1,IP2> -v <volume-label> -s <remote_share> -c <number_of_connections>

This command ensures that multiple connections are created for each of the specified IPs, leveraging the full potential of nConnect.
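As a worked illustration (placeholder addresses): with two server IPs and -c 4, the host establishes four connections to each IP, eight TCP connections in total:

esxcli storage nfs41 add -H 10.10.10.1,10.10.10.2 -v NFS41-DS02 -s /export/ds02 -c 4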

Summary

The introduction of nConnect support in vSphere 8.0 U3 for NFS v4.1 datastores marks a significant enhancement in network performance and flexibility. By allowing multiple connections per IP, nConnect addresses the limitations of single connection setups, providing a robust solution for high-performance environments. Whether you’re configuring a new datastore or updating an existing one, nConnect offers a scalable and efficient way to manage your NFS workloads.

https://core.vmware.com/resource/whats-new-vsphere-8-core-storage

Simplified Guide: How to Convert VM Snapshots into Memory Dumps Using vmss2core

Introduction

In the complex world of virtualization, developers often face the challenge of debugging guest operating systems and applications. A practical solution lies in converting virtual machine snapshots to memory dumps. This blog post delves into how you can efficiently use the vmss2core tool to transform a VM checkpoint, be it a snapshot or suspend file, into a core dump file, compatible with standard debuggers.

Step-by-Step Guide

The process breaks down into the following steps:

Step 1: Create and Download the Virtual Machine Snapshot Files (.vmsn and .vmem)
  1. Select the Problematic Virtual Machine
    • In your VMware environment, identify and select the virtual machine experiencing issues.
  2. Replicate the Issue
    • Attempt to replicate the problem within the virtual machine to ensure the snapshot captures the relevant state.
  3. Take a Snapshot
    • Right-click on the virtual machine.
    • Navigate to Snapshots → Take snapshot
    • Enter a name for the snapshot.
    • Ensure “Snapshot the Virtual Machine’s memory” is checked
    • Click ‘CREATE’ to proceed.
  4. Access VM Settings
    • Right-click on the virtual machine again.
    • Select ‘Edit Settings’
  5. Navigate to Datastores
    • Choose the virtual machine and click on ‘Datastores’.
    • Click on the datastore name
  6. Download the Snapshot
    • Locate the .vmsn and .vmem files (the VMware snapshot files).
    • Select the file, click ‘Download’, and save it locally.
Step 2: Locate Your vmss2core Installation
  • For Windows (32bit): Navigate to C:\Program Files\VMware\VMware Workstation\
  • For Windows (64bit): Go to C:\Program Files (x86)\VMware\VMware Workstation\
  • For Linux: Access /usr/bin/
  • For Mac OS: Find it in /Library/Application Support/VMware Fusion/

Note: If vmss2core isn’t in these directories, download it from New Flings Link (use at your own risk).

Step 3: Run the vmss2core Tool
vmss2core.exe -N VM-Snapshot1.vmsn VM-Snapshot1.vmem
vmss2core version 20800274 Copyright (C) 1998-2022 VMware, Inc. All rights reserved.
Started core writing.
Writing note section header.
Writing 1 memory section headers.
Writing notes.
... 100 MBs written.
... 200 MBs written.
... 300 MBs written.
... 400 MBs written.
... 500 MBs written.
... 600 MBs written.
... 700 MBs written.
... 800 MBs written.
... 900 MBs written.
... 1000 MBs written.
... 1100 MBs written.
... 1200 MBs written.
... 1300 MBs written.
... 1400 MBs written.
... 1500 MBs written.
... 1600 MBs written.
... 1700 MBs written.
... 1800 MBs written.
... 1900 MBs written.
... 2000 MBs written.
Finished writing core.
  • For general use: vmss2core.exe -W [VM_name].vmsn [VM_name].vmem
  • For Windows 8/8.1, Server 2012, 2016, 2019: vmss2core.exe -W8 [VM_name].vmsn [VM_name].vmem
  • For Linux: ./vmss2core-Linux64 -N [VM_name].vmsn [VM_name].vmem

Note: Replace [VM_name] with your virtual machine’s name. The flag -W, -W8, or -N corresponds to the guest OS.
# vmss2core.exe
vmss2core version 20800274 Copyright (C) 1998-2022 VMware, Inc. All rights reserved.
A tool to convert VMware checkpoint state files into formats
that third party debugger tools understand. It can handle both
suspend (.vmss) and snapshot (.vmsn) checkpoint state files
(hereafter referred to as a 'vmss file') as well as both
monolithic and non-monolithic (separate .vmem file) encapsulation
of checkpoint state data.

Usage:

GENERAL:  vmss2core [[options] | [-l linuxoffsets options]] \
              <vmss file> [<vmem file>]

The "-l" option specifies offsets (a stringset) within the
Linux kernel data structures, which is used by -P and -N modes.
It is ignored with other modes. Please use "getlinuxoffsets"
to automatically generate the correct stringset value for your
kernel, see README.txt for additional information.

Without options one vmss.core<N> per vCPU with linear view of
memory is generated. Other types of core files and output can
be produced with these options:

-q      Quiet(er) operation.
-M      Create core file with physical memory view (vmss.core).
-l str  Offset stringset expressed as 0xHEXNUM,0xHEXNUM,... .
-N      Red Hat crash core file for arbitrary Linux version
        described by the "-l" option (vmss.core).
-N4     Red Hat crash core file for Linux 2.4 (vmss.core).
-N6     Red Hat crash core file for Linux 2.6 (vmss.core).
-O <x>  Use <x> as the offset of the entrypoint.
-U <i>  Create linear core file for vCPU <i> only.
-P      Print list of processes in Linux VM.
-P<pid> Create core file for Linux process <pid> (core.<pid>).
-S      Create core for 64-bit Solaris (vmcore.0, unix.0).
        Optionally specify the version: -S112 -S64SYM112
        for 11.2.
-S32    Create core for 32-bit Solaris (vmcore.0, unix.0).
-S64SYM Create text symbols for 64-bit Solaris (solaris.text).
-S32SYM Create text symbols for 32-bit Solaris (solaris.text).
-W      Create WinDbg file (memory.dmp) with commonly used
        build numbers ("2195" for Win32, "6000" for Win64).
-W<num> Create WinDbg file (memory.dmp), with <num> as the
        build number (for example: "-W2600").
-WK     Create a Windows kernel memory only dump file (memory.dmp).
-WDDB<num> or -W8DDB<num>
        Create WinDbg file (memory.dmp), with <num> as the
        debugger data block address in hex (for example: "-W12ac34de").
-WSCAN  Scan all of memory for Windows debugger data blocks
        (instead of just low 256 MB).
-W8     Generate a memory dump file from a suspended Windows 8 VM.
-X32 <mach_kernel>  Create core for 32-bit Mac OS.
-X64 <mach_kernel>  Create core for 64-bit Mac OS.
-F      Create core for an EFI firmware exception.
-F<adr> Create core for an EFI firmware exception with system context
        at the given guest virtual address.
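Once the core file is written, open it in the debugger that matches the flag you used. A minimal sketch (the path is a placeholder): a memory.dmp produced with -W or -W8 opens in WinDbg, while a vmss.core produced with -N can be loaded by the Red Hat crash utility together with the guest's debug-symbol kernel:

crash /usr/lib/debug/lib/modules/<kernel-version>/vmlinux vmss.core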


How to Rebuild a VMX File from vmware.log on an ESXi 8 Host via SSH

Introduction

Rebuilding a VMX file from vmware.log in VMware ESXi can be crucial for restoring a virtual machine’s configuration. This guide walks you through the process over SSH, following KB 1023880 but updated for ESXi 8.0. It was necessary to add #!/bin/ash because of the error “Operation not permitted”.

Step 1: SSH into Your ESXi Host

First, ensure SSH is enabled on your ESXi host. Then, use an SSH client to connect to the host.

Step 2: Navigate to the Virtual Machine’s Directory

Change to the directory where your VM resides. This is usually under /vmfs/volumes/.

cd /vmfs/volumes/name-volume/name-vm

Step 3: Create and Prepare the Script File

Create a new file named vmxrebuild.sh and make it executable:

touch vmxrebuild.sh && chmod +x vmxrebuild.sh

Step 4: Edit the Script File for ESXi 8

Edit the vmxrebuild.sh file using the vi editor:

  1. Run vi vmxrebuild.sh.
  2. Press i to enter insert mode.
  3. Copy and paste the following script (adjust for your ESXi host version).
  4. Press ESC, then type :wq! to save and exit.

Script Content for ESXi 8.0:

#!/bin/ash
# Recover the original .vmx file name from vmware.log
VMXFILENAME=$(sed -n 's/^.*Config file: .*\/\(.\+\)$/\1/p' vmware.log)
# Write the vmx header (\041 is '!', so this produces "#!/usr/bin/vmware")
echo -e "#\041/usr/bin/vmware" > ${VMXFILENAME}
echo '.encoding = "UTF-8"' >> ${VMXFILENAME}
# Extract every key = value pair from the logged CONFIGURATION dictionary
sed -n '/DICT --- CONFIGURATION/,/DICT ---/ s/^.*DICT \+\(.\+\) = \(.\+\)$/\1 = \2/p' vmware.log >> ${VMXFILENAME}

Step 5: Run the Script

Execute the script to rebuild the VMX file:

./vmxrebuild.sh

Conclusion

This process extracts the necessary configuration details from the vmware.log file and recreates the VMX file, which is vital for VM configuration. Always back up your VM files before performing such operations.

vCenter Server 8.0 U2 Issue with Edit Settings Virtual Machine Hardware 9 or older

Introduction

I was unable to manage virtual machines with virtual hardware version 9 or older via the vSphere Client while they were in a powered-on state.

Symptoms

  1. vCenter Server Version: The problem is specific to vCenter Server version 8.0 U2 – 22385739.
  2. Virtual Machine Hardware Version: Affected VMs are those with hardware version 9 or below.
  3. VM State: The issue occurs when the Virtual Machine is in a powered-on state.
  4. UI Glitches: In the vSphere Client, when attempting to open the ‘Edit Settings’ for the affected VMs, users notice red exclamation marks next to the Virtual Hardware and VM Options tabs. Additionally, the rest of the window appears empty, hindering any further action.

Impact and Risks:

The primary impact of this issue is a significant management challenge:

  • Users are unable to manage Virtual Machines with Virtual Hardware Version 9 or older through the vSphere Client while they remain powered on. This limitation can affect routine operations, maintenance, and potentially urgent modifications needed for these VMs.

Workarounds:

In the meantime, users can employ either of the following workarounds to manage their older VMs effectively:

  1. Power Off the VM: By powering off the VM, the ‘Edit Settings’ window should function correctly. While this is not ideal for VMs that need to remain operational, it can be a temporary solution for making necessary changes.
  2. Use ESXi Host Client: Alternatively, users can connect directly to the ESXi Host Client to perform the ‘Edit Settings’ operations. This method allows the VM to remain powered on, which is beneficial for critical systems that cannot afford downtime.

Resolution:

Keep an eye on updates from VMware for a permanent resolution to this issue: Edit settings window fails to load on Virtual Machines with virtual hardware version 9 or older on vCenter Server 8.0U2 (94979).

Private AI in HomeLAB: Affordable GPU Solution with NVIDIA Tesla P40

For Private AI in HomeLAB, I was searching for budget-friendly GPUs with a minimum of 24GB RAM. Recently, I came across the refurbished NVIDIA Tesla P40 on eBay, which boasts some intriguing specifications:

  • GPU Chip: GP102
  • Cores: 3840
  • TMUs: 240
  • ROPs: 96
  • Memory Size: 24 GB
  • Memory Type: GDDR5
  • Bus Width: 384 bit

Since the NVIDIA Tesla P40 comes in a full-profile form factor, I needed to acquire a PCIe riser card.

A PCIe riser card, commonly known as a “riser card,” is a hardware component essential in computer systems for facilitating the connection of expansion cards to the motherboard. Its primary role comes into play when space limitations or specific orientation requirements prevent the direct installation of expansion cards into the PCIe slots on the motherboard.

Furthermore, I needed to ensure adequate cooling, but this posed no issue. I utilized a 3D model created by MiHu_Works for a Tesla P100 blower fan adapter, which you can find at this link: Tesla P100 Blower Fan Adapter.

As for the fan, the Titan TFD-B7530M12C served the purpose effectively. You can find it on Amazon: Titan TFD-B7530M12C.

Currently, I am using a single VM with PCIe pass-through. However, it was necessary to implement specific Advanced VM parameters:

  • pciPassthru.use64bitMMIO = true
  • pciPassthru.64bitMMIOSizeGB = 64
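For reference, this is how those two parameters look when added directly to the VM’s .vmx file, a minimal sketch of the same settings you can enter under Edit Settings → VM Options → Advanced → Configuration Parameters:

pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"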

Now, you might wonder about the performance. It’s outstanding, delivering speeds 16x to 26x faster than the CPU. To give you an idea of the performance, I ran a llama-bench test:

| model (pp 512)        | CPU t/s | GPU t/s | Acceleration |
| --------------------- | ------: | ------: | -----------: |
| llama 7B mostly Q4_0  |    9.50 |  155.37 |          16x |
| llama 13B mostly Q4_0 |    5.18 |  134.74 |          26x |
./llama-bench -t 8
| model                          |       size |     params | backend    |    threads | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---------: | ---------- | ---------------: |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CPU        |          8 | pp 512     |      9.50 ± 0.07 |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CPU        |          8 | tg 128     |      8.74 ± 0.12 |
./llama-bench -ngl 3800
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1
| model                          |       size |     params | backend    |  ngl | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---: | ---------- | ---------------: |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CUDA       | 3800 | pp 512     |    155.37 ± 1.26 |
| llama 7B mostly Q4_0           |   3.56 GiB |     6.74 B | CUDA       | 3800 | tg 128     |      9.31 ± 0.19 |
./llama-bench -t 8 -m ./models/13B/ggml-model-q4_0.gguf
| model                          |       size |     params | backend    |    threads | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---------: | ---------- | ---------------: |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CPU        |          8 | pp 512     |      5.18 ± 0.00 |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CPU        |          8 | tg 128     |      4.63 ± 0.14 |
./llama-bench -ngl 3800 -m ./models/13B/ggml-model-q4_0.gguf
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1
| model                          |       size |     params | backend    |  ngl | test       |              t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ---: | ---------- | ---------------: |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CUDA       | 3800 | pp 512     |    134.74 ± 1.29 |
| llama 13B mostly Q4_0          |   6.86 GiB |    13.02 B | CUDA       | 3800 | tg 128     |      8.42 ± 0.10 |

Feel free to explore this setup for your Private AI in HomeLAB.

Exciting Update: Cisco Unveils UCS Manager VMware vSphere 8U2 HTML Client Plugin Version 4.0(0)

I am thrilled to share my experience with the latest UCSM plugin 4.0 for VMware vSphere 8U2, a remarkable tool that has significantly enhanced our virtualization management capabilities. I have tested its functionality across an extensive network of approximately 13 UCSM domains and 411 ESXi 8U2 hosts. A notable instance of its efficacy was observed with Alert F1236, where the Proactive HA feature seamlessly transitioned the blade into Quarantine mode, showcasing the plugin’s advanced automation capabilities.

However, I did encounter a challenge with the configuration of Custom Alerts, particularly Alert F1705. Despite my efforts, Proactive HA failed to activate, suggesting a potential misconfiguration on my part. To streamline this process, I propose the integration of Alert F1705 into the default alert settings, thereby simplifying the setup and ensuring more efficient system monitoring.

The release of Cisco’s 4.0(0) version of the UCS Manager VMware vSphere 8U2 HTML remote client plugin marks a significant advancement in the field of virtualization administration. This plugin not only offers a comprehensive physical view of the UCS hardware inventory through the HTML client but also enhances the overall management and monitoring of the Cisco UCS physical infrastructure.

Key functionalities provided by this plugin include:

  1. Detailed Physical Hierarchy View: Gain a clear understanding of the Cisco UCS physical structure.
  2. Comprehensive Inventory Insights: Access detailed information on inventory, installed firmware, faults, and power and temperature statistics.
  3. Physical Server to ESXi Host Mapping: Easily correlate your ESXi hosts with their corresponding physical servers.
  4. Firmware Management: Efficiently manage firmware for both B and C series servers.
  5. Direct Access to Cisco UCS Manager GUI: Launch the Cisco UCS Manager GUI directly from the plugin.
  6. KVM Console Integration: Instantly launch KVM consoles of UCS servers for immediate access and control.
  7. Locator LED Control: Switch the state of the locator LEDs as needed for enhanced hardware identification.
  8. Proactive HA Fault Configuration: Customize and configure faults used in Proactive HA for improved system resilience.

Links

Detailed Release Notes

Software download link

Please see the User Guide for specific information on installing and using the plugin with the vSphere HTML client.

Add F1705 Alert to Cisco UCS Manager Plugin 4.0(0)

New Cisco UCS firmware brings the possibility of notifications for F1705 alerts (Rank VLS).

The latest version of the Cisco UCS Manager Plugin for VMware vSphere HTML Client (version 4.0(0)) lets us add custom faults for Proactive HA monitoring. How do you do it?

Cisco UCS / Proactive HA Registration / Registered Fault / Add / ADDDC_Memory_Rank_VLS

If You can’t Add, it is necessary to Unregister UCSM Manager Plugin.

Cisco UCS / Proactive HA Registration / Registered Fault / Add
Cisco UCS / Proactive HA Registration / vCenter server credentials / Register
Cisco UCS / Proactive HA Registration / Register
How can I check it? Under Edit Proactive HA / Providers.
It is better to use the name “ADDDC_Memory_Rank_VLS” without spaces; in my screenshot I used “My F1705 Alerts”.

Adding a custom alert is only possible while the Cisco UCS provider is unregistered, so it is best to do it immediately after installing the Cisco UCS Manager Plugin.

Now I can decide whether to block F1705 or not. I personally prefer to have the F1705 alert under Proactive HA; then I only restart blades reporting F1705. During reboot, Hard-PPR permanently remaps accesses from a designated faulty row to a designated spare row.


vSphere 8.0 Update 2: Introducing Azure Active Directory Federated Authentication

At the 2023 VMware Explore event in Barcelona, I gave a presentation with Viviana Miranda on the way users authenticate to vCenter.

vSphere 8.0 Update 2 heralds a number of groundbreaking features, with one standout enhancement in user authentication methods for vCenter: vCenter Server 8.0 Update 2 now incorporates federated authentication with Azure Active Directory.

Dive into the details of this integration and discover how to activate it.

The Advantages of External Identity Providers

Organizations that leverage external identity providers can expect to reap substantial benefits:

- Integration with existing identity provider infrastructure to streamline processes.
- Implementation of Single Sign-On to simplify access across services.
- Adherence to the best practices of role separation between infrastructure management and identity administration.
- Utilization of robust multi-factor authentication options that come with their chosen identity providers.

Supported Identity Providers in vSphere 8.0 U2

While our focus here is the Azure Active Directory integration, it’s essential to highlight the comprehensive range of authentication methods now supported with vCenter Server 8.0 U2.

More info: https://core.vmware.com/resource/vCenterAzureADFederation#Intro

Host Cache can significantly extend reboot times

Exploring the vSphere environment, I’ve found that configuring a large Host Cache with VMFS Datastore can significantly extend reboot times.

It’s a delicate balance of performance gains versus system availability. For an in-depth look at my findings and the impact on your VMware setup, stay tuned.


Downfall CVE-2022-40982 – Intel’s Downfall Mitigation

Security Fixes in Release 4.3(2b)

Defect ID – CSCwf30468

Cisco UCS M5 C-series servers are affected by vulnerabilities identified by the following Common Vulnerability and Exposures (CVE) IDs:

  • CVE-2022-40982—Information exposure through microarchitectural state after transient execution in certain vector execution units for some Intel® Processors may allow an authenticated user to potentially enable information disclosure through local access
  • CVE-2022-43505—Insufficient control flow management in the BIOS firmware for some Intel® Processors may allow a privileged user to potentially enable denial of service through local access

https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/gather-data-sampling.html

Workaround: EVC Intel “Broadwell” Generation or gather_data_sampling

wget https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh
sh spectre-meltdown-checker.sh --variant downfall --explain

EVC Intel “Skylake” Generation

CVE-2022-40982 aka 'Downfall, gather data sampling (GDS)'
> STATUS:  VULNERABLE  (Your microcode doesn't mitigate the vulnerability, and your kernel doesn't support mitigation)
> SUMMARY: CVE-2022-40982:KO

EVC Intel “Broadwell” Generation

CVE-2022-40982 aka 'Downfall, gather data sampling (GDS)'
> STATUS:  NOT VULNERABLE  (your CPU vendor reported your CPU model as not affected)
> SUMMARY: CVE-2022-40982:OK

Mitigation with an updated kernel

When a microcode update is not available via a firmware update package, you may update the kernel to a version that implements a way to shut off AVX instruction set support. This can be achieved by adding the following kernel command-line parameter:

gather_data_sampling=force
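As a sketch for a GRUB-based Linux distribution (file locations are assumptions, adjust for your distro): append the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate the GRUB configuration, and reboot:

# /etc/default/grub
GRUB_CMDLINE_LINUX="... gather_data_sampling=force"

# Debian/Ubuntu:
sudo update-grub
# RHEL-family:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg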

Mitigation Options

When the mitigation is enabled, there is additional latency before results of the gather load can be consumed. Although the performance impact to most workloads is minimal, specific workloads may show performance impacts of up to 50%. Depending on their threat model, customers can decide to opt-out of the mitigation.

Intel® Software Guard Extensions (Intel® SGX)

There will be an Intel SGX TCB Recovery for those Intel SGX-capable affected processors. This TCB Recovery will only attest as up-to-date when the patch has been FIT-loaded (for example, with an updated BIOS), Intel SGX has been enabled by BIOS, and hyperthreading is disabled. In this configuration, the mitigation will be locked to the enabled state. If Intel SGX is not enabled or if hyperthreading is enabled, the mitigation will not be locked, and system software can choose to enable or disable the GDS mitigation.
