How to Configure vSphere 6.7 Proactive HA with Cisco UCS Manager Plugin for VMware vSphere?

As I wrote in a previous blog post, the latest Cisco UCS Manager Plugin works with vCenter 6.7 U3b.

Install Cisco UCS Manager Plugin

vSphere Web Client – Enable Proactive HA

In the vSphere Web Client, go to Cluster -> Configure -> vSphere Availability. Proactive HA is Turned OFF – click Edit. Notice that vSphere Proactive HA is disabled by default.

  • Automation Level – Determine whether host quarantine or maintenance mode and VM migrations are recommendations or automatic.
    • Manual – vCenter Server suggests migration recommendations for virtual machines.
    • Automated – Virtual machines are migrated to healthy hosts and degraded hosts are entered into quarantine or maintenance mode depending on the configured Proactive HA automation level.
  • Remediation – Determine what happens to partially degraded hosts.
    • Quarantine mode – for all failures. Balances performance and availability, by avoiding the usage of partially degraded hosts provided that virtual machine performance is unaffected.
    • Mixed mode – Quarantine mode for moderate and Maintenance mode for severe failure (Mixed). Balances performance and availability, by avoiding the usage of moderately degraded hosts provided that virtual machine performance is unaffected. Ensures that virtual machines do not run on severely failed hosts.
    • Maintenance mode – for all failures. Ensures that virtual machines do not run on partially failed hosts.
The best option is Automated + Mixed mode.
Select the Cisco UCS provider and do NOT block any failure conditions.

How is Proactive HA working?

With Automation Level set to Automated and Remediation set to Mixed mode, after a hardware failure Proactive HA puts the host into quarantine mode and migrates all VMs off the ESXi host with the HW failure:

After 4:10 minutes, Proactive HA had migrated all VMs off the failed ESXi host.

vSphere 6 Update Manager -> vSphere 7 Lifecycle Manager: Upgrade ESXi 6.7 to ESXi 7

vSphere Update Manager – vSphere 6

In vSphere 6 we can use various methods and tools to deploy ESXi hosts and maintain their software lifecycle.

To deploy and boot an ESXi host, you can use an ESXi installer image or VMware vSphere® Auto Deploy™. These choices result in two different underlying ESXi platforms:

  • Using vSphere Auto Deploy – stateless mode
  • Using an ESXi installer image – stateful mode

vSphere Lifecycle Manager Images: A Single Platform, Single Tool, Single Workflow

By introducing the concept of images, vSphere Lifecycle Manager provides a unified platform for ESXi lifecycle management.
You can use vSphere Lifecycle Manager for stateful hosts only, but starting with vSphere 7.0, you can convert the Auto Deploy-based stateless hosts into stateful hosts, which you can add to clusters that you manage with vSphere Lifecycle Manager images.

How to Upgrade ESXi 6.7 to 7 with vSphere Lifecycle Manager?

After upgrading to VCSA 7.0, we prepare the upgrade for ESXi 6.7. The logic is similar to vSphere Update Manager:

IMPORT ISO – we can upload an ISO, for example VMware-VMvisor-Installer-7.0.0-15843807.x86_64.iso
Step 1 of 2 – Uploading the file to the server
Step 2 of 2 – Adding it to the repository
After the ISO upload we can check the ISO image content
The next step is to create a baseline
Create Baseline – for the ISO image
Select the uploaded ISO image
Check the summary
On the target cluster we attach our baseline
Select the ESXi 7 baseline
– Check Compliance
– We can see Non-compliant for the 3x ESXi hosts
REMEDIATE opens the upgrade dialog
It is necessary to accept the EULA
REMEDIATE starts the ESXi 7 upgrade
In Recent Tasks we can check the progress.
Finally, we can check the ESXi 7.0 upgrade result.
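
For completeness, the same 6.7 to 7.0 jump can also be done per host from the ESXi shell with esxcli software profile update, which is handy in a lab or for a single host outside of Lifecycle Manager. This is only a sketch: the depot path and profile name below are illustrative and must match the offline bundle you actually downloaded (list the profiles first).

# Evacuate the host and enter maintenance mode
esxcli system maintenanceMode set --enable true

# List the image profiles contained in the offline bundle (path is an example)
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-15843807-depot.zip

# Upgrade to the selected profile and reboot to finish
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-15843807-depot.zip -p ESXi-7.0.0-15843807-standard
reboot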

How to Get vSphere with Kubernetes

We’re very excited to announce the general availability of vSphere 7 today! It caps off a massive across-the-board effort by the many engineering teams within VMware. We have built a ton of new capabilities into vSphere 7, including drastically improved lifecycle management, many new security features, and broader application focus and support. But of course…


VMware Social Media Advocacy

Cisco UCS M5 Boot Time Enhancements

How to speed up the boot time on Cisco UCS M5?

Adaptive Memory Training

When this token is enabled, the BIOS saves the memory training results (optimized timing/voltage values) along with CPU/memory configuration information and reuses them on subsequent reboots to save boot time. The saved memory training results are used only if the reboot happens within 24 hours of the last save operation. This can be one of the following:

  • Disabled—Adaptive Memory Training is disabled.
  • Enabled—Adaptive Memory Training is enabled.
  • Platform Default—The BIOS uses the value for this attribute contained in the BIOS defaults for the server type and vendor.

BIOS Techlog Level

Enabling this token allows the BIOS Tech log output to be controlled at a more granular level. This reduces the number of BIOS Tech log messages that are redundant or of little use. The option denotes the type of messages written to the BIOS tech log file, which can be one of the following:

  • Minimum – Critical messages will be displayed in the log file.
  • Normal – Warning and loading messages will be displayed in the log file.
  • Maximum – Normal and information related messages will be displayed in the log file.

Note: This option is mainly for internal debugging purposes.

Note: To disable the Fast Boot option, the end user must set the following tokens as mentioned below:

OptionROM Launch Optimization

The Option ROM launch is controlled at the PCI Slot level, and is enabled by default. In configurations that consist of a large number of network controllers and storage HBAs having Option ROMs, all the Option ROMs may get launched if the PCI Slot Option ROM Control is enabled for all. However, only a subset of controllers may be used in the boot process. When this token is enabled, Option ROMs are launched only for those controllers that are present in boot policy. This can be one of the following:

  • Disabled—OptionROM Launch Optimization is disabled.
  • Enabled—OptionROM Launch Optimization is enabled.
  • Platform Default—The BIOS uses the value for this attribute contained in the BIOS defaults for the server type and vendor.

Results

The first boot after applying the new settings is about 1-2 minutes longer.

From the second boot onward, we save about 2 minutes on every boot of a B480 M5 with 3 TB RAM:

A first look at vSphere with Kubernetes in action

In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs, and installed the Spherelet components which allows our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes…


VMware Social Media Advocacy

vCenter Appliance (VCSA) root Partition full

Symptoms: VCSA cannot install an update, or you are unable to connect to the vCenter Server because services are not started.

Validate filesystem

Check disk space

root@vcsa [ /var/spool/clientmqueue ]# df -h
 Filesystem                                     Size  Used Avail Use% Mounted on
 /dev/sda3                                       11G  8.2G  1.9G  82% /

Check INODES

root@vcsa [ ~ ]# df -i
 Filesystem                   Inodes IUsed    IFree IUse% Mounted on
 /dev/sda3                     712704 97100   615604   14% /

So the problem is disk space, not inodes.

How to resize partition

It is possible to increase the disk space of a specific VMDK according to the KB below, but after some time you could hit the same issue again.

https://kb.vmware.com/s/article/2126276
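
If you do grow the VMDK, the appliance will not use the new space until the filesystem is expanded. A minimal sketch of the flow the KB describes, assuming a VCSA 6.5/6.7 appliance where the autogrow helper script lives at the path below – verify the exact procedure in the KB for your version before running it.

# After expanding the VMDK in the vSphere Client, grow the partitions/filesystems
# (script path as documented for VCSA 6.5/6.7 – confirm against KB 2126276)
/usr/lib/applmgmt/support/scripts/autogrow.sh

# Confirm the root partition picked up the new space
df -h /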

How to cleanup partition

It is necessary to find where the problem is:

root@vcsa [ ~ ]# cd /var
root@vcsa [ /var ]# du -sh *
 2.1G    log
 5.2G    spool
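
Before deleting anything, it helps to see exactly which directories and files are eating the space. A couple of generic commands (nothing VCSA-specific here, sizes and paths will differ):

# Largest directories under /var, in MB, staying on the root filesystem
du -x -m /var 2>/dev/null | sort -nr | head -10

# Any single files larger than 100 MB on the root filesystem
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null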

clientmqueue

A problem with the clientmqueue directory can be related to the SMTP relay configuration. It is possible to clean it up easily:

find /var/spool/clientmqueue -type f -delete

audit.log

The problem with audit.log is described in the KB below: the audit.log file grows very large and the /var/log/audit folder consumes the majority of the space.

https://kb.vmware.com/s/article/2149278

root@vcsa [ /var/log/audit ]# ls -l
 total 411276
 -rw------- 1 root root 420973104 Mar 31 00:53 audit.log

Truncate the file to free the space:

truncate -s 0 audit.log
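
If the root partition filled up completely, some vCenter services may have stopped; after freeing the space it is worth checking and restarting them. A short sketch using the VCSA service-control utility:

# Verify there is free space again
df -h /

# Check which services are stopped and start them
service-control --status --all
service-control --start --all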

Multiple-NIC vMotion tuning 2x 40 Gbps

For monster SAP HANA VMs (1-3 TB RAM) I tuned several advanced system settings (AdvSystemSettings).

In the end I was able to speed up vMotion 4x and utilize two 40 Gbps flows – VIC 1340 with Port Expander.
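
Multiple-NIC vMotion itself is just two (or more) vMotion-tagged vmkernel interfaces, each pinned to a different uplink in its port group teaming policy, so every stream gets its own 40 Gbps path. A minimal sketch from the ESXi shell, assuming a standard vSwitch port group; vmk2, the port group name and the IP are illustrative:

# Create the second vMotion vmkernel interface and tag it for vMotion
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-B
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.50.12 -N 255.255.255.0
esxcli network ip interface tag add -i vmk2 -t VMotion

# Verify the tagging
esxcli network ip interface tag get -i vmk2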

My inspiration was:

It has been in production since 04/2018. My final tuned settings are:

AdvSystemSettings             Default  Tuned   Description
Migrate.VMotionStreamHelpers  0        8       Number of helpers to allocate for VMotion streams
Net.NetNetqTxPackKpps         300      600     Max TX queue load (in thousand packets per second) to allow packing on the corresponding RX queue
Net.NetNetqTxUnpackKpps       600      1200    Threshold (in thousand packets per second) for TX queue load to trigger unpacking of the corresponding RX queue
Net.MaxNetifTxQueueLen        2000     10000   Maximum length of the Tx queue for the physical NICs – this alone is enough to speed up VM traffic
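
The same values can be applied from the ESXi shell instead of the UI; the advanced option paths below are simply the setting names from the table with the dot replaced by a slash:

# Apply the tuned values (defaults are 0 / 300 / 600 / 2000)
esxcli system settings advanced set -o /Migrate/VMotionStreamHelpers -i 8
esxcli system settings advanced set -o /Net/NetNetqTxPackKpps -i 600
esxcli system settings advanced set -o /Net/NetNetqTxUnpackKpps -i 1200
esxcli system settings advanced set -o /Net/MaxNetifTxQueueLen -i 10000

# Crosscheck a value
esxcli system settings advanced list -o /Migrate/VMotionStreamHelpers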

VMware vNIC placement order not adhered to Cisco UCS configuration – How to fix it?

It is better to use Cisco UCS Consistent Device Naming (CDN) with ESXi 6.7, but in some cases it is necessary to fix the order manually according to the KB – How VMware ESXi determines the order in which names are assigned to devices (2091560).

Here is an example of how to fix it:

Check current mapping

[~] esxcfg-nics -l
 Name    PCI           MAC Address       
 vmnic0  0000:67:00.0  00:25:b5:00:a0:0e 
 vmnic1  0000:67:00.1  00:25:b5:00:b2:2f 
 vmnic2  0000:62:00.0  00:25:b5:00:a0:2e 
 vmnic3  0000:62:00.1  00:25:b5:00:b2:4f 
 vmnic4  0000:62:00.2  00:25:b5:00:a0:3e 

[~] localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list
 Bus type  Bus address            Alias
 pci       s00000002:03.02        vmnic4
 pci       s00000002:03.01        vmnic3
 pci       s0000000b:03.00        vmnic0
 pci       s0000000b:03.01        vmnic1
 pci       p0000:00:11.5          vmhba0
 pci       s00000002:03.00        vmnic2
 logical   pci#s0000000b:03.00#0  vmnic0
 logical   pci#s0000000b:03.01#0  vmnic1
 logical   pci#s00000002:03.01#0  vmnic3
 logical   pci#s00000002:03.02#0  vmnic4
 logical   pci#p0000:00:11.5#0    vmhba0
 logical   pci#s00000002:03.00#0  vmnic2

Remapping table for physical devices (current alias --> target alias)

Bus type  Bus address            Alias
 pci       s0000000b:03.00        vmnic0 --> vmnic3
 pci       s0000000b:03.01        vmnic1 --> vmnic4
 pci       s00000002:03.00        vmnic2 --> vmnic0
 pci       s00000002:03.01        vmnic3 --> vmnic1
 pci       s00000002:03.02        vmnic4 --> vmnic2

Fix commands for physical devices

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic0 --bus-address s00000002:03.00 --bus-type pci

 localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic1 --bus-address s00000002:03.01 --bus-type pci

 localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic2 --bus-address s00000002:03.02 --bus-type pci

 localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic3 --bus-address s0000000b:03.00 --bus-type pci

 localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic4 --bus-address s0000000b:03.01 --bus-type pci

Remapping table for logical devices (current alias --> target alias)

[~] localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list
 Bus type  Bus address            Alias
 logical   pci#s0000000b:03.00#0  vmnic0 --> vmnic3
 logical   pci#s0000000b:03.01#0  vmnic1 --> vmnic4
 logical   pci#s00000002:03.00#0  vmnic2 --> vmnic0
 logical   pci#s00000002:03.01#0  vmnic3 --> vmnic1
 logical   pci#s00000002:03.02#0  vmnic4 --> vmnic2

Fix commands for logical devices

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic0 --bus-address pci#s00000002:03.00#0 --bus-type logical

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic1 --bus-address pci#s00000002:03.01#0 --bus-type logical

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic2 --bus-address pci#s00000002:03.02#0 --bus-type logical

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic3 --bus-address pci#s0000000b:03.00#0 --bus-type logical

localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic4 --bus-address pci#s0000000b:03.01#0 --bus-type logical

Reboot

reboot

Crosscheck – now we have the target order

[~] esxcfg-nics -l
 Name    PCI           MAC Address       
 vmnic0  0000:62:00.0  00:25:b5:00:a0:2e 
 vmnic1  0000:62:00.1  00:25:b5:00:b2:4f 
 vmnic2  0000:62:00.2  00:25:b5:00:a0:3e 
 vmnic3  0000:67:00.0  00:25:b5:00:a0:0e 
 vmnic4  0000:67:00.1  00:25:b5:00:b2:2f 

Cisco UCS supports Consistent Device Naming CDN in ESXi 6.7

Cisco introduced Consistent Device Naming in Cisco UCS Manager Release 2.2(4).

In the past I saw that the VMware vNIC placement order did not adhere to the Cisco UCS configuration, but this issue is not seen in the latest ESXi updates – ESXi 6.5 U2 and ESXi 6.7 U1.

How VMware ESXi determines the order in which names are assigned to devices (2091560)

When there is no mechanism for the operating system to label Ethernet interfaces in a consistent manner, it becomes difficult to manage network connections as the server configuration changes.

CDN allows Ethernet interfaces to be named in a consistent manner, which makes interface names more uniform, easy to identify, and persistent when adapter or other configuration changes are made.

To configure CDN for a vNIC, first enable consistent device naming in the BIOS policy (a fuller UCS Manager CLI sketch follows the value list below):

 set consistent-device-name-control cdn-name

This setting controls whether consistent device naming is enabled. It can be one of the following:

  • enabled—Consistent device naming is enabled for the BIOS policy. This enables Ethernet interfaces to be named consistently.
  • disabled—Consistent device naming is disabled for the BIOS policy.
  • platform-default—The BIOS uses the value for this attribute contained in the BIOS defaults for the server type and vendor.
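
For reference, a hedged sketch of the full UCS Manager CLI flow as I recall it from the Cisco CDN documentation – the org, BIOS policy, service profile, vNIC and CDN names are placeholders, and the exact scope commands should be verified against your UCSM release:

# Enable CDN control in the BIOS policy (placeholder names)
UCS-A# scope org /
UCS-A /org # scope bios-policy ESXi-BIOS
UCS-A /org/bios-policy # set consistent-device-name-control cdn-name
UCS-A /org/bios-policy* # commit-buffer

# Assign a CDN name to a vNIC in the service profile (placeholder names)
UCS-A# scope org /
UCS-A /org # scope service-profile ESX01
UCS-A /org/service-profile # scope vnic vnic-A
UCS-A /org/service-profile/vnic # set cdn-name esx-mgmt-a
UCS-A /org/service-profile/vnic* # commit-buffer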

Homelab considerations for vSphere 7

With the vSphere 7 Launch Event just a few days away, I know many of you are eager to get your hands on this latest release of vSphere and start playing with it in your homelab. A number of folks in the VMware community have already started covering some of the amazing capabilities that will […]


VMware Social Media Advocacy