After a successful ESXi 7.0 upgrade, we can start using vSphere Lifecycle Manager and convert VUM Baselines -> vLCM Image.





– ADD COMPONENTS – example: VMware USB NIC Fling Driver
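For reference only, a minimal sketch of installing the same Fling driver directly from the ESXi shell instead of through the vLCM image; the datastore path and bundle file name below are placeholders, use the bundle you actually downloaded:

# list components already installed on the host
esxcli software component list | grep -i usb
# apply the Fling offline bundle (path and file name are only an example)
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi70-VMKUSB-NIC-FLING-offline_bundle.zip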





We are excited to announce the general availability of VMware NSX-T™ 3.0, a major release of our full stack Layer 2 to Layer 7 networking platform that offers virtual networking, security, load balancing, visibility, and analytics in a single platform. NSX-T 3.0 includes key innovations across cloud-scale networking, security, containers, and operations that help enterprises achieve one-click public cloud experience wherever their workloads are deployed. As enterprises adopt cloud…
In the vSphere Web Client, go to Cluster -> Configure -> vSphere Availability; Proactive HA is Turned OFF, so click Edit. You can see that vSphere Proactive HA is disabled by default.
With Automation Level set to Automated and Remediation set to Mixed Mode, Proactive HA reacts to a HW failure by entering the host into Quarantine Mode and migrating all VMs off the ESXi host with the HW failure:
In vSphere 6 we can use various methods and tools to deploy ESXi hosts and maintain their software lifecycle.
To deploy and boot an ESXi host, you can use an ESXi installer image or VMware vSphere® Auto Deploy™. Having this choice results in two different underlying ESXi platforms:
By introducing the concept of images, vSphere Lifecycle Manager provides a unified platform for ESXi lifecycle management.
You can use vSphere Lifecycle Manager for stateful hosts only, but starting with vSphere 7.0, you can convert the Auto Deploy-based stateless hosts into stateful hosts, which you can add to clusters that you manage with vSphere Lifecycle Manager images.
After upgrading to VCSA 7.0, we prepare the upgrade for ESXi 6.7. The logic is similar to vSphere Update Manager:
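Before and after remediation it is handy to confirm the host build from the ESXi shell; a minimal check using standard commands:

# show the running ESXi version and build number
esxcli system version get
# shorter alternative
vmware -vl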
How to Get vSphere with Kubernetes
We’re very excited to announce the general availability of vSphere 7 today! It caps off a massive across-the-board effort by the many engineering teams within VMware. We have built a ton of new capabilities into vSphere 7, including drastically improved lifecycle management, many new security features, and broader application focus and support. But of course…
How to speed up BOOT time in Cisco UCS M5?
When this token is enabled, the BIOS saves the memory training results (optimized timing/voltage values) along with CPU/memory configuration information and reuses them on subsequent reboots to save boot time. The saved memory training results are used only if the reboot happens within 24 hours of the last save operation. This can be one of the following:
Enabling this token allows the BIOS Tech log output to be controlled at a more granular level. This reduces the number of BIOS Tech log messages that are redundant or of little use. This can be one of the following:
This option denotes the type of messages in the BIOS tech log file. The log file can be one of the following types:
Note: This option is mainly for internal debugging purposes.
Note: To disable the Fast Boot option, the end user must set the following tokens as mentioned below:
The Option ROM launch is controlled at the PCI Slot level, and is enabled by default. In configurations that consist of a large number of network controllers and storage HBAs having Option ROMs, all the Option ROMs may get launched if the PCI Slot Option ROM Control is enabled for all. However, only a subset of controllers may be used in the boot process. When this token is enabled, Option ROMs are launched only for those controllers that are present in boot policy. This can be one of the following:
The first BOOT after applying the new settings is about 1-2 minutes longer.
From the second BOOT onward, we can save about 2 minutes on each BOOT with a 3 TB RAM B480 M5:
A first look at vSphere with Kubernetes in action
In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs, and installed the Spherelet components which allows our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes.
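As a taste of those first operations, a minimal sketch of logging in to the Supervisor Cluster and listing its nodes; the control plane address and user below are placeholders, and the kubectl vSphere plugin is assumed to be installed:

# log in to the Supervisor Cluster control plane (address and user are placeholders)
kubectl vsphere login --server=192.168.1.10 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
# list namespaces and the ESXi hosts acting as Kubernetes worker nodes
kubectl get namespaces
kubectl get nodes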
Symptoms: VCSA cannot be updated, or you are unable to connect to the vCenter Server because its services are not started.
root@vcsa [ /var/spool/clientmqueue ]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        11G  8.2G  1.9G  82% /
root@vcsa [ ~ ]# df -i
Filesystem     Inodes IUsed  IFree IUse% Mounted on
/dev/sda3      712704 97100 615604   14% /
The problem is with disk space. It is possible to increase the disk space of a specific VMDK according to the KB below, but after some time you could hit the same issue again.
https://kb.vmware.com/s/article/2126276
It is necessary to find where the problem is:
root@vcsa [ ~ ]# cd /var
root@vcsa [ /var ]# du -sh *
2.1G    log
5.2G    spool
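If the culprit is not obvious from the top-level directories, a generic way to drill down with standard Linux tooling available in the VCSA shell:

# list the largest files and directories under /var, biggest first
du -ax /var 2>/dev/null | sort -rn | head -20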
A full clientmqueue directory can be related to the SMTP relay configuration. It is possible to clean it up easily:
find /var/spool/clientmqueue -name "*" -delete
The problem with audit.log is described in the KB below: the audit.log file grows very large and the /var/log/audit folder consumes the majority of the space.
https://kb.vmware.com/s/article/2149278
root@vcsa [ /var/log/audit ]# ls -l
total 411276
-rw------- 1 root root 420973104 Mar 31 00:53 audit.log
truncate -s 0 audit.log
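After truncating, a quick sanity check that the space really came back:

# confirm the file is empty and the root filesystem has free space again
ls -lh /var/log/audit/audit.log
df -h /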
For a Monster SAP HANA VM (1-3 TB RAM) I tuned several AdvSystemSettings.
In the end I was able to speed up vMotion 4x and utilize two flows at 40 Gbps – VIC 1340 with PE.
The inspiration was:
It has been in production since 04/2018. My final tuned settings are:
AdvSystemSettings | Default | Tuning | Description |
---|---|---|---|
Migrate.VMotionStreamHelpers | 0 | 8 | Number of helpers to allocate for VMotion streams |
Net.NetNetqTxPackKpps | 300 | 600 | Max TX queue load (in thousand packets per second) to allow packing on the corresponding RX queue |
Net.NetNetqTxUnpackKpps | 600 | 1200 | Threshold (in thousand packets per second) for TX queue load to trigger unpacking of the corresponding RX queue |
Net.MaxNetifTxQueueLen | 2000 | 10000 | Maximum length of the Tx queue for the physical NICs – this is enough to speed up VM communication |
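A minimal sketch of applying the same values from the ESXi shell with esxcli; the option paths simply mirror the AdvSystemSettings names above, and as always test on a non-production host first:

# set the tuned values (-o option path, -i integer value)
esxcli system settings advanced set -o /Migrate/VMotionStreamHelpers -i 8
esxcli system settings advanced set -o /Net/NetNetqTxPackKpps -i 600
esxcli system settings advanced set -o /Net/NetNetqTxUnpackKpps -i 1200
esxcli system settings advanced set -o /Net/MaxNetifTxQueueLen -i 10000
# verify one of them
esxcli system settings advanced list -o /Migrate/VMotionStreamHelpers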
It is better to use Cisco UCS Consistent Device Naming (CDN) + ESXi 6.7, but in some cases it is necessary to fix it manually according to the KB – How VMware ESXi determines the order in which names are assigned to devices (2091560).
Here is an example of how to fix it:
[~] esxcfg-nics -l
Name    PCI           MAC Address
vmnic0  0000:67:00.0  00:25:b5:00:a0:0e
vmnic1  0000:67:00.1  00:25:b5:00:b2:2f
vmnic2  0000:62:00.0  00:25:b5:00:a0:2e
vmnic3  0000:62:00.1  00:25:b5:00:b2:4f
vmnic4  0000:62:00.2  00:25:b5:00:a0:3e

[~] localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list
Bus type  Bus address            Alias
pci       s00000002:03.02        vmnic4
pci       s00000002:03.01        vmnic3
pci       s0000000b:03.00        vmnic0
pci       s0000000b:03.01        vmnic1
pci       p0000:00:11.5          vmhba0
pci       s00000002:03.00        vmnic2
logical   pci#s0000000b:03.00#0  vmnic0
logical   pci#s0000000b:03.01#0  vmnic1
logical   pci#s00000002:03.01#0  vmnic3
logical   pci#s00000002:03.02#0  vmnic4
logical   pci#p0000:00:11.5#0    vmhba0
logical   pci#s00000002:03.00#0  vmnic2
Bus type  Bus address       Alias
pci       s0000000b:03.00   vmnic0 --> vmnic3
pci       s0000000b:03.01   vmnic1 --> vmnic4
pci       s00000002:03.00   vmnic2 --> vmnic0
pci       s00000002:03.01   vmnic3 --> vmnic1
pci       s00000002:03.02   vmnic4 --> vmnic2
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic0 --bus-address s00000002:03.00 --bus-type pci
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic1 --bus-address s00000002:03.01 --bus-type pci
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic2 --bus-address s00000002:03.02 --bus-type pci
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic3 --bus-address s0000000b:03.00 --bus-type pci
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic4 --bus-address s0000000b:03.01 --bus-type pci
[~] localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list
Bus type  Bus address            Alias
logical   pci#s0000000b:03.00#0  vmnic0 --> vmnic3
logical   pci#s0000000b:03.01#0  vmnic1 --> vmnic4
logical   pci#s00000002:03.00#0  vmnic2 --> vmnic0
logical   pci#s00000002:03.01#0  vmnic3 --> vmnic1
logical   pci#s00000002:03.02#0  vmnic4 --> vmnic2
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic0 --bus-address pci#s00000002:03.00#0 --bus-type logical
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic1 --bus-address pci#s00000002:03.01#0 --bus-type logical
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic2 --bus-address pci#s00000002:03.02#0 --bus-type logical
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic3 --bus-address pci#s0000000b:03.00#0 --bus-type logical
localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias store --alias vmnic4 --bus-address pci#s0000000b:03.01#0 --bus-type logical
reboot
[~] esxcfg-nics -l
Name    PCI           MAC Address
vmnic0  0000:62:00.0  00:25:b5:00:a0:2e
vmnic1  0000:62:00.1  00:25:b5:00:b2:4f
vmnic2  0000:62:00.2  00:25:b5:00:a0:3e
vmnic3  0000:67:00.0  00:25:b5:00:a0:0e
vmnic4  0000:67:00.1  00:25:b5:00:b2:2f