Homelab considerations for vSphere 7

With the vSphere 7 Launch Event just a few days away, I know many of you are eager to get your hands on this latest release of vSphere and start playing with it in your homelab. A number of folks in the VMware community have already started covering some of the amazing capabilities that will […]


VMware Social Media Advocacy

vSpeaking Podcast Ep 150: What’s New in vSphere 7

This month VMware announced vSphere 7, touting it as the biggest innovation since the launch of ESXi. This is a pretty significant release. So far the Virtually Speaking podcast has covered parts of the release in two previous episodes (vSphere with Kubernetes and vSphere Lifecycle Manager in the […]


VMware Social Media Advocacy

Getting started with VCF 4.0 Part 3 – vSphere with Kubernetes in a Workload Domain

At this point, we have a fully configured workload domain which includes an NSX-T Edge deployment. Check here for the previous VCF 4.0 deployment steps. We are now ready to go ahead and deploy vSphere with Kubernetes, formerly known as Project Pacific. Via SDDC Manager in VMware Cloud Foundation 4.0, we ensure that an NSX-T Edge is available, and we also ensure that the Workload Domain is sufficiently licensed to enable vSphere with Kubernetes. Disclaimer: “To be clear, this post is based […]


VMware Social Media Advocacy

Getting started with VCF 4.0 Part 2 – Commission hosts, Create Workload Domain, Deploy NSX-T Edge

Now that a VCF 4.0 Management Domain has been deployed, we can move on to creating our very first VCF 4.0 Virtual Infrastructure Workload Domain (VI WLD). We will require a VI WLD with an NSX-T Edge cluster before we can deploy Kubernetes on vSphere (formerly known as Project Pacific). Not too much has changed in the WLD creation workflow since version 3.9. We still have to commission ESXi hosts before we can create the WLD. But something different from previous versions of VCF is that today in […]


VMware Social Media Advocacy

Getting started with VMware Cloud Foundation (VCF) 4.0

On March 10th, VMware announced a range of new and updated products and features. One of these was VMware Cloud Foundation (VCF) version 4.0. In the following series of blogs, I am going to show you the steps to deploy VCF 4.0. We will begin with the deployment of a Management Domain. Once this is complete, we will commission some additional hosts and build our first workload domain (WLD). After that, we will deploy an NSX-T 3.0 Edge Cluster to our Workload Domain. The great news here is that […]


VMware Social Media Advocacy

Automating the creation of NSX-T “Disconnected” Segments for DR testing on VMware Cloud on AWS

Disaster Recovery (DR) and Disaster Avoidance (DA) on VMware Cloud on AWS remains one of the most popular use cases among our customers, second only to Datacenter Migration and Evacuation. The VMware Site Recovery service makes it extremely easy and cost-effective for customers to protect their critical workloads without having to worry about […]


VMware Social Media Advocacy

How to reset the Update Manager database on VCSA 6.5 or 6.7

After upgrading the vCSA from 6.5 to 6.7, I hit an error when running “Scan for Updates”.

Error: There are errors during the scan operation. Check the events and log files for details.

Caution: This is a destructive task. Ensure you have a backup or snapshot before proceeding.

  • Stop the VMware Update Manager Service:

service-control --stop vmware-updatemgr 

  • Run the following command to reset the VMware Update Manager Database:

vCenter Server Appliance 6.5 : /usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db
vCenter Server Appliance 6.7: /usr/lib/vmware-updatemgr/bin/updatemgr-utility.py reset-db

  • Run the following command to delete the contents of the VMware Update Manager patch store:

rm -rf /storage/updatemgr/patch-store/* 

  • Start the VMware Update Manager Service:

service-control --start vmware-updatemgr 
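Taken together, the steps above can be sketched as a small shell script. This is only a sketch: run it from a root shell on the appliance after taking a backup or snapshot, and note that the `VCSA_VERSION` variable, the `reset_tool_for` helper, and the `service-control` guard are my additions, not part of the official procedure.

```shell
#!/bin/sh
# Sketch: reset the VMware Update Manager database on a VCSA 6.5/6.7 appliance.
# DESTRUCTIVE: take a backup or snapshot of the appliance first.

# Pick the reset command for the appliance version (the tool was renamed in 6.7).
reset_tool_for() {
    case "$1" in
        6.5*) echo "/usr/lib/vmware-updatemgr/bin/updatemgr-util reset-db" ;;
        6.7*) echo "/usr/lib/vmware-updatemgr/bin/updatemgr-utility.py reset-db" ;;
        *)    echo "" ;;
    esac
}

VCSA_VERSION="${VCSA_VERSION:-6.7}"   # adjust to your appliance version
RESET_CMD="$(reset_tool_for "$VCSA_VERSION")"

if [ -z "$RESET_CMD" ]; then
    echo "Unsupported vCSA version: $VCSA_VERSION" >&2
elif command -v service-control >/dev/null 2>&1; then
    service-control --stop vmware-updatemgr     # stop the VUM service
    $RESET_CMD                                  # reset the VUM database
    rm -rf /storage/updatemgr/patch-store/*     # clear the patch store
    service-control --start vmware-updatemgr    # restart the VUM service
else
    echo "service-control not found; run this from the vCSA appliance shell" >&2
fi
```

After the service restarts, re-run “Scan for Updates” in the vSphere Client; the database and patch store will be rebuilt on the next scan.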

https://kb.vmware.com/s/article/67771

https://kb.vmware.com/s/article/2147284

Proactive HA works in VCSA 6.7 with the Cisco UCS Manager Plugin for VMware vSphere HTML Client (beta version 3.0(2))

Cisco has released the 3.0(2) beta version of the Cisco UCS Manager VMware vSphere HTML client plugin. This version works with vSphere 6.7. In my environment it is currently running and enabled on 9 different clusters (290 hosts), and it has worked great so far.

Here are the new and changed features in Release 3.0(2):

  • Included defect fixes
  • Added a new fault (F1706) to the Cisco UCS Provider failure conditions list
  • Added support for Proactive High Availability for more than 100 hosts in vCenter

It is great to combine this plugin with the new Cisco UCS 4.1.1 release because of Intel Post Package Repair (PPR).

  • Intel Post Package Repair (PPR) uses additional spare capacity within the DDR4 DRAM to remap and replace faulty cell areas detected during system boot time. Remapping is permanent and persists through power-down and reboot.
  • Newer memory technologies, such as double data rate version 4 (DDR4), include so-called post-package repair (PPR) capabilities. PPR enables a compatible memory controller to remap accesses from a faulty row of a memory module to a spare row of the module that is not faulty.
    • Hard-PPR permanently remaps accesses from a designated faulty row to a designated spare row. A Hard-PPR row remapping survives power cycles.
    • Soft-PPR temporarily remaps accesses from a faulty row to a designated spare row. A Soft-PPR row remapping survives a “warm” reboot, but does not survive a power cycle.
  • You can enable it in the BIOS policy under Memory RAS configuration: set “PPR type configuration” to “Hard PPR”.

  • ADDDC Sparing is necessary to support “Alert F1706 – ADDDC Memory RAS Problem”.
    ADDDC Sparing: system reliability is optimized by holding memory in reserve so that it can be used in case other DIMMs fail. This mode provides some memory redundancy, but does not provide as much redundancy as mirroring.
  • Cisco recommends upgrading to 4.0(4c) or later to expand memory fault coverage. Beginning with 4.0(4c), an additional RAS feature, Adaptive Double Device Data Correction (ADDDC Sparing), is available. It will be enabled and configured as “Platform Default” for the Memory RAS configuration.