Capacity Management Series – Part 1: Assessing…
This video is Part 1 of the VMware Cloud Foundation Capacity Management series that shows you how you can get an immediate assessment of your environment’s capacity needs.
Daniel Micanek virtual Blog – Like normal Dan, but virtual.
If you own a Minisforum MS-A2, you may have noticed that the system can run quite warm under load. I certainly have been putting my MS-A2 to work by running VMware Cloud Foundation (VCF) 9.0, and I have been experimenting with ways to reduce both the thermals and thus the fan […]
Discover how NVMe memory tiering in VMware vSphere 9.0 helps reduce memory costs, improve VM consolidation, and optimize CPU utilization. Learn the benefits, use cases, and key ESXCLI monitoring commands for efficient tiered memory management.
During VMware Explore, I had a request from an attendee who was interested in my physical networking and how it is all connected for my VMware Cloud Foundation (VCF) 9.0 lab setup. The network diagram below is based on the following VCF 9.0 Hardware BOM (Bill of Materials) and using the follow-[…]
An overview showing how to upgrade a vSphere 8.x instance that is running with Aria LCM and Aria Operations to a VCF 9.0 Fleet.
Discover the biggest VMware Explore 2025 announcements on GPU virtualization. Learn how vSphere 9.0 introduces vGPU management, MIG support, NVLink scaling, GPU-aware vMotion, and new monitoring tools to accelerate AI workloads at scale.
Are your CPUs memory-starved while your infrastructure struggles with underutilization and growing costs? Enter Memory Tiering with NVMe — a groundbreaking feature in vSphere 9.0 that promises up to 40% lower TCO by intelligently managing your memory resources.
Memory tiering allows ESXi to use NVMe devices as a secondary memory tier, extending beyond traditional DRAM. By classifying memory pages as hot, warm, cold, or very cold, vSphere can dynamically move less frequently used pages to NVMe-backed memory. This unlocks better VM consolidation, more predictable performance, and optimized CPU usage.
Ideal for general workloads and tiered VMs, but not supported for latency-sensitive or passthrough-based VMs. Ensure your NVMe meets Broadcom’s vSAN compatibility requirements and configure the DRAM:NVMe ratio wisely (default is 1:1).
Memory tiering isn’t just a cool buzzword — it’s a strategic shift that aligns your infrastructure with modern performance and cost demands. Whether you’re scaling your VDI environment or looking to cut memory costs without compromising on performance, NVMe Memory Tiering in vSphere 9.0 is a game changer.
| Description | Command |
|---|---|
| Check maintenance mode | esxcli system maintenanceMode get |
| List storage devices | esxcli storage core device list |
| Create NVMe tier device | esxcli system tierdevice create -d /vmfs/devices/disks/<device> |
| List tier devices | esxcli system tierdevice list |
| Enable kernel memory tiering | esxcli system settings kernel set -s MemoryTiering -v TRUE |
| Verify tiering status | esxcli system settings kernel list -o MemoryTiering |
| Reboot ESXi | reboot |
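Taken together, the commands in the table form a short enablement workflow. A minimal sketch of the sequence is below (run on the ESXi host; the NVMe device path is a placeholder you must replace with your own device identifier, and entering maintenance mode first is my assumption, not a documented requirement of every step):

```
# Sketch of the memory-tiering enablement flow from the table above.
# <your-nvme-device> is a placeholder -- substitute the device ID
# reported by `esxcli storage core device list`.
esxcli system maintenanceMode get                    # confirm host state first
esxcli storage core device list                      # find the NVMe device ID
esxcli system tierdevice create -d /vmfs/devices/disks/<your-nvme-device>
esxcli system tierdevice list                        # confirm the tier device
esxcli system settings kernel set -s MemoryTiering -v TRUE
esxcli system settings kernel list -o MemoryTiering  # verify the setting
reboot                                               # tiering takes effect after reboot
```

After the reboot, the host should report an enlarged tiered-memory capacity that reflects the configured DRAM:NVMe ratio.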
Explore 2025 Session Recap – INVB1158LV
Are you looking to maximize AI/ML performance in your virtualized environment? At VMware Explore 2025, I attended a compelling session — INVB1158LV: Accelerating AI Workloads: Mastering vGPU Management in VMware Environments — that unpacked how to effectively configure and scale GPUs for AI workloads in vSphere.

This blog post shares key takeaways from the session and outlines how to use vGPU, MIG, and Passthrough to achieve optimal performance for AI inference and training on VMware Cloud Foundation 9.0.
✅ Example profiles: grid_a100-8c, grid_a100-4-20c
✅ Example profiles: MIG 1g.5gb, MIG 2g.10gb, MIG 3g.20gb
✅ Assignable via vSphere UI with profiles like grid_a100-3-20c
| Mode | Best For | Sharing Type |
|---|---|---|
| Time Slicing | LLM training, dev/test environments | Time-shared |
| MIG | Production inference, multitenancy | Spatial (hardware) |
| Passthrough | Maximum performance for single workload | Not shared |
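For the MIG row in particular, the partitioning itself happens on the GPU via `nvidia-smi`. A hedged sketch of carving an A100 into the profiles mentioned above (profile availability and IDs vary by GPU and driver version, so always check `nvidia-smi mig -lgip` on your own host first):

```
# Sketch: enable MIG mode and create instances matching the example
# profiles above (1g.5gb, 2g.10gb). Requires the NVIDIA driver; run on
# the host, or inside the VM when using passthrough.
nvidia-smi -i 0 -mig 1                 # enable MIG mode on GPU 0 (may require reset)
nvidia-smi mig -lgip                   # list GPU instance profiles available here
nvidia-smi mig -cgi 1g.5gb,2g.10gb -C  # create GPU instances + compute instances
nvidia-smi -L                          # verify the resulting MIG devices
```

Each resulting MIG device then shows up as an independently assignable unit, which is what enables the spatial, hardware-level sharing noted in the table.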
One of the standout improvements presented during session INVB1158LV was the vMotion optimization for VMs using vGPUs. With vSphere 8.0 U3 and VMware Cloud Foundation 9.0, the way vMotion handles GPU memory has been completely reengineered to minimize downtime (stun time) during live migration.
Instead of migrating all GPU memory during the VM stun phase, roughly 70% of the vGPU memory (the cold data) is now transferred during the pre-copy stage, and only the final 30% is checkpointed during the stun. This greatly accelerates live migration even for massive LLM workloads running on multi-GPU systems.
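To make the 70/30 split concrete, here is a back-of-the-envelope calculation. The 80 GiB vGPU memory size and 10 GiB/s copy rate are illustrative assumptions of mine, not figures from the session:

```shell
# Illustrative only: assumed 80 GiB of vGPU memory, 10 GiB/s copy rate.
VGPU_MEM_GIB=80
RATE_GIBPS=10
STUN_GIB=$((VGPU_MEM_GIB * 30 / 100))      # only ~30% is copied during stun
OLD_STUN_S=$((VGPU_MEM_GIB / RATE_GIBPS))  # before: all 80 GiB copied during stun
NEW_STUN_S=$((STUN_GIB / RATE_GIBPS))      # after: only 24 GiB copied during stun
echo "stun copy: ${STUN_GIB} GiB -> ~${NEW_STUN_S}s (was ~${OLD_STUN_S}s)"
```

Under these assumed numbers, stun time drops from about 8 seconds to about 2 seconds, which is why the change matters so much for large models.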
📊 Example results with Llama 3.1 models:
These enhancements make zero-downtime AI infrastructure upgrades and scaling possible, even for large language model deployments.
I had the pleasure of attending the excellent session “Deploying Minimal VMware Cloud Foundation 9.0 Lab” by Alan Renouf and William Lam at VMware Explore 2025. It was packed with practical advice, hardware insights, and field-tested tips on how to stand up a fully functional VCF environment—even on a tight budget.

Whether you’re a home lab enthusiast, enterprise architect, or just VCF-curious, here’s a recap of the key takeaways.
Here’s the juicy part—real-world deployment tips and overrides:
> cat /home/vcf/feature.properties
feature.vcf.internal.single.host.domain = true
> echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
If your ESXi host doesn’t have a 10GbE NIC:
> cat /etc/vmware/vcf/domainmanager/application.properties
enable.speed.of.physical.nics.validation = false
> echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
VCF Installer will fail validation if your SSD or controller is not on the vSAN ESA HCL. Install a “mock” VIB to bypass:
esxcli software vib install -v /tmp/vsan-mock.vib
By default, the VCF installer requires an HTTPS depot; to allow HTTP instead, disable the flag in the LCM properties file and restart the service:
cat /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
lcm.depot.adapter.httpsEnabled=false
systemctl restart lcm
You don’t need a full-blown web server:
python http_server_auth.py --bind 192.168.1.100 --user myuser --password mysecurepassword --port 443 --directory /myrepo
Here’s an example BOM shared by the presenters:
This setup is small, powerful, and flexible enough for a complete VCF 9.0 deployment.
Here’s the summarized 8-step flow:
This session truly showcased how far VCF has come in terms of flexibility and accessibility. More info: VMware Cloud Foundation (VCF) 9.x in a Box.
All trademarks belong to their respective owners.
Data Services Manager 9.0 introduces support for a new Data Service, namely Microsoft SQL Server. This is currently a tech preview and should be treated as non-production until full support is available. Customers can use this integration to deploy both MS SQL Server Instances and MS SQL Server […]