Featured

Mastering VMware NSX: Strategies for Success on the VCAP-NV Deploy Exam

The VCAP-NV Deploy exam is one of the most exciting practical tests: it is entirely hands-on and evaluates how efficiently and effectively you can work under time pressure.

“The journey is the destination.”

To prepare efficiently, I highly recommend taking the VMware NSX Troubleshooting and Operations [V4.x] – Deploy course. This course covers:

  • NSX Operations and Tools
  • Troubleshooting the NSX Management Cluster
  • Troubleshooting Infrastructure Preparation
  • Troubleshooting Logical Switching
  • Troubleshooting Logical Routing
  • Troubleshooting Security
  • Troubleshooting the NSX Advanced Load Balancer and VPN Services
  • Datapath Walkthrough

The syllabus thoroughly addresses the scope of the exam.

Review the labs multiple times, both during and after completing the course. Useful resources:

Focus on the VMware Odyssey Hands-on Labs that were available at the time of my exam, such as HOL-2426-81-ODY VMware Odyssey – NSX Security Challenge.

Aim to be precise and sufficiently quick.

Exam Content Overview: The exam blueprint is organized into the following sections and objectives:

Section 4 – Installation, Configuration, and Setup
Objective 4.1 - Prepare VMware NSX-T Data Center Infrastructure
Objective 4.1.1 - Deploy VMware NSX-T Data Center Infrastructure components
Objective 4.1.2 - Configure Management, Control and Data plane components for NSX-T Data Center
Objective 4.1.3 - Configure and Manage Transport Zones, IP Pools, Transport Nodes etc.
Objective 4.1.4 - Confirm the NSX-T Data Center configuration meets design requirements
Objective 4.1.5 - Deploy VMware NSX-T Data Center Infrastructure components in a multi-site
Objective 4.2 - Create and Manage VMware NSX-T Data Center Virtual Networks
Objective 4.2.1 - Create and Manage Layer 2 services
Objective 4.2.2 - Configure and Manage Layer 2 Bridging
Objective 4.2.3 - Configure and Manage Routing including BGP, static routes, VRF Lite and EVPN
Objective 4.3 - Deploy and Manage VMware NSX-T Data Center Network Services
Objective 4.3.1 - Configure and Manage Logical Load Balancing
Objective 4.3.2 - Configure and Manage Logical Virtual Private Networks (VPNs)
Objective 4.3.3 - Configure and Manage NSX-T Data Center Edge and NSX-T Data Center Edge Clusters
Objective 4.3.4 - Configure and Manage NSX-T Data Center Network Address Translation
Objective 4.3.5 - Configure and Manage DHCP and DNS
Objective 4.4 - Secure a virtual data center with VMware NSX-T Data Center
Objective 4.4.1 - Configure and Manage Distributed Firewall and Grouping Objects
Objective 4.4.2 - Configure and Manage Gateway Firewall
Objective 4.4.3 - Configure and Manage Identity Firewall
Objective 4.4.4 - Configure and Manage Distributed IDS
Objective 4.4.5 - Configure and Manage URL Analysis
Objective 4.4.6 - Deploy and Manage NSX Intelligence
Objective 4.5 - Configure and Manage Service Insertion
Objective 4.6 - Deploy and Manage Central Authentication (Workspace ONE access)

Section 5 - Performance-tuning, Optimization, Upgrades
Objective 5.1 - Configure and Manage Enhanced Data Path (N-VDSe)
Objective 5.2 - Configure and Manage Quality of Service (QoS) settings

Section 6 – Troubleshooting and Repairing
Objective 6.1 - Perform Advanced VMware NSX-T Data Center Troubleshooting
Objective 6.1.1 - Troubleshoot Common VMware NSX-T Data Center Installation/Configuration Issues
Objective 6.1.2 - Troubleshoot VMware NSX-T Data Center Connectivity Issues
Objective 6.1.3 - Troubleshoot VMware NSX-T Data Center Edge Issues
Objective 6.1.4 - Troubleshoot VMware NSX-T Data Center L2 and L3 services
Objective 6.1.5 - Troubleshoot VMware NSX-T Data Center Security services
Objective 6.1.6 - Utilize VMware NSX-T Data Center native tools to identify and troubleshoot 

Section 7 – Administrative and Operational Tasks
Objective 7.1 - Perform Operational Management of a VMware NSX-T Data Center Implementation
Objective 7.1.1 - Backup and Restore Network Configurations
Objective 7.1.2 - Monitor a VMware NSX-T Data Center Implementation
Objective 7.1.3 - Manage Role Based Access Control
Objective 7.1.4 - Restrict management network access using VIDM access policies
Objective 7.1.5 - Manage syslog settings
Objective 7.2 - Utilize API and CLI to manage a VMware NSX-T Data Center Deployment
Objective 7.2.1 - Administer and Execute calls using the VMware NSX-T Data Center vSphere API

Each section contains objectives ranging from deploying NSX-T Data Center Infrastructure components to configuring and managing security features like the Distributed Firewall, Gateway Firewall, Identity Firewall, and more. It also covers performance tuning, quality of service settings, advanced troubleshooting, operational management, and using API and CLI for management.
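
For the Section 7 API and CLI objectives, it helps to practice issuing simple calls directly against NSX Manager. As a minimal sketch (the manager FQDN and credentials below are hypothetical placeholders), you could list transport zones through the REST API and check the management cluster from the NSX CLI:

# List transport zones via the NSX-T REST API
$ curl -k -u admin 'https://nsx-manager.lab.local/api/v1/transport-zones'

# From the NSX Manager CLI (nsxcli), check the state of the management cluster
nsx-manager> get cluster status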

Key Recommendations for Success:

  • Thoroughly study the VMware NSX Documentation to understand the fundamentals and advanced features of NSX.
  • Leverage the VMware NSX Product Page for the latest features and updates.
  • Engage with NSX Hands-On-Labs for practical, hands-on experience.
  • Watch NSX Training and Demo videos on YouTube to visualize configurations and use cases.
  • Prioritize precision and speed in lab exercises to mimic exam conditions.

Wishing you all the best in your preparation and lab endeavors. Let’s dive into the labs and master NSX.

Featured

Mastering Llama-2 Setup: A Comprehensive Guide to Installing and Running llama.cpp Locally

Welcome to the exciting world of Llama-2 models! In today’s blog post, we’re diving into the process of installing and running these advanced models. Whether you’re a seasoned AI enthusiast or just starting out, understanding Llama-2 models is crucial. These models, known for their efficiency and versatility in handling large-scale data, are a game-changer in the field of machine learning.

In this Shortcut, I give you a step-by-step process to install and run Llama-2 models on your local machine with or without GPUs by using llama.cpp. As I mention in Run Llama-2 Models, this is one of the preferred options.

Here are the steps:

Step 1. Clone the repositories

You should clone the Meta Llama-2 repository as well as llama.cpp:

$ git clone https://github.com/facebookresearch/llama.git
$ git clone https://github.com/ggerganov/llama.cpp.git 

Step 2. Request access to download Llama models

Complete the request form, then navigate to the directory where you cloned the GitHub code provided by Meta. Run the download script, enter the unique download URL you received by email, and follow the instructions:

$ cd llama
$ ./download.sh

Step 3. Create a virtual environment

To prevent any compatibility issues, it’s advisable to establish a separate Python environment using Conda. In case Conda isn’t already installed on your system, you can refer to this installation guide. Once Conda is set up, you can create a new environment by entering the command below:

#https://docs.conda.io/projects/miniconda/en/latest/

# These four commands quickly and quietly install the latest 64-bit version
# of the installer and then clean up after themselves. To install a different
# version or architecture of Miniconda for Linux, change the name of the .sh
# installer in the wget command.

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh

# After installing, initialize your newly installed Miniconda. The following
# commands initialize for bash and zsh shells:

~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
$ conda create -n llamacpp python=3.11

Step 4. Activate the environment

Use this command to activate the environment:

$ conda activate llamacpp

Step 5. Go to the llama.cpp folder and install the required dependencies

Type the following commands to continue the installation:

$ cd ..
$ cd llama.cpp
$ python3 -m pip install -r requirements.txt

Step 6. Compile the source code

Option 1:

Enter this command to compile with only CPU support:

$ make 

Option 2:

To compile with CPU and GPU support, you need to have the official CUDA libraries from Nvidia installed.

#https://developer.nvidia.com/cuda-12-3-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_local

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin

sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600

wget https://developer.download.nvidia.com/compute/cuda/12.3.0/local_installers/cuda-repo-ubuntu2204-12-3-local_12.3.0-545.23.06-1_amd64.deb

sudo dpkg -i cuda-repo-ubuntu2204-12-3-local_12.3.0-545.23.06-1_amd64.deb

sudo cp /var/cuda-repo-ubuntu2204-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/

sudo apt-get update

sudo apt-get -y install cuda-toolkit-12-3

sudo apt-get install -y cuda-drivers

Double-check that the NVIDIA driver and CUDA toolkit are installed by running:

$ nvcc --version
$ nvidia-smi

Export the CUDA_HOME environment variable:

$ export CUDA_HOME=/usr/lib/cuda

Then, you can launch the compilation by typing:

$ make clean && LLAMA_CUBLAS=1 CUDA_DOCKER_ARCH=all make -j

Step 7. Perform a model conversion

Run the following command to convert the original models to f16 format (the examples below cover the 7b-chat, 13b-chat, and 70b-chat models):

$ mkdir models/7B
$ python3 convert.py --outfile models/7B/ggml-model-f16.gguf --outtype f16 ../llama/llama-2-7b-chat --vocab-dir ../llama/llama

$ mkdir models/13B
$ python3 convert.py --outfile models/13B/ggml-model-f16.gguf --outtype f16 ../llama/llama-2-13b-chat --vocab-dir ../llama/llama

$ mkdir models/70B
$ python3 convert.py --outfile models/70B/ggml-model-f16.gguf --outtype f16 ../llama/llama-2-70b-chat --vocab-dir ../llama/llama

If the conversion runs successfully, you should have the converted model stored in models/* folders. You can double check this with the ls command:

$ ls models/7B/ 
$ ls models/13B/ 
$ ls models/70B/ 

Step 8. Quantize

Now, you can quantize the model to 4-bits, for example, by using the following command (please note the q4_0 parameter at the end):

$ ./quantize models/7B/ggml-model-f16.gguf models/7B/ggml-model-q4_0.gguf q4_0 

$ ./quantize models/13B/ggml-model-f16.gguf models/13B/ggml-model-q4_0.gguf q4_0 

$ ./quantize models/70B/ggml-model-f16.gguf models/70B/ggml-model-q4_0.gguf q4_0 
$ ./quantize models/70B/ggml-model-f16.gguf models/70B/ggml-model-q2_k.gguf Q2_K
$ ./quantize -h
usage: ./quantize [--help] [--allow-requantize] [--leave-output-tensor] [--pure] model-f32.gguf [model-quant.gguf] type [nthreads]

  --allow-requantize: Allows requantizing tensors that have already been quantized. Warning: This can severely reduce quality compared to quantizing from 16bit or 32bit
  --leave-output-tensor: Will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing
  --pure: Disable k-quant mixtures and quantize all tensors to the same type

Allowed quantization types:
   2  or  Q4_0   :  3.56G, +0.2166 ppl @ LLaMA-v1-7B
   3  or  Q4_1   :  3.90G, +0.1585 ppl @ LLaMA-v1-7B
   8  or  Q5_0   :  4.33G, +0.0683 ppl @ LLaMA-v1-7B
   9  or  Q5_1   :  4.70G, +0.0349 ppl @ LLaMA-v1-7B
  10  or  Q2_K   :  2.63G, +0.6717 ppl @ LLaMA-v1-7B
  12  or  Q3_K   : alias for Q3_K_M
  11  or  Q3_K_S :  2.75G, +0.5551 ppl @ LLaMA-v1-7B
  12  or  Q3_K_M :  3.07G, +0.2496 ppl @ LLaMA-v1-7B
  13  or  Q3_K_L :  3.35G, +0.1764 ppl @ LLaMA-v1-7B
  15  or  Q4_K   : alias for Q4_K_M
  14  or  Q4_K_S :  3.59G, +0.0992 ppl @ LLaMA-v1-7B
  15  or  Q4_K_M :  3.80G, +0.0532 ppl @ LLaMA-v1-7B
  17  or  Q5_K   : alias for Q5_K_M
  16  or  Q5_K_S :  4.33G, +0.0400 ppl @ LLaMA-v1-7B
  17  or  Q5_K_M :  4.45G, +0.0122 ppl @ LLaMA-v1-7B
  18  or  Q6_K   :  5.15G, -0.0008 ppl @ LLaMA-v1-7B
   7  or  Q8_0   :  6.70G, +0.0004 ppl @ LLaMA-v1-7B
   1  or  F16    : 13.00G              @ 7B
   0  or  F32    : 26.00G              @ 7B
          COPY   : only copy tensors, no quantizing

Step 9. Run one of the prompts

Option 1:

You can execute one of the example prompts using only CPU computation by typing the following command:

$ ./main -m ./models/7B/ggml-model-q4_0.gguf -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt

This example will initiate the chat in interactive mode in the console, starting with the chat-with-bob.txt prompt example.

Option 2:

If you compiled llama.cpp with GPU support in Step 6 (Option 2), then you can use one of the following commands:

$ ./main -m ./models/7B/ggml-model-q4_0.gguf -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt --n-gpu-layers 3800

$ ./main -m ./models/13B/ggml-model-q4_0.gguf -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt --n-gpu-layers 3800

$ ./main -m ./models/70B/ggml-model-q4_0.gguf -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt --n-gpu-layers 1000

If you have GPU enabled, you need to set the number of layers to offload to the GPU based on your vRAM capacity. You can increase the number gradually and check with the nvtop command until you find a good spot.
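
If nvtop is not installed yet, on Ubuntu it is usually available from the standard repositories (a minimal sketch; package names may differ on other distributions):

$ sudo apt-get install -y nvtop
$ nvtop

# Alternatively, poll nvidia-smi once per second while the model loads
$ watch -n 1 nvidia-smi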

You can see the full list of parameters of the main command by typing the following:

$ ./main --help

Quick Tip – Disable network traffic monitoring (promiscuous) UI prompt in VMware Fusion

While working on some demos planned for my VMware Explore session, Tech Deep Dive: Automating VMware ESXi Installation At Scale [CODEB1574LV], I kept getting this network monitoring prompt when powering up my Nested ESXi VM running in VMware Fusion. Since Nested ESXi requires the use of promiscuous mode (for inner-VM networking), you will be prompted […]


Broadcom Social Media Advocacy

Useful vSphere Automation techniques for assisting with CrowdStrike remediation

By now, you have probably heard about or been directly impacted by the recent CrowdStrike software update to Microsoft Windows systems that caused an unprecedented global outage. I know IT administrators are working around the clock to remediate thousands, if not tens of thousands, of Windows systems; the current recommended remediation process from CrowdStrike is […]


Broadcom Social Media Advocacy

Don’t Miss Early-Bird Pricing for VMware Explore 2024 Barcelona Until July 29!

Are you ready to dive into the future of technology? VMware Explore 2024 EU is just around the corner, and you won’t want to miss this incredible opportunity. From cutting-edge innovations to industry-leading experts, this event promises to be a highlight for IT professionals, developers, and tech enthusiasts alike.

The best part? Early-bird pricing is available until July 29! By registering now, you can take advantage of significant savings and secure your spot at one of the most anticipated tech events of the year.

What to Expect at VMware Explore 2024 EU:

  • Innovative Sessions: Learn from the best in the industry through a variety of sessions covering the latest trends and technologies, including what is new in vSphere and vSAN 8 Update 3.
  • Hands-On Workshops: Gain practical experience with hands-on labs and workshops designed to enhance your skills with vSAN, VCF, and vVols.
  • Networking Opportunities: Connect with peers, industry leaders, and VMware experts to share knowledge and expand your professional network.
  • Exclusive Insights: Get a first look at new products and solutions that will shape the future of IT.

Don’t wait – register today and take advantage of early-bird pricing before it’s too late. Secure your place at VMware Explore 2024 EU and be part of the conversation that drives innovation forward.

Mark your calendar and get ready to explore the future with VMware!

VMware Explore Registration Barcelona | VMware Explore | 4 – 7 Nov. 2024

VMware NVMeoF Virtual Volumes (vVols) on Pure…

Business-critical Oracle workloads have stringent I/O requirements, and enabling, sustaining, and ensuring the highest possible performance along with continued application availability is a major goal for all mission-critical Oracle applications in order to meet demanding business SLAs on VMware […]


Broadcom Social Media Advocacy

Add nConnect Support to NFS v4.1

Starting with vSphere 8.0 Update 3, nConnect support has been added for NFS v4.1 datastores. This feature enables multiple connections using a single IP address within a session, thereby extending session trunking functionality to that IP. nConnect also coexists with multipathing, allowing for more flexible and efficient network configurations.

Benefits of nConnect

Traditionally, vSphere NFSv4.1 implementations create a single TCP/IP connection from each host to each datastore. This setup can become a bottleneck in scenarios requiring high performance. By enabling multiple connections per IP, nConnect significantly enhances data throughput and performance. Here’s a quick overview of the benefits:

  • Increased Performance: Multiple connections can be established per session, reducing congestion and improving data transfer speeds.
  • Flexibility: Customers can configure datastores with multiple IPs to the same server and also multiple connections with the same IP.
  • Scalability: Supports up to 8 connections per IP, enhancing scalability for demanding workloads.

Configuring nConnect

Adding a New NFSv4.1 Datastore

When adding a new NFSv4.1 datastore, you can specify the number of connections at the time of the mount using the following command:

esxcli storage nfs41 add -H <host> -v <volume-label> -s <remote_share> -c <number_of_connections>
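
For example, a hypothetical mount of a share exported at 192.168.100.10 with four connections might look like this (the IP address, volume label, and export path are placeholder values for illustration):

esxcli storage nfs41 add -H 192.168.100.10 -v NFS41-DS01 -s /export/ds01 -c 4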

By default, the maximum number of connections per session is set to 4. However, this can be increased to 8 using advanced NFS options. Here’s how you can configure it:

  1. Set the maximum number of connections to 8: esxcfg-advcfg -s 8 /NFS41/MaxNConnectConns
  2. Verify the configuration: esxcfg-advcfg -g /NFS41/MaxNConnectConns

The total number of connections used across all mounted NFSv4.1 datastores is limited to 256.

Modifying Connections for an Existing NFSv4.1 Datastore

For an existing NFSv4.1 datastore, the number of connections can be increased or decreased at runtime using the following command:

esxcli storage nfs41 param set -v <volume-label> -c <number_of_connections>
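
For instance, to raise the hypothetical NFS41-DS01 datastore from the earlier example to 8 connections and confirm it is still mounted:

esxcli storage nfs41 param set -v NFS41-DS01 -c 8
esxcli storage nfs41 list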

Multipathing and nConnect Coexistence

There is no impact on multipathing when using nConnect. Both NFSv4.1 nConnect and multipaths can coexist seamlessly. Connections are created for each of the multipathing IPs, allowing for enhanced redundancy and performance.

Example Configuration with Multiple IPs

To add a datastore with multiple IPs and specify the number of connections, use:

esxcli storage nfs41 add -H <IP1,IP2> -v <volume-label> -s <remote_share> -c <number_of_connections>

This command ensures that multiple connections are created for each of the specified IPs, leveraging the full potential of nConnect.

Summary

The introduction of nConnect support in vSphere 8.0 U3 for NFS v4.1 datastores marks a significant enhancement in network performance and flexibility. By allowing multiple connections per IP, nConnect addresses the limitations of single connection setups, providing a robust solution for high-performance environments. Whether you’re configuring a new datastore or updating an existing one, nConnect offers a scalable and efficient way to manage your NFS workloads.

https://core.vmware.com/resource/whats-new-vsphere-8-core-storage

Map Your Next Move: VMware Explore 2024…

Searching for new cloud perspectives? Set your sights higher at VMware Explore. From cloud infrastructure to the software-defined edge to Private AI innovation, you’ll gain perspective, find answers, and see what’s possible with the right cloud solutions. Join us at Explore 2024, Barcelona, Spain!


Broadcom Social Media Advocacy