Announcing General Availability of VMware…

VMware is proud to announce the general availability of VMware Ransomware Recovery for VMware Cloud DR. VMware Ransomware Recovery is a purpose-built ransomware recovery-as-a-service solution that enables businesses to recover from ransomware attacks faster and with greater predictability and confidence.


VMware Social Media Advocacy


Simplify load-balancing

Hands-on Labs are the fastest and easiest way to test-drive the full technical capabilities of VMware products. These evaluations are free, up and running on your browser in minutes, and require no installation.


Test-drive load-balancing in multi-cloud environments from your browser, no installation required. Get started in minutes. Learn More.


VMware Social Media Advocacy

Install and Configure TKG 1.4 – Part 5 – Deploy Pods on TKG Workload Cluster

In the previous blog, we went through the steps of deploying a TKG workload cluster on a vSphere platform.

We have finally reached a stage where we can deploy Kubernetes pods, deployments and other resources on a TKG workload cluster.

The TKG workload cluster is used to host Kubernetes pods and deployments, which in turn host your containerized modern applications.

In this blog, we will see how a Kubernetes pod can be deployed to run on a TKG workload cluster.

As a first step, you need to switch to the required workload cluster context.

Login to your bootstrap machine and run the command below to switch to the required workload cluster context.

kubectl config use-context <name of the workload context>

Run the command below to create a namespace where you would like to deploy your pod.

kubectl create namespace <name of namespace>

Run the command below to create a pod in the dev namespace.

kubectl run <name of a pod> --image=<container image name> --namespace=<name of a namespace>

In case of any errors, you can check the events associated with a pod.

Run the command below to describe the pod and check the events associated with it.

kubectl describe pod <name of a pod> --namespace=<name of a namespace>

In my lab environment, the error ‘toomanyrequests: You have reached your pull rate limit’ was observed when an attempt was made to deploy a pod.

This error occurs because Docker Hub limits the number of Docker image downloads (“pulls”) based on the account type of the user pulling the image. Pull rate limits are applied per IP address. For anonymous users, the limit is 100 pulls per 6 hours per IP address. For authenticated users, it is 200 pulls per 6-hour period. For further details, please refer to the Docker Hub download rate limit documentation.

One workaround for this error is to log in to a Docker Hub account and then attempt to create the pod again.

docker login --username=<username>

Once again, run the command below to create a pod in the dev namespace.

kubectl run <name of a pod> --image=<container image name> --namespace=<name of a namespace>

Run the command below to check the status of the pod.

kubectl get pods --namespace=<name of a namespace>

Run the command below to check the node on which the pod is deployed.

kubectl get pods -o wide --namespace=<name of a namespace>

As you can see, the Kubernetes scheduler has chosen a worker node to host the newly deployed pod.

Likewise, you can deploy other Kubernetes objects such as deployments, services, network policies, etc.

This brings us to the end of the TKG install and configure blog series.

I hope you found this blog series useful.

Install and Configure TKG 1.4 on vSphere – Part 4 – Deploy TKG Workload Cluster on vSphere

The steps to deploy a TKG Management cluster on a vSphere platform were covered in the previous part of this blog series. In this blog, we will go through the steps to deploy a TKG workload cluster on a vSphere platform.

When a TKG Management cluster is created using the installer interface, its configuration is saved in a cluster configuration file with a random name under the ~/.config/tanzu/tkg/clusterconfigs/ directory.

For instance, in my lab environment, the configuration for the TKG Management cluster was saved in the file shown below.

The contents of this file include the cluster name, cluster CIDR, LDAP configuration, etc. The same file can be used to deploy a TKG workload cluster.

Make a copy of the management cluster configuration file and save it with a new name.

Open the new file in vi or any other editor you are comfortable with, and set a name for the TKG workload cluster.
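As a rough sketch, assuming the management cluster configuration was saved as abc123xyz.yaml (a placeholder for the randomly generated file name) and the new workload cluster will be called tkg-workload-01 (also a name of my choosing):

cp ~/.config/tanzu/tkg/clusterconfigs/abc123xyz.yaml ~/.config/tanzu/tkg/clusterconfigs/tkg-workload-01.yaml

vi ~/.config/tanzu/tkg/clusterconfigs/tkg-workload-01.yaml

In the editor, set CLUSTER_NAME: tkg-workload-01.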

If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy Tanzu Kubernetes workload clusters to those namespaces by specifying the NAMESPACE variable. If you do not specify the NAMESPACE variable, Tanzu Kubernetes Grid places clusters in the default namespace.

For example, in the cluster configuration file, you can set the NAMESPACE variable to production as shown below:

NAMESPACE: production

Run the command below to create a TKG workload cluster.

tanzu cluster create --file /root/.config/tanzu/tkg/clusterconfigs/<TKG workload cluster file name>

The command above will verify and validate the configuration defined in your cluster config file and will create a fully functional Tanzu Kubernetes cluster where you can deploy your Kubernetes pods and deployments.

In my lab environment, I used the development cluster plan to deploy a Tanzu Kubernetes workload cluster. The development plan (DEV) deploys a TKG cluster with 1 control plane node and 1 worker node (each with 2 CPUs and 4 GB RAM).

If required, you can change the cluster plan to production or specify other sizes for your control and worker nodes. For more details, please refer to Tanzu Kubernetes Cluster Template.

Once a Tanzu Kubernetes workload cluster is created, run the command below to set the context of a workload cluster in the admin kubeconfig file.

tanzu cluster kubeconfig get <Tanzu Kubernetes Workload Cluster name> --admin

Run the command below to view the available contexts.

kubectl config get-contexts

Run the command below to switch the context to the Tanzu workload cluster

kubectl config use-context <Tanzu Kubernetes Workload cluster context>

Run the command below to check if the current context has been set to the required Tanzu Kubernetes workload cluster context.

kubectl config current-context

Run the command below to examine the status of the newly deployed Tanzu Kubernetes workload cluster. Both the control plane node and the worker node should be in the ‘Ready’ state.

kubectl get nodes

So, with TKG, deploying a Kubernetes cluster is just a matter of running a single command, i.e. tanzu cluster create --file <cluster configuration file>.

In the next blog, we will see how to deploy and run Kubernetes pods on a Tanzu Kubernetes workload cluster.

Install and Configure TKG 1.4 on vSphere – Part 3 – Deploy TKG Management Cluster on vSphere

The prerequisites to deploy a TKG Management cluster on vSphere were covered in the previous part of this blog series. In this blog, we will go through the steps to deploy a TKG Management cluster on a vSphere platform.

You can deploy a TKG Management cluster either using an installer interface or through a command line option by providing a cluster configuration file as input.

In this blog, we will go through the steps to deploy a TKG management cluster using an installer interface.

On the bootstrap machine, where the tanzu cli is installed, run the command below to start an installer interface.

tanzu management-cluster create --ui --bind <IP address of the bootstrap machine>:8080

Please note – If you don’t use the bind option to bind the installer interface to the bootstrap machine’s host IP address and simply run tanzu management-cluster create --ui, by default it will attempt to open the interface on the localhost IP 127.0.0.1. If for some reason the installer interface doesn’t launch on the localhost IP address, you can use the bind option to start the installer interface on the static IP of the bootstrap machine. Thanks to Cormac Hogan for covering the procedure in detail in his blog.

In the web browser (preferably Chrome), enter the URL to access the installer interface.

Click the ‘DEPLOY‘ icon for the VMware vSphere option and enter the vCenter Server FQDN, username and password.

Click Continue to accept the SSL thumbprint and proceed.

From the two given options, select the DEPLOY TKG MANAGEMENT CLUSTER option.

Copy the contents of the /root/.ssh/id_rsa.pub file from the bootstrap machine and paste it into the SSH public key field as shown below. Select the datacenter where you would like to deploy the TKG Management cluster.

In the Management Cluster Settings section, select the Development or Production tile. If you select Development, the installer deploys a management cluster with one control plane node and one worker node. If you select Production, the installer deploys a highly available management cluster with three control plane nodes and three worker nodes (v1.4.1+).

In a lab environment, due to resource constraints, the ‘Development’ option is often preferable.

In the Development tile, use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs. In a lab environment, a small instance type should be sufficient.

Enter a name for the management cluster and choose the CPU and Memory configuration for the worker nodes.

Under Control Plane Endpoint Provider, select Kube-Vip or NSX Advanced Load Balancer to choose the component to use for the control plane API server.

In my lab environment, I’ve chosen Kube-Vip as the control plane endpoint provider.

Provide a static virtual IP address for Kube-Vip. This static virtual IP address should be from the same subnet as the DHCP range, but it should not be part of the DHCP range.

You can either enable Machine Health Checks at this stage or at a later stage once the Management cluster is deployed.

If you have selected Kube-Vip as the control plane endpoint provider, leave the NSX Advanced Load Balancer settings blank.

In the Metadata section, you can optionally provide descriptive information about your management cluster. Any metadata that you specify here applies to the management cluster and to the Tanzu Kubernetes clusters that it manages.

Specify the VM folder, datastore and cluster to host your TKG management cluster nodes.

In the Kubernetes Network section, under Network Name, select a vSphere network to use as the Kubernetes service network. The cluster service CIDR and cluster pod CIDR values are populated by default, so there is no need to change them unless those subnet ranges are not available in your environment for some reason.

In the Identity Management section, enable identity management settings and select the LDAPS option.

Provide the Active Directory FQDN as the LDAPS endpoint.

Provide the BASE DN details for user search and group search attributes.

In the ROOT CA tab, paste the ROOT CA certificate of the Active Directory server.

Please note – The ROOT CA certificate should contain a SAN (Subject Alternative Name).

Test the reachability to the LDAP server and check if you are able to successfully authenticate to the LDAP server using the given bind details.

In the OS Image section, use the drop-down menu to select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs. You need to select the base image template that was imported and deployed during the prerequisite phase.

In the CEIP Participation section, optionally select the check box to opt in to the VMware Customer Experience Improvement Program.

Click Review Configuration to see the details of the management cluster that you have configured.

IaaS provider settings
TKG Management Cluster settings
Resource settings
Kubernetes Network settings
Identity Management settings

OS Image settings and CEIP Agreement settings

When you click Review Configuration, Tanzu Kubernetes Grid populates the cluster configuration file, which is located in the ~/.config/tanzu/tkg/clusterconfigs subdirectory, with the settings that you specified in the interface.

Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.

Once you are done reviewing the configuration that will be used to deploy a TKG Management cluster, click on DEPLOY MANAGEMENT CLUSTER.

You can monitor the progress of deployment as shown below.

Different stages of a TKG Management cluster deployment

While a TKG management cluster deployment is in progress, you can monitor the logs in the same installer interface.

Logs of a TKG Management cluster deployment
Different stages of a TKG Management cluster deployment

Ensure that all the stages of deploying a TKG Management cluster are completed successfully.

Different stages of a TKG Management cluster deployment

On the bootstrap machine, a temporary kubeconfig file for the TKG management cluster is created under the ~/.kube-tkg/tmp/config/ directory. This temporary kubeconfig file can be used to check the status of the pods.
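For example, a command along these lines can be used (the temporary kubeconfig file name is randomly generated, so the path is a placeholder):

kubectl get pods -A --kubeconfig <path to the temporary kubeconfig file>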

Also, the deployment status can be monitored by checking the logs of a capv-controller-manager pod using the kubectl command given below.
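A sketch of such a command, assuming the capv-controller-manager deployment runs in the capv-system namespace with a container named manager (the default layout in a TKG installation):

kubectl logs deployment/capv-controller-manager -n capv-system -c manager --follow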

On the bootstrap machine, use the kubectl command to switch to the management cluster context.

Run the kubectl command below to check the status of the management cluster nodes. The status of both the control plane node and the worker node should be Ready.
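A sketch of these last two steps, assuming a management cluster named tkg-mgmt (a hypothetical name); TKG admin contexts typically follow the <cluster name>-admin@<cluster name> pattern:

kubectl config use-context tkg-mgmt-admin@tkg-mgmt

kubectl get nodes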

This brings us to the end of part 3.

In the next blog, we will go through the steps of deploying a TKG Workload cluster.

Install and Configure TKG 1.4 on vSphere – Part 2 – Prerequisites for Deploying TKG Management Cluster on vSphere

In the previous blog, we went through the steps of setting up a bootstrap machine. In this blog, we will go through some of the key prerequisites of deploying a TKG Management cluster on a vSphere platform.

While all the prerequisites for deploying a TKG Management cluster are covered in detail in the VMware documentation, we will focus on some of the key prerequisites which require special attention.

Before we start with the prerequisites, note that the vSphere platform on which you plan to deploy the TKG cluster nodes should be running at least vSphere 6.7 U3. In my lab environment, the vSphere platform is running the versions given below.

vCenter Server – 7.0 Update 2b – build 17958471

ESXi – 6.7 EP 24 – build 19997733

One of the key prerequisites is to create a base image template containing the OS and Kubernetes versions that the TKG cluster nodes run on. The base image template is published by VMware in an OVA format, which you need to import into the vSphere platform.

Deploy a base image template for TKG cluster nodes

So, let us start by downloading the correct base image for TKG version 1.4.3. The base image OVA is available with both Ubuntu and Photon OS versions. In my lab environment, I’ve used the base image OVA containing the Ubuntu OS version.

Please note – If you get an error while importing or deploying an OVA/OVF template, please check the status of the Content Library service. For more details, please refer to VMware KB 56898. In my lab environment, I had some issues while deploying the OVA template and restarting the Content Library service helped to address the underlying issue.

In the vSphere Client, right-click on a vSphere Cluster and click on ‘Deploy OVF Template’.

In the next window, click on upload files and navigate to the directory on your local client machine where you have downloaded the OVA file and select the OVA file.

Enter a virtual machine name and select a location to deploy the OVA template.

Select a compute resource to deploy the OVA template

On clicking next, the compatibility error below was seen.

On investigating a bit, I figured out that the error above was because the vCLS VMs in my lab environment were not in a ‘powered-on’ state. With the release of vCenter 7 Update 1, VMware introduced the vCLS (vSphere Clustering Service). If you would like to understand more about vSphere Clustering Service, please refer to the VMware blog on vSphere Clustering Service.

The vCLS VMs were throwing the error ‘Failed to power on virtual machine vCLS. Feature ‘cpuid.MMWAIT’ was absent.’

Thanks to Duncan Epping and Cosmin for covering the workaround to this issue in their blogs.

Please note that you may need to log in to the vSphere Host Client to change the VM hardware compatibility version.

After the vCLS VMs were successfully powered on, I was able to proceed with the next steps.

Review the details on the next page.

Accept the license agreements.

Select an appropriate datastore to store your template

Select a port group for the template

On the next page, review the details once before clicking on FINISH

You can monitor while the OVF is being deployed

Once the OVF is deployed, convert it into a template.

Check if all the tasks were completed successfully.

NTP configuration for a bootstrap machine, vCenter Server and ESXi hosts.

One of the important prerequisites is to ensure that NTP is configured on all ESXi hosts, on vCenter Server, and on the bootstrap machine. Also, you need to ensure that all these components including the bootstrap machine have the same NTP server configured as their time source.

Let us start by installing an NTP package on the bootstrap machine.

Run the command below to install an NTP package on the bootstrap machine.

sudo apt install ntp

Run the command below to set the preferred NTP server in the ntp.conf file

sudo bash -c "echo server <ntp server ip address> prefer iburst >> /etc/ntp.conf"

Restart the ntp service

systemctl restart ntp

Check the status of ntp service

systemctl status ntp

Run the command below to check if the ntp server has been configured correctly.

ntpq -p

Another important prerequisite is to ensure that all the components run in the UTC time zone.

Run the command below to set the time zone to UTC

timedatectl set-timezone UTC

Also, ensure that each of the ESXi hosts in your environment is configured to use the correct NTP server.

Log into the vSphere Client and navigate to the Hosts and Clusters view. For a given ESXi host, under System, select ‘Time Configuration’ and click on Edit.

Set the correct NTP server as shown below.

We also need to ensure that vCenter Server is configured to use the same NTP server as the ESXi hosts and bootstrap machine.

Login to the vCenter Server VAMI console to set the NTP server as shown below.

So, now we have the vCenter Server, ESXi hosts and the bootstrap machine configured to use the same NTP server as the time source.

DHCP Configuration for the TKG cluster nodes

A DHCP server with option 3 (Router) and option 6 (DNS) is required to assign IP addresses to the TKG cluster nodes. Each management cluster and Tanzu Kubernetes cluster that you deploy to vSphere requires one static virtual IP address for external requests to the cluster’s API server. This static virtual IP address should be from the same subnet as the DHCP range, however, it should not be part of the DHCP range.

In my lab environment, I’ve configured DHCP with the following scope options. If you have an NSX deployment in your environment, you can also leverage NSX to configure DHCP service.

For a lab environment, the DHCP address range with 10 to 20 IP addresses should be more than enough.

This brings us to the end of part 2.

In the next blog, we will go through the steps of deploying a TKG Management cluster on a vSphere platform.

Install and Configure TKG 1.4 on vSphere – Part 1 – Set Up Bootstrap Machine

Hello techies,

In this blog series, I intend to cover the detailed procedure to install and configure a Tanzu Kubernetes Grid (TKG) Management cluster and TKG Workload Cluster in a lab environment. The installation and configuration of a TKG Management and Workload cluster involve several tasks or stages, right from setting up a bootstrap machine to the point where you have a fully functional TKG workload cluster ready to deploy your Kubernetes pods or workloads.

Therefore, I’ve split all the major tasks into different blogs, so that each of these tasks can be covered in detail. This is the very first part of the TKG blog series, where we will go through the steps to set up a bootstrap machine.

Before we start with setting up a bootstrap machine, you may want to familiarize yourself with the key elements and concepts of a Tanzu Kubernetes Grid deployment.

A bootstrap machine is a machine on which you deploy Tanzu CLI, Docker, kubectl and various other tools, which are required to deploy and manage TKG Management clusters and TKG workload clusters.

A bootstrap machine can be a standard Linux or Windows machine, in either a virtual or physical form factor. In my lab environment, I’ve used an Ubuntu virtual machine with the configuration and OS version given below.

Bootstrap machine configuration

Bootstrap machine operating system

As a first step, let us install Docker on the bootstrap machine. Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Once the Docker repository is set up, you can install and update Docker from the repository.

Install Docker

Update the apt (Advanced Package Tool) package index

sudo apt-get update

Install packages to allow apt to use a repository over HTTPS

sudo apt-get install ca-certificates curl gnupg lsb-release

Add docker’s GPG key

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

Set up the repository.

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the apt package index

 sudo apt-get update

Install the latest version of Docker Engine, containerd, and Docker Compose

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Enter the command below to enable and start docker

systemctl enable docker && systemctl start docker

Verify the version of docker

docker version

Install the Tanzu CLI for Linux

Download the Tanzu CLI for Linux for the TKG version 1.4.3 from the VMware customer connect portal

Copy the tar bundle onto the bootstrap machine and untar it under the /root/tanzu directory using the command given below.

tar -vxf tanzu-cli-bundle-linux-amd64.tar

Navigate to the /root/tanzu/cli/v1.4.3 directory and run the command below to install the Tanzu CLI binary in the /usr/local/bin directory.
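A sketch of this step (the binary name below is what the TKG 1.4 CLI bundle typically contains; adjust it to the file actually present in the directory). The install command also sets the executable permission on the target file:

sudo install tanzu-core-linux_amd64 /usr/local/bin/tanzu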

Please ensure that the tanzu file under the /usr/local/bin has executable permission.

Verify if the tanzu command line utility is working
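For example, the command below should print the installed CLI version:

tanzu version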

Install the kubectl tool for Linux

Download the kubectl tool for Linux for the TKG version 1.4.3 from the VMware Customer Connect portal

Copy the kubectl bundle to the /root/tkg-bin directory on the bootstrap machine and use the gunzip command to unzip the kubectl utility.

Install the kubectl utility using the command below.

Verify if the kubectl command line utility is working.
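A sketch of these steps, with <version> standing in for the exact version string of the kubectl binary shipped in the TKG 1.4.3 bundle:

cd /root/tkg-bin

gunzip kubectl-linux-<version>.gz

sudo install kubectl-linux-<version> /usr/local/bin/kubectl

kubectl version --client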

Install the Tanzu CLI plugins

Navigate to the tanzu folder that contains the cli folder and run the command below to install the tanzu cli plugins

Verify if all the tanzu plugins were installed
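Assuming the current directory is the one that contains the cli folder, the commands typically look like this (based on the TKG 1.4 documentation):

tanzu plugin install --local cli all

tanzu plugin list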

Generate an SSH key pair

When deploying the TKG Management cluster you need to provide the public key part of the SSH key pair of the bootstrap machine. So, you need to generate an SSH key pair on the bootstrap machine.

Run the command below to generate an SSH key pair.
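A minimal example (the key type and comment string below are just the values commonly used in the TKG documentation):

ssh-keygen -t rsa -b 4096 -C "email@example.com"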

Copy the contents of the /root/.ssh/id_rsa.pub file into a Notepad++ file (or any text editor), as you will need to provide the contents of this public key during the deployment of the TKG management cluster.

Install the Carvel tools

Navigate to the /root/tanzu/cli directory and run the commands below to install the carvel tools.
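As an illustration for one of the Carvel tools, ytt (with <version> as a placeholder for the version string in the bundle; repeat the same steps for kapp, kbld and imgpkg):

cd /root/tanzu/cli

gunzip ytt-linux-amd64-<version>.gz

chmod ugo+x ytt-linux-amd64-<version>

mv ytt-linux-amd64-<version> /usr/local/bin/ytt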

This brings us to the end of part 1.

In the next blog, let us go through the prerequisites to deploy TKG Management Cluster on vSphere.

Monitoring of NSX-T DFW Thresholds Using vRealize Log Insight

Hello there,

The performance of NSX-T Distributed Firewall rules largely depends on the memory usage of the various vsip modules at the data-plane level. If the vsip modules have critically high resource utilization at the data-plane level, it can have an adverse impact on the performance of the NSX-T platform. Therefore, it is important to regularly monitor the resource utilization of all the vsip modules at the data-plane level. In a large environment with a high number of ESXi hosts, it may not be feasible to monitor the resource utilization of the vsip modules on each ESXi host individually. This is where vRealize Log Insight can be used to monitor vsip module resource utilization across ESXi hosts.

In this blog, we will go through the procedure to leverage vRealize Log Insight to monitor DFW vsip module usage.

The vsip module resource utilization can be monitored on each of the ESXi hosts using the command given below.

command – nsxcli -c get firewall thresholds

As shown in the output above, the default threshold for vsip module resource utilization is set to 90%, which is a bit on the higher side.

We do not want to be alerted only when the resource utilization has already reached 90%; rather, we would like an alert to be raised when the resource utilization reaches 70%.

Fortunately, the default threshold values can be modified using the procedure given below.

Create CPU and Memory Threshold Profile

POST /api/v1/firewall/profiles

{
  "resource_type": "FirewallCpuMemThresholdsProfile",
  "display_name": "dfw-thresholds-profile-001",
  "cpu_threshold_percentage": 70,
  "mem_threshold_percentage": 70
}

Note down the ID/name of the profile.

Create an NSGroup with all host nodes as members (Manager UI/API).

Note down the ID/name of the NSGroup.

Apply the new profile to the host NSGroup.

POST https://{{nsxmanager}}/api/v1/service-configs

{
  "profiles": [
    {
      "profile_type": "FirewallCpuMemThresholdsProfile",
      "target_id": "2a7003e3-40a6-4644-b688-6af1dd8cbe85",
      "target_display_name": "dfw-thresholds-profile-001",
      "target_type": "UpmProfile",
      "is_valid": true
    }
  ],
  "applied_to": [
    {
      "target_id": "df4a3dbd-0460-4d0e-a097-ebd6c54e1a2b",
      "target_display_name": "nost-node-all",
      "target_type": "NSGroup",
      "is_valid": true
    }
  ],
  "resource_type": "ServiceConfig"
}

The warning below is logged in /var/log/nsx-syslog.log whenever a threshold is breached for any of the vsip modules. Configure an alert in vRealize Log Insight for the string "threshold event is raised".

That’s it folks. I hope you found this blog useful.

NSX-T DFW Monitoring and Troubleshooting

Hello there,

Today, security is a key area of concern for most large enterprises. The NSX-T platform from VMware provides various features and functionalities to strengthen the security framework of an IT enterprise. One such feature is the ability to implement Distributed Firewall rules for East-West traffic.

In this blog, I’ll try and cover some of the command-line options to monitor and troubleshoot various aspects of Distributed Firewall Rules.

Check DFW status on a given ESXi host

Procedure: Login to the ESXi host with SSH and enter the command nsxcli to switch to NSX command-line interface. After switching to nsxcli, run the command given below.

Command – get firewall status

Purpose – This command is used to check if the firewall rules are enabled.

Check the firewall thresholds.

Procedure: Login to the ESXi host with SSH and enter the command nsxcli to switch to NSX command-line interface. After switching to nsxcli, run the command given below.

Command – get firewall thresholds

Purpose – This command provides the max size and current size for each of the DFW and vsip components. Check if the current size of any of the components is close to the max threshold limit.

Check the rule count and section count for L2 and L3 DFW rules

Procedure: Login to NSX-T Manager using admin user and enter the command get firewall summary

Command – get firewall summary

Purpose – This command is used to check the total number of firewall rules and sections

Check the heap memory stats for the vsip module

Procedure: Login to the ESXi host with SSH and enter the command below. Execute this command on the ESXi host on which the heap memory stats for the vsip modules need to be checked. 

Command – vsipioctl getmeminfo

Purpose – This command provides the memory utilization for each of the vsip modules. Check if the memory utilization of any of the vsip modules is close to the critical threshold.

Check the number of firewall rules per virtual machine’s vNIC

Run the command below on the ESXi host to get the filter name associated with a virtual NIC of a given VM.
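For example, summarize-dvfilter lists all filters on the host, and piping it through grep narrows the output down to the VM of interest (the -A value is arbitrary and only controls how many lines of context are shown):

summarize-dvfilter | grep -A 10 <VM name>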

Once the filter name is known, execute the command below to list the firewall rules associated with a given filter or virtual NIC.

vsipioctl getrules -f <filter name>

You can execute the command below to get the total number of firewall rules applied to a given vNIC. This is crucial because the total number of firewall rules per vNIC should not exceed 3500 (the max permissible limit is 4000!)

vsipioctl getrules -f <vnic name of the VM> | grep "rule" | grep "inout" | wc -l

Check the stats for a given dvfilter.

Procedure: Login to the ESXi host with SSH and run the command given below. Execute this command on the ESXi host running the VM for which the dvfilter stats need to be checked.

Check the flow details for a given dvfilter

Procedure – Login to the ESXi host and run the summarize-dvfilter command to get the details of dvfilter associated with a vnic of a given VM. Once the dvfilter is known, execute the vsipioctl getflows -f <dvfilter name> command to get the details of network flows.

Commands:

summarize-dvfilter  – Command to get the dvfilter name associated with a vNIC of a given VM

vsipioctl getflows -f <filter name> – Command to get the flow details.

Check the active connections for a given dvfilter

Procedure – Login to the ESXi host and run the summarize-dvfilter command to get the details of the dvfilter associated with a vNIC of a given VM. Once the dvfilter is known, execute the vsipioctl getconnections -f <dvfilter name> command to get the details of the active connections.

Commands:

summarize-dvfilter – Command to get the dvfilter name.

vsipioctl getconnections -f <filter name> – Command to get the connection details.

Check the active connection count for a given dvfilter

Procedure – Login to the ESXi host and run the summarize-dvfilter command to get the details of dvfilter associated with a vnic of a given VM. Once the dvfilter is known, execute the vsipioctl getconncount -f <dvfilter name> command to get the details of active connection count.

Commands:

summarize-dvfilter – Command to get the dvfilter name.

vsipioctl getconncount -f <dvfilter name> – Command to get the active connection count.

Well, this concludes the blog. I hope you found this blog useful.