V-to-T migration expert sharing with RestNSX – serie04

Cross vCenter migration and more using ReSTNSX. Watch customer success stories of migrating from NSX-V to NSX-T using ReSTNSX, a VMware partner. The ReSTNSX MAT (Migration Assistance Tool) provides customers with an easy-to-use process for migrating configuration from NSX-V to NSX-T, complete with rollback. This allows admins to stand up an NSX-T environment for validation purposes prior to cutover.


VMware Social Media Advocacy

Kubernetes Hands-On-Lab + Free Quick Start E-Book

This demo is a follow-along guide through a Kubernetes hands-on lab to get yourself a free Kubernetes Nigel Poulton Quick Start E-Book! We will walk through creating a MySQL database, deploying Kasten K10, protecting your MySQL database, accidentally deleting data and then recovering the data. […]


VMware Social Media Advocacy

Let’s upgrade to NSX-T 3.1.1 – Step by Step procedure

Hi Folks,

It has been a while since NSX-T 3.1.1 was released, and it comes with some really cool capabilities that can help you migrate from NSX-V (NSX Data Center for vSphere) to NSX-T. I’ve listed some of these capabilities below.

  • Support of Universal Objects Migration for a Single Site
  • Migration of NSX-V Environment with vRealize Automation
  • Modular Migration for Hosts and Distributed Firewall
  • Modular Migration for Distributed Firewall available from UI
  • NSX-T bridging to extend L2 between NSX for vSphere and NSX-T and support lift and shift migration using vMotion

Besides the capabilities listed above, NSX-T 3.1.1 comes with other exciting features such as NSX Policy API support for Identity Firewall configuration and NSX API support for AVI load balancer configuration. For the complete list, please refer to the ‘What’s New’ section of the NSX-T 3.1.1 release notes.

With this brief overview of the features introduced in NSX-T 3.1.1, let us now look at how to upgrade to NSX-T 3.1.1.

The prerequisites, upgrade procedure and post-upgrade tasks for the NSX-T 3.1.1 and NSX-T 3.1.2 upgrades remain the same, so you can refer to this blog post for upgrading to either of these versions.

Before we get into the actual upgrade procedure, I would strongly recommend going through the sections below from the NSX-T Upgrade Guide.

Also, from a prerequisites standpoint, it is important to confirm the points below.

For any upgrade task, it is important to first take a backup of the existing configuration.

Backup the NSX-T Manager

  • To take a manual backup of the existing configuration, log in to the NSX-T Manager console and navigate to the ‘Backup and Restore’ option under the ‘System’ tab.
  • Click EDIT to configure the file server as the backup location and provide the IP address, port, protocol, username, password, destination directory and passphrase details as shown below.

  • Leave the SSH fingerprint option blank. Once you click the SAVE button, it will prompt you to accept the SSH fingerprint.
  • Once the file server is configured, click ‘BACKUP NOW’ to start the backup.
  • Monitor the progress of the backup task.
  • Navigate to the file server backup location and confirm that a backup file has been created at the preconfigured location.
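
If you prefer to check or trigger the backup through the API instead of the UI, a minimal sketch is shown below. The endpoints are the cluster backup APIs I have used with NSX-T 3.x, so please verify them against the NSX-T REST API guide for your version; the manager FQDN is a placeholder.

curl -k -u admin 'https://nsxmgr01.lab.local/api/v1/cluster/backups/config'
curl -k -u admin -X POST 'https://nsxmgr01.lab.local/api/v1/cluster?action=backup_to_remote'

The first call returns the currently configured backup settings, and the second triggers a one-time backup to the configured file server.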

Provision a secondary disk to all NSX-T Manager appliances

  • If you are upgrading from a version earlier than NSX-T Data Center 3.0, provision a secondary hard disk of exactly 100 GB on each of the NSX-T Manager appliances.
  • Log in to the vSphere Client and add a second hard disk of exactly 100 GB to each of the three NSX-T Manager appliances.

Verify the current state of NSX-T Data Center

Before proceeding with the upgrade, it is important to perform the checks below and confirm that the current state of the NSX-T platform is healthy.

  • Verify the admin and root credentials for all the NSX-T Manager appliances and NSX-T edge appliances
  • Verify the admin user credentials for the NSX-T admin console
  • Check the Dashboard, system overview, host transport nodes, edge transport nodes, NSX Edge cluster, transport nodes, HA status of the edge, and all logical entities to make sure that all the status indicators are green, deployed, and do not show any warnings
  • Verify the cluster status by running the command ‘get cluster status verbose’ on the CLI prompt of any of the NSX-T Manager appliances.
  • Verify the service status by running the command ‘get service’ on the CLI prompt of any of the NSX-T Manager appliances.
  • Confirm North-South connectivity by pinging out from one of the VMs connected to an NSX-T logical segment.
  • Confirm East-West connectivity between two VMs connected to NSX-T logical segments.
  • Record the BGP states on the NSX Edge devices by running the set of commands below on the NSX-T Edge appliances.
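
For reference, the set of Edge CLI commands I typically record looks like the sketch below. The VRF ID shown is just an example; take the actual ID of the Tier-0 service router from the output of 'get logical-routers' in your environment, and verify the exact syntax against the NSX-T CLI reference for your version.

get logical-routers
vrf 1
get bgp neighbor summary
get route bgp
exit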

Download the NSX-T upgrade bundle

Download the NSX-T upgrade bundle to a client machine from which you can access the NSX-T Manager admin console. The links to download the upgrade bundles for NSX-T 3.1.1 and NSX-T 3.1.2 are given below.

NSX-T 3.1.1 Upgrade Bundle

NSX-T 3.1.2 Upgrade Bundle

Important references for Upgrade

The KB article below refers to an issue that can occur during an upgrade to NSX-T 3.1.0/3.1.1 when a Tier-0 or Tier-1 Gateway incorrectly has two default firewall sections.
When the section “Policy_Default_Infra” is uppermost and has the wrong state, NAT rules will not work as expected and traffic expected to be allowed by the default rule may be disrupted. The KB article covers the workaround for this issue.

KB article 82202 – Network disruption observed after upgrade to NSX-T 3.1.0/3.1.1

NSX-T Upgrade Procedure

NSX-T components should be upgraded using the sequence below

  • Upgrade the upgrade coordinator
  • Upgrade the NSX Edge cluster
  • Configure and upgrade the hosts
  • Upgrade the Management plane

Please note: for the entire upgrade process, log in to the NSX-T Manager using the IP address of the orchestrator node. Ensure that you do not use a virtual IP address or the FQDN to upgrade NSX-T Data Center.

To identify the orchestrator node, run the command below on the CLI prompt of any of the NSX-T Managers. In the output, check the IP address displayed against “Enabled on” and then use the same IP address throughout the upgrade process.
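
The command I used for this check in my lab is shown below; verify it against the NSX-T CLI reference for your version.

get service install-upgrade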

To change the orchestrator node, log in to the NSX-T Manager node that you want to set as the orchestrator node and run the command set repository-ip.

Also, when the Management Plane upgrade is in progress, avoid any configuration changes from any of the nodes.

Upgrade of the upgrade coordinator

The Upgrade coordinator is a self-contained web application that orchestrates the upgrade process of hosts, NSX Edge cluster, NSX Controller cluster, and Management plane.

Prerequisites

Ensure that the NSX-T upgrade bundle is downloaded onto a machine from which you can connect to the NSX-T Manager admin console.

Upgrade Procedure

  • Log in as the admin user to the NSX-T Manager admin console using the URL https://nsx-manager-ip-address/login.jsp?local=true, where nsx-manager-ip-address is the IP address of the orchestrator node.
  • Navigate to System > Upgrade from the navigation panel and click ‘Proceed to Upgrade’.
  • Click ‘Browse’, navigate to the location where you downloaded the upgrade bundle (.mub) file, and upload the upgrade bundle.

  • The speed of the upload task may vary based on network speed. Monitor the upload task and wait until the ‘BEGIN UPGRADE’ button is highlighted.

  • Once the upgrade bundle is uploaded successfully, the BEGIN UPGRADE button will be highlighted as shown below.

  • Accept the End User License Agreement and click on ‘CONTINUE‘ to proceed with the upgrade process

  • Verify the current and target version shown under the Prepare for Upgrade tab
  • Click Run Pre-Checks to verify that all the NSX-T Data Center components are ready for upgrade. You must run the pre-checks when you change or reset your upgrade plan, or upload a new upgrade bundle.

  • Check the Host notification, Edge notification and Management notification next to pre-checks to see the warning details

  • In a lab environment, the warning below was displayed for the NSX-T Manager component due to the lack of CPU resources for the NSX-T Manager appliances.

  • Address any such warnings or issues and ensure that the pre-check status is ‘Success’ for Edges, Hosts and NSX Managers before proceeding with the upgrade.
  • You can click Download Pre-Check Results to download a CSV file with details about pre-check errors for each component and their status.

Upgrade of NSX Edge Cluster

After the upgrade coordinator is upgraded, the upgrade coordinator upgrades the NSX Edge cluster.

Based on the selected option, the upgrade coordinator upgrades multiple Edge clusters/groups simultaneously or in a serial fashion.

However, the NSX Edge nodes within the same cluster are upgraded in serial mode so that when the upgrading node is down, the other nodes in the NSX Edge cluster remain active to continuously forward traffic.

The description of the two options is copied below. Of the two, the ‘Serial’ option is selected by default.

Serial option to upgrade NSX-Edge groups

Parallel option to upgrade NSX-Edge groups

The maximum limit of simultaneous upgrade of Edge upgrade unit groups is five.

Prerequisites

Upgrade Procedure

  • Select an appropriate upgrade plan option (Serial or Parallel) based on your requirement and click ‘Start’ to upgrade the NSX Edge cluster. I left the option at the default, i.e. ‘Serial’.
  • Monitor the progress while the upgrade coordinator upgrades each of the NSX-Edge nodes within a cluster one after the other.
  • Wait until the upgrade status changes from ‘In progress’ to ‘Successful’

  • Once the upgrade is successful, click ‘RUN POST CHECKS’ to run the post-upgrade checks.
  • Ensure that no issues are reported as part of the post-upgrade checks.

Upgrade of Hosts

Before you proceed to upgrade and install the NSX VIBs on the ESXi hosts, you can customise the upgrade sequence of the hosts, disable certain hosts from the upgrade, or pause the upgrade at various stages of the upgrade process. For more details, please refer to Configure Hosts.

Again, there are two options for upgrading ESXi hosts – Serial and Parallel. The description of both options is detailed in the screenshots below.

Serial option to upgrade ESXi hosts
Parallel option to upgrade ESXi hosts

The maximum limit for a simultaneous upgrade is five host upgrade unit groups and five hosts per group.

Prerequisites

  • Verify that you have configured the overall hosts upgrade plan. Please refer Configure Hosts.
  • Verify that ESXi hosts that are part of a disabled DRS cluster or standalone ESXi hosts are placed in maintenance mode. For ESXi hosts that are part of a fully enabled DRS cluster, if the host is not in maintenance mode, the upgrade coordinator requests the host to be put in maintenance mode.
  • For hosts running ESXi 6.5U2/U3 or ESXi 6.7U1/U2, during maintenance mode upgrade to NSX-T Data Center 2.5.1, the host is rebooted if stale DV filters are found to be present on the host. Upgrade to ESXi 6.7 U3 or ESXi 6.5 P04 prior to upgrading to NSX-T Data Center 2.5.1 if you want to avoid rebooting the host during the NSX-T Data Center upgrade.

Upgrade Procedure

  • Select an appropriate upgrade plan option (Serial or Parallel) based on your requirement. I left the option at the default, i.e. ‘Serial’.
  • Click ‘Start’ to upgrade the ESXi hosts.
  • Monitor the progress while the NSX vibs of the new version are installed on each of the ESXi hosts

  • Click Run Post Checks to make sure that the upgraded hosts and NSX-T Data Center do not have any problems.

  • After the upgrade is successful, verify that the latest version of the NSX-T Data Center packages is installed on the vSphere hosts, KVM hosts, and bare metal servers.

For vSphere hosts, use the command esxcli software vib list | grep -i 'nsx'

NSX VIBS installed after upgrade to version 3.1.1

For Ubuntu hosts, use the command dpkg -l | grep -i 'nsx'

For SUSE Linux Enterprise Server, Red Hat or CentOS hosts, enter rpm -qa | egrep 'nsx|openvswitch|nicira'

  • Power on the tenant VMs of standalone ESXi hosts that were powered off before the upgrade
  • Power on or return the tenant VMs of ESXi hosts that are part of a disabled DRS cluster that were powered off before the upgrade.

Upgrade Management Plane

Please make a note of a few important points before starting the upgrade of the Management plane.

  • When the Management Plane upgrade is in progress, avoid any configuration changes from any of the nodes.
  • After you initiate the upgrade, the NSX Manager user interface is briefly inaccessible. Then the NSX Manager user interface, API, and CLI are not accessible until the upgrade finishes and the Management plane is restarted

Prerequisites

  • Ensure that NSX Edge cluster is upgraded successfully
  • If you are upgrading from a version earlier than NSX-T Data Center 3.0, ensure that a secondary disk of exactly 100 GB is added to each of the NSX-T Manager appliances. Reboot the appliance if the secondary disk is not detected by the Upgrade Coordinator.
  • Take a backup of the NSX-T Manager

Upgrade Procedure

  • Click Start to upgrade the Management plane.
  • Please go through the note below, which appears as soon as you click ‘START’.
  • You can safely ignore any upgrade-related errors, such as HTTP service disruption, that appear at this time. These errors appear because the Management plane restarts during the upgrade. Wait until all the nodes are upgraded. It may take several minutes for the cluster to reach a stable state.

  • Monitor the upgrade progress from the NSX Manager CLI for the orchestrator node
  • In the CLI, log in to the NSX Manager to verify that the services have started and to check the cluster status.
  • Monitor the progress using the command ‘get upgrade progress-status’ as each of the NSX-T Manager appliances is upgraded.
  • While upgrading the NSX-T Manager appliances, the upgrade coordinator will identify a newly added disk of size 100 GB.

  • Once all the NSX-T Manager appliances are upgraded, run the get cluster status command and ensure that the overall status of the cluster is stable.

  • Also, run the get service command and ensure that all the required services are in running state.
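
For quick reference, these are the manager CLI checks described above, run from the orchestrator node once the Management plane upgrade completes:

get upgrade progress-status
get cluster status
get service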

Post Upgrade Tasks

  • From your browser, log in as a local admin user to an NSX Manager using the URL https://<nsx-manager-ip-address>/login.jsp?local=true.

  • Navigate to System>Upgrade and verify the upgrade summary

  • Check the Upgrade History

  • Verify that the Dashboard, fabric hosts, NSX Edge cluster, transport nodes, and all logical entities status indicators are green, normal, deployed, and do not show any warnings.

  • Navigate to the Alarms tab and check if there are any alarms.
  • By default, passwords expire after 90 days. If the password expires, you will be unable to log in and manage components. Check the existing password expiration details for the admin, audit and root users (see the sketch after this list).

  • You can set the expiration period to between 1 and 9999 days using the command set user admin password-expiration <1-9999> from the NSX-T CLI (nsxcli).
  • Log in to the vSphere Client to verify if your existing NSX Edge VMs are configured with the following CPU and Memory values. If they are not, edit the VM settings to match these values.

  • Navigate to the ‘Backup and Restore’ option under the ‘System’ tab and take a manual backup of the upgraded NSX-T Manager.
  • The automatic backup option was disabled before the upgrade, so remember to enable the automatic backup option again.
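
For the password expiration check referenced above, the commands below are what I used on the NSX-T Manager CLI; verify them against the CLI reference for your version. The value 9999 is simply the maximum allowed expiration period, used here as an example.

get user admin password-expiration
get user audit password-expiration
get user root password-expiration
set user admin password-expiration 9999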

This concludes the upgrade of NSX-T Data Center to version 3.1.1. Thank you for reading through my blog; I hope you found it useful. I would appreciate your feedback in the comments section below.

Kubernetes cluster overview

Before we get into the various components of a Kubernetes cluster, it is important to understand what a pod is. Kubernetes encapsulates containers into a Kubernetes object called a pod. A pod is a single instance of a containerised application running on a worker node of the Kubernetes cluster. If the demand for a given containerised application increases, Kubernetes spins up more pods on the same worker node or on other worker nodes in the cluster. If the demand for an application goes down, the number of pods can be scaled down to free up resources on the worker nodes. In most cases there is a one-to-one relationship between a container and a pod, i.e. only one container is encapsulated in a pod, except in cases where a sidecar or helper container is required by an application to perform additional tasks.

So, why does Kubernetes encapsulate containers in an object called a pod, when a container can be run directly on the host hardware of a worker node? It becomes far easier for Kubernetes to manage and orchestrate containers when they are encapsulated inside a pod. For instance, pods can easily be scaled up, scaled down, deployed, destroyed and assigned storage and network resources based on the requirement. With this high-level understanding of a pod, let us now dive into the various components of a Kubernetes cluster.
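
As a quick illustration, assuming kubectl is configured against a cluster, a pod can be created and inspected with a handful of commands; the nginx image and the pod name below are just examples.

kubectl run nginx-pod --image=nginx      # create a single pod running an nginx container
kubectl get pods -o wide                 # list pods and the worker nodes they were scheduled on
kubectl describe pod nginx-pod           # inspect the pod's containers, events and status
kubectl delete pod nginx-pod             # tear the pod down and free up node resources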

A Kubernetes cluster comprises a Master node and one or more worker nodes. The Master node hosts the various control plane components required to manage and administer the Kubernetes cluster, whereas the worker nodes are responsible for hosting pods. Some of the key functions performed by the Master node are listed below.

  • The Master node is responsible for the management and administration of the kubernetes cluster.
  • It determines which containers need to be scheduled on which of the Worker nodes
  • It also stores the complete inventory of the cluster
  • It monitors the Worker nodes and the containers running on the Worker nodes.

The Master node does all of these using a set of control plane components detailed below.

Kube-apiserver – The kube-apiserver is responsible for orchestrating all operations within the Kubernetes cluster. It exposes the Kubernetes API, which is used by administrators to perform management operations on the cluster. kubectl is a client utility used to interact with the kube-apiserver. When a user enters a kubectl command to create a pod, the request is first authenticated, then validated and subsequently executed by the kube-apiserver. The kube-apiserver also updates the ETCD datastore with the information about the new pod.

If a Kubernetes cluster is deployed using the kubeadm tool, kubeadm deploys the control plane components, including the kube-apiserver, as pods in the kube-system namespace. In such a case, the pod definition file /etc/kubernetes/manifests/kube-apiserver.yaml defines all the options with which the kube-apiserver is invoked. If a Kubernetes cluster is deployed the hard way (without using the kubeadm tool) using the binaries from the Kubernetes release page, the options used to invoke the kube-apiserver are available in /etc/systemd/system/kube-apiserver.service. While specifying the path of the kube-apiserver pod definition file or service definition file may be too much detail for a high-level overview, I thought of covering it here as such detail usually comes in handy while troubleshooting cluster issues. Moreover, the given pod definition file path and service definition file path hold true for the other control plane components as well, with the kubelet being the exception. The kubelet is not deployed as a pod by kubeadm; it is installed separately using the binaries from the Kubernetes release page and runs as a service.
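
For example, on a kubeadm-built cluster you can see the control plane components running as pods and peek at the kube-apiserver options via the static pod manifest (the paths are the kubeadm defaults mentioned above):

kubectl -n kube-system get pods                          # control plane components run as pods here
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml   # options the kube-apiserver is started with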

ETCD – ETCD stores the complete cluster inventory in the form of key-value pairs. It is a key-value datastore, which stores information such as nodes, pods, configs, secrets, roles, bindings, etc. etcdctl is the ETCD client, a command-line utility used to interact with the ETCD key-value datastore.
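
As a small sketch of interacting with ETCD on a kubeadm-built cluster, assuming etcdctl is installed on the control plane node and the certificate paths are the kubeadm defaults (otherwise run the same command inside the etcd pod):

sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head

This lists the first few keys under /registry, where Kubernetes stores objects such as nodes, pods and secrets.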

Kube-scheduler – As the name suggests, the kube-scheduler determines which pods need to be placed on which worker nodes based on the pod’s resource requirements, the worker node capacity and other policies or constraints such as taints and tolerations or node affinity rules. It is worth noting that the kube-scheduler only determines which pod needs to be scheduled on which node; it does not actually deploy the pod on the node (that is done by the kubelet).

Controller Manager – The controller manager internally comprises multiple controllers; two key ones are:

  • Node controller – The node controller is responsible for onboarding new nodes into the cluster and taking care of situations where nodes become unavailable. The node controller monitors the status of the nodes every 5 seconds (node monitor period). If no heartbeat is received for 40 seconds (node monitor grace period), the node controller marks the node as unreachable. After a node is marked unreachable, the node controller waits for 5 minutes (pod eviction timeout) to check whether the node has come back up. If the affected node does not come up after 5 minutes, the node controller evicts the pods that are part of a replication group from the affected node and provisions those pods onto the healthy nodes in the cluster (the related kube-controller-manager flags are sketched after this list).
  • Replication controller – The replication controller ensures that the desired number of pods is always running in a given replication group. If a pod within a replication group dies, the replication controller creates another pod.
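
The node controller timers mentioned above map to kube-controller-manager flags. On a kubeadm-built cluster you could check or tune them in the static pod manifest; the flag names and defaults below are the long-standing ones, but verify them against your Kubernetes version as they have changed over releases.

grep -E "node-monitor|pod-eviction" /etc/kubernetes/manifests/kube-controller-manager.yaml
# typical settings, if you need to override the defaults:
#   --node-monitor-period=5s
#   --node-monitor-grace-period=40s
#   --pod-eviction-timeout=5m0s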

Kubelet – The kubelet runs on every worker node. It listens for instructions from the kube-apiserver running on the Master node and deploys or destroys pods on the worker node as required. When the kubelet receives a request from the kube-apiserver to create a pod, it instructs the container runtime engine, which could be Docker, to pull the required image and run an instance of the pod. The kube-apiserver periodically fetches reports from the kubelet to monitor the status of the pods running on a node. The kubelet service also runs on the Master node to manage control plane pods (control plane components that are deployed as pods) and pods running services like networking and DNS. The kubelet also registers a worker node with the Kubernetes cluster. Once a worker node is registered, it is monitored by the node controller on the Master node. Unlike the other components, the kubelet is not installed by the kubeadm tool. To install the kubelet, you need to download the kubelet binary from the Kubernetes release page, extract it and run it as a service.
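
A quick way to confirm the kubelet is healthy on a node, assuming it was set up as a systemd service (the common pattern):

systemctl status kubelet
journalctl -u kubelet --since "10 min ago"   # recent kubelet logs, useful for troubleshooting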

Kube-proxy – The kube-proxy is responsible for enabling communication between the worker nodes. For instance, an application container hosted on one worker node communicates with a database container on another worker node by leveraging the kube-proxy service. In a Kubernetes cluster, the pods running custom applications, databases or web servers are exposed by means of a service (a ClusterIP, NodePort or load balancer service). The service is a kind of logical entity which is created in memory and is accessible to all the nodes of the Kubernetes cluster. The kube-proxy component on each node is responsible for forwarding any traffic from the service to the back-end pods.
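
As an illustration, assuming kubectl access, a deployment can be exposed through a NodePort service, and kube-proxy on every node then forwards traffic for that service to the backing pods; the names and port below are examples.

kubectl create deployment web --image=nginx --replicas=2
kubectl expose deployment web --port=80 --type=NodePort
kubectl get svc web -o wide      # note the ClusterIP and NodePort allocated to the service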

Networking and DNS services – The services responsible for managing networking and DNS requirements of the cluster are usually deployed as pods on the Master node.

Let’s upgrade to vCenter Server 7.0 U1- Step by Step guide

Hi Folks,

The vSphere 7 release has been a game changer for VMware in many ways. vSphere 7 comes with quite a few interesting features, such as support for containers and the ability to run fully compliant and conformant Kubernetes with vSphere, an improved DRS algorithm which takes VM “happiness” into account while load balancing cluster resources, vSphere Lifecycle Manager for better lifecycle operations, and much more. For the complete list, please refer to Introducing vSphere 7.

As most customers are looking forward to upgrading their vSphere platform to vSphere 7.0, I thought it would help to capture the prerequisites and the complete procedure to upgrade vCenter Server to 7.0 U1 in a blog.

When we plan to perform an upgrade of any software component, the first thing that comes to mind is the prerequisites. Are all the important prerequisites, like backups, licenses, compute resources and storage resources, in place to support the upgrade process? So, let us first look at the prerequisites required to upgrade vCenter Server to vCenter Server 7.0 U1.

I usually prefer to prepare a list of prerequisites in the tabular format below, as it allows me to track the status of each prerequisite. This way, I am sure that I have not missed any of the key prerequisites. While it is a long list, trust me, if you get your prerequisites right you can be almost certain that your upgrade process will go smoothly.

Prerequisite | Reference link | Status
Go through the release notes of vCenter Server 7.0 U1, specifically the ‘What’s New’ and ‘Known Issues’ sections | vCenter Server 7.0 U1 Release Notes |
If you are running a vCenter Server Appliance, take a file-based backup of the vCenter Server configuration from within the vCenter Server Appliance Management console | vCenter Server Appliance File based backup |
Use the VMware interop matrix and check the interoperability of vCenter Server 7.0 U1 with other VMware components that are integrated with your existing vCenter Server | VMware Interoperability Matrix |
Check the compatibility of vCenter Server 7.0 U1 with any third-party products, like backup or monitoring solutions, which you may have integrated with your existing vCenter Server | N/A |
Confirm the supported upgrade path for the vCenter Server upgrade | vCenter Server 7.0 supported upgrade path |
Raise a proactive SR with the VMware support team | N/A |
Shut down the vCenter Server and take an image-based backup of the vCenter Server appliance in the powered-off state. After the backup, power on the vCenter Server Appliance | N/A |
Download the vCenter Server 7.0 U1 ISO image from the vCenter Server download site | vCenter Server 7.0 U1 download |
To perform an upgrade, download the vCenter Server 7.0 U1 ISO image and run the vCenter Server installer from a network client machine which has connectivity to the existing vCenter Server and ESXi hosts. For optimal performance of the GUI and CLI vCenter Server installers, the network client machine should meet the minimum hardware requirements | System Requirements for the vCenter Server Installer |

The upgrade of the appliance is a migration of the old version to the new version, which includes deploying a new appliance of version 7.0. Unsynchronized clocks can result in authentication problems, which can cause the installation to fail or prevent the vCenter Server vmware-vpxd service from starting. Verify that all components on the vSphere network have their clocks synchronised | Synchronising Clocks on the vSphere Network |
When you use Fully Qualified Domain Names, verify that the client machine from which you are deploying the vCenter appliance and the network on which you are deploying the appliance use the same DNS server | N/A |
Check the supported deployment topology for vCenter Server | Moving from a Deprecated to a Supported vCenter Server Deployment Topology Before Upgrade or Migration |
Export the vSphere Distributed Switch configuration | https://kb.vmware.com/s/article/2034602 |

The vSphere 5.x, 6.x and 7.x releases require different license keys. It is not possible to use vSphere 5.x licenses on vSphere 6.x, and the same is true when upgrading from vSphere 6.x to 7.x. Therefore, upgrade the existing vCenter Server 6.x license key | https://kb.vmware.com/s/article/2107538 |
For information on upgrading license keys, please refer to KB 81665 | https://kb.vmware.com/s/article/81665 |

When you deploy the vCenter Server appliance, you can select to deploy an appliance that is suitable for the size of your vSphere environment. Ensure that you have sufficient CPU and memory resources on the ESXi host on which you plan to deploy the new vCenter Server Appliance | Hardware Requirements for the vCenter Server Appliance |
When you deploy the vCenter Server appliance, the ESXi host or DRS cluster on which you deploy the appliance must meet minimum storage requirements. Ensure that you have sufficient disk space on the datastore on which you plan to host the vCenter Server Appliance disks. The given storage requirements include the requirements for the vSphere Lifecycle Manager that runs as a service in the vCenter Server appliance | Storage Requirements for the vCenter Server Appliance |
The VMware vCenter Server appliance can be deployed on ESXi 6.5 hosts or later, or on vCenter Server instances 6.5 or later. Ensure that all the ESXi hosts in your cluster are running at least ESXi 6.5 or later | Software Requirements for the vCenter Server Appliance |
VMware has tested the use of the vSphere Client for certain guest operating system and browser versions. Ensure that the client machines from which you plan to access vCenter Server using the vSphere Client meet these requirements | vSphere Client Software Requirements |

The vCenter Server system must be able to send data to every managed host and receive data from the vSphere Client. Though not recommended, if you are planning to deploy vCenter Server on a separate subnet from the existing vCenter Server and ESXi hosts, check that all the required ports are open to allow communication between the newly upgraded vCenter Server appliance and the existing vCenter Server and ESXi hosts | Port requirements for vCenter Server 7 |
The deployment of a new vCenter Server appliance requires a temporary static IP. Reserve a static IP address for the new vCenter Server appliance from the same subnet as the existing vCenter Server, as it will save you the hassle of raising firewall ports for the new appliance | IP Requirement for the vCenter Server Appliance |
When you deploy the new vCenter Server appliance, in the temporary network settings you need to assign a static IP address and an FQDN that is resolvable by a DNS server. After the upgrade, the appliance frees this static IP address and assumes the network settings of the old appliance. Register a static IP address for the new vCenter Server appliance with an FQDN and confirm DNS resolution for both forward and reverse lookup queries | DNS Requirements for the vCenter Server Appliance |
Ensure that the ESXi host management interface has a valid DNS resolution from the existing vCenter Server and from the network client machine from which you plan to run the installer and perform the upgrade. Ensure that the existing vCenter Server has a valid DNS resolution from all ESXi hosts and from the network client machine. Confirm forward and reverse lookup for both the vCenter Server and the ESXi hosts | DNS Requirements for the vCenter Server Appliance |

ESXi hosts must be at version 6.5 or later | Prepare ESXi Hosts for vCenter Server Appliance Upgrade |
Your source ESXi host (the host on which the existing vCenter Server is running) and target ESXi hosts (the cluster of ESXi hosts or the individual ESXi host on which you plan to deploy the new vCenter Server Appliance) must not be in lockdown or maintenance mode, and must not be part of fully automated DRS clusters. Ensure that DRS is switched to manual mode for both the source and target ESXi host clusters | Prepare ESXi Hosts for vCenter Server Appliance Upgrade |
If you have vSphere HA enabled clusters, ensure that the parameter ‘vCenter Server requires verified host SSL certificates’ under Configure > Settings > General > SSL settings is selected | Prepare ESXi Hosts for vCenter Server Appliance Upgrade |
Review the SSL certificates in the VMware Endpoint Certificate Store and ensure that none of the certificates are expired or invalid | https://kb.vmware.com/s/article/2111411 |

Verify that port 22 is open on the vCenter Server appliance that you want to upgrade. The upgrade process establishes an inbound SSH connection to download the exported data from the source vCenter Server appliance. Also, ensure that the SSH service/daemon is running on the source vCenter Server appliance | Prerequisites for upgrading the vCenter Server Appliance |
If you are upgrading a vCenter Server appliance that is configured with Update Manager, run the Migration Assistant on the source Update Manager computer | Prerequisites for upgrading the vCenter Server Appliance |
Verify that port 443 is open on the source ESXi host on which the appliance that you want to upgrade resides. The upgrade process establishes an HTTPS connection to the source ESXi host to verify that the source appliance is ready for upgrade and to set up an SSH connection between the new and the existing appliance | Prerequisites for upgrading the vCenter Server Appliance |
When upgrading, the temporary vCenter Server instance requires the same access rights to port 443 as the permanent vCenter Server instance. Ensure that any firewalls in your environment allow both the temporary and permanent vCenter Server instances to access port 443 | Prerequisites for upgrading the vCenter Server Appliance |
Verify that the new appliance can connect to the source ESXi host or vCenter Server instance on which the appliance that you want to upgrade resides | Prerequisites for upgrading the vCenter Server Appliance |
Now that the prerequisites are covered, let us look at the information required to perform the upgrade. It would be good to keep all the required information handy, so that you don’t waste time looking for it during the change window. Once again, it would be good to prepare a table with the list of required information.

Deployment types | Information required | Default value
All deployment types | FQDN or IP address of the source appliance that you want to upgrade |
All deployment types | HTTPS port of the source appliance | 443
All deployment types | vCenter Single Sign-On administrator user name of the source appliance | administrator@vsphere.local
All deployment types | Password of the vCenter Single Sign-On administrator user |
All deployment types | Password of the root user of the source appliance |

All deployment types | FQDN or IP address of the source server on which the vCenter Server appliance that you want to upgrade resides |
All deployment types | HTTPS port of the source server | 443
All deployment types | If your source server is an ESXi host, use root. If your source server is a vCenter Server instance, use user_name@your_domain_name, for example administrator@vsphere.local. The source server cannot be the vCenter Server appliance that you want to upgrade; in such cases, use the source ESXi host |
All deployment types | root password for the source ESXi host, or administrator user password for the source vCenter Server |

All deployment types | FQDN or IP address of the target server on which you want to deploy the new vCenter Server appliance |
All deployment types | HTTPS port of the target server | 443
All deployment types | If your target server is an ESXi host, use root. If your target server is a vCenter Server instance, use user_name@your_domain_name, for example administrator@vsphere.local. If the network port group that you are planning to use for the new vCenter appliance is on a vSphere Distributed Switch, provide the FQDN or IP address of your existing vCenter Server instance as the target server |
All deployment types | root password for the ESXi host, or administrator user password for your vCenter Server |

All deployment types | Only if your target server is a vCenter Server instance: the data center from the vCenter Server inventory on which you want to deploy the new appliance |
All deployment types | Optionally, a data center folder |
All deployment types | ESXi host or DRS cluster from the data center inventory on which you want to deploy the new appliance |

All deployment types | The virtual machine name for the new vCenter Server appliance. Must not contain a percent sign (%), backslash (\), or forward slash (/), and must be no more than 80 characters in length | VMware vCenter Server Appliance

All deployment types | Password for the root user of the new vCenter Server appliance operating system. Must contain only characters from the lower ASCII set without spaces, be at least 8 but no more than 20 characters in length, and contain at least one uppercase letter, one lowercase letter, one number, and one special character, for example a dollar sign ($), hash key (#), at sign (@), period (.), or exclamation mark (!) |

vCenter Server appliance 6.5 or 6.7 with an embedded or external Platform Services Controller | Deployment size of the new vCenter Server appliance for your vSphere environment (see the sizes below) |
  Tiny: 2 CPUs and 12 GB of memory; suitable for environments with up to 10 hosts or 100 virtual machines.
  Small: 4 CPUs and 19 GB of memory; suitable for environments with up to 100 hosts or 1,000 virtual machines.
  Medium: 8 CPUs and 28 GB of memory; suitable for environments with up to 400 hosts or 4,000 virtual machines.
  Large: 16 CPUs and 37 GB of memory; suitable for environments with up to 1,000 hosts or 10,000 virtual machines.
  X-Large: 24 CPUs and 56 GB of memory; suitable for environments with up to 2,000 hosts or 35,000 virtual machines.

vCenter Server appliance 6.5 or 6.7 with an external Platform Services Controller | Storage size of the new vCenter Server appliance for your vSphere environment. Increase the default storage size if you want a larger volume for SEAT data (stats, events, alarms, and tasks) | Default
  Default: 415 GB (tiny), 480 GB (small), 700 GB (medium), 1065 GB (large), 1805 GB (x-large).
  Large: 1490 GB (tiny), 1535 GB (small), 1700 GB (medium), 1765 GB (large), 1905 GB (x-large).
  X-Large: 3245 GB (tiny), 3295 GB (small), 3460 GB (medium), 3525 GB (large), 3665 GB (x-large).
  Note: the sizing algorithm used by the upgrade installer might select a larger storage size for your environment. Items that might affect the storage size selected by the installer include modifications to the vCenter Server appliance disks (for example, changing the size of the logging partition), or a database table that the installer determines to be exceptionally large and to require additional hard disk space.

All deployment types | Name of the datastore on which you want to store the configuration files and virtual disks of the new appliance. The installer displays a list of datastores that are accessible from your target server |
All deployment types | Enable or disable Thin Disk Mode | Disabled

All deployment types | Name of the network (port group) to which to connect the new appliance. The installer displays a drop-down menu with networks that depend on the network settings of your target server. If you are deploying the appliance directly on an ESXi host, non-ephemeral distributed virtual port groups are not supported and are not displayed in the drop-down menu |
All deployment types | IP version for the appliance temporary address | IPv4
All deployment types | IP assignment for the appliance temporary address (static or DHCP) | Static

All deployment types | Only if you use a static assignment for the temporary IP address: the temporary system name (FQDN or IP address), temporary IP address, default gateway, and DNS servers separated by commas. The system name is used for managing the local system and must be an FQDN; if a DNS server is not available, provide a static IP address |

vCenter Server appliance 6.5 or 6.7 with an embedded or external Platform Services Controller | Data types to transfer from the old appliance to the new appliance. In addition to the configuration data, you can transfer the events, tasks, and performance metrics. Note: for minimum upgrade time and storage requirements on the new appliance, select to transfer only the configuration data |

vCenter Server appliance 6.5 or 6.7 with an embedded Platform Services Controller | Join or do not participate in the VMware Customer Experience Improvement Program (CEIP). For information about the CEIP, see the Configuring Customer Experience Improvement Program section in vCenter Server and Host Management | Join the CEIP

Upgrade Procedure

With all the prerequisites and required information in place, let’s begin with the upgrade.

My existing vCenter Server appliance is running version 6.7 with an embedded PSC, so the steps below describe the procedure to upgrade a vCenter Server appliance 6.7 with embedded PSC to vCenter Server appliance 7.0 U1.

  • As a first step, download the vCenter Server ISO to the network client machine
  • Right click on the ISO file and mount the vCenter Server ISO
  • Once the ISO is mounted, navigate to the win32 directory under vcsa-ui-installer directory and double click on the installer
  • Once a window with the options below appears, click ‘Upgrade’.

  • Click on ‘Deploy vCenter Server’ to start with the deployment of the new vCenter Server appliance
  • Accept the end user license agreement
  • Provide the details of the source vCenter Server that you want to upgrade
  • The installer will present a warning with a certificate thumbprint of the ESXi server that hosts the source vCenter Server. Click ‘YES’ to accept the certificate warning and continue with the next step.
  • Specify the target server settings for deploying a new vCenter Server Appliance.
  • Click ‘YES’ to accept the certificate warning for the target server
  • Select the datacenter or folder to host the virtual machine of the new vCenter Server appliance
  • Select a compute resource to deploy the vCenter Server appliance
  • Specify the name and password for the new vCenter Server virtual appliance
  • Select deployment size for the new vCenter Server Appliance
  • Select a datastore for the new vCenter Server appliance. Do not enable ‘Thin Disk mode’ for production environment.
  • Provide the network settings for the new vCenter Server appliance.
  • In the Ready to complete stage 1 page, review the settings and click next
  • The vCenter Server installer should now start with the deployment of new vCenter Server appliance
  • Allow the installer to deploy the vCenter Server appliance
  • Once the new vCenter Server is deployed, click on continue to proceed with the stage 2 of the upgrade process.
  • Click on ‘NEXT’ to proceed with the stage 2 of the upgrade process. In stage 2, the installer copies the selected data (Configuration and inventory, Performance metrics, Tasks and Events) of the source vCenter Server, along with the network settings, to the target vCenter Server appliance and eventually shuts down the source vCenter Server.
  • Allow the installer to run a few pre-upgrade checks.
  • Review the pre-upgrade check results

  • Select the data that you want to copy from the Source vCenter Server appliance to target vCenter Server appliance
  • Select the option below, if you are willing to participate in VMware’s customer experience improvement program.
  • In the Ready to complete page, review the settings and click FINISH

  • Acknowledge the warning below, which states that the source vCenter Server will shut down once the network configuration copied from it is enabled on the target vCenter Server.
  • As soon as you acknowledge the warning above, the upgrade process is initiated.
  • In stage 2, the source vCenter Server data is copied to the target vCenter Server.
  • Once the required services are started on the target vCenter Server, a warning appears notifying you that, by default, vCenter Server 7.0 disables the use of the TLS 1.0 and TLS 1.1 protocols.
  • Once all the required data is imported into the new vCenter Server appliance, the upgraded vCenter Server is accessible using the same IP address and FQDN as the old vCenter Server.
This concludes the upgrade process.

The upgrade is successfully done!! However, hang on, it’s not party time yet. We need to go through a few post-upgrade checks.

Post upgrade checks

Once the upgrade is done, we need to perform a few post-upgrade checks to ensure that everything is functional from a vSphere standpoint.

  • Log in to the vSphere Client using the same IP address or FQDN as the existing vCenter Server and confirm the version and build number of the vCenter Server (a CLI/API alternative is sketched after this list).
  • Check the version of vSphere Client
  • Check for any alarms or alerts. In my lab, after the vCenter Server upgrade, the error below was logged for the vCLS (1) VM.

vSphere Cluster Services (vCLS) is a new feature in vSphere 7.0 Update 1. This feature ensures that cluster services such as vSphere DRS and vSphere HA are available to maintain the resources and health of the workloads running in the clusters, independent of the vCenter Server instance’s availability. For further details on the vSphere Cluster Service VMs, please refer to VMware KB 80472 and vSphere Clustering Service in vSphere 7.0 U1.

Do not attempt to power on the vCLS VM manually. The vCLS VMs are managed by vSphere Cluster Service. For more details, please refer VMware KB 79892

  • Take a snapshot of the vCLS VM and change the hardware compatibility of the vCLS VM to VM hardware version 14 (compatible with ESXi 6.7 and later).

  • Navigate to the configure>VMware EVC tab and disable the EVC for the vCLS VM
  • Once EVC is disabled, the vSphere Cluster Service will power on the vCLS (1) VM and will deploy two more VMs, vCLS (2) and vCLS (3). Repeat the same procedure to address the power-on issues with vCLS (2) and vCLS (3).
  • Navigate to the Home>Administration>Licensing and add a new upgraded license key for vCenter Server 7.
  • Assign a new license key to the vCenter Server instance
  • Check if all the integrated VMware components (vROps, vRLI, vRA, NSX etc) and external third party solutions (backup, monitoring solution etc) are working as expected.
  • Login to the vCenter Server Appliance Management console and check the health of the vCenter Server. Check the status of all vCenter Server services

  • It would be good to take image-based and file-based backups of the vCenter Server appliance, so that you have a recent copy of the appliance immediately after the upgrade.
  • DRS was switched to manual mode during the upgrade process, so switch DRS back to fully automated mode.
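
As an optional CLI/API alternative to the version check in the first item of this list, the appliance REST API can be queried from any machine with access to the vCenter Server; the FQDN, credentials and session ID below are placeholders, and the endpoints are the vSphere 6.5/7.0 REST paths I have used, so verify them against the API reference for your build.

curl -sk -u 'administrator@vsphere.local' -X POST https://vcsa01.lab.local/rest/com/vmware/cis/session
curl -sk -H "vmware-api-session-id: <session-id>" https://vcsa01.lab.local/rest/appliance/system/version

The first call returns a session token; the second returns the appliance version and build number.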

Finally, this concludes the upgrade process and yes, you can party now 🙂

Thank you for reading through my blog. Please feel free to provide your feedback in the comments section below.

Media Optimization for Microsoft Teams

Media Optimization for Microsoft Teams leverages WebRTC (Web Real-Time Communication) features and offloads the audio and video processing from the virtual desktop to the client machine.

I’ve summarized the steps below, which should help you understand what is required from VMware’s standpoint to configure Media Optimization for Microsoft Teams.

The brief note from the VMware TechZone article below explains how this is achieved using Horizon Client for Windows 2006 and Horizon Agent 2006.


The feature to support Media Optimization for Microsoft Teams was introduced in Horizon Client for Windows 2006 and is also incorporated in Horizon Client for Windows 5.5.

Therefore, to configure media optimization for Microsoft Teams, the Horizon Client version should be at a minimum Horizon Client for Windows 2006 (compatible with Horizon 8/2006) or Horizon Client for Windows 5.5 (compatible with Horizon 7.13).


High level steps to configure Media Optimization for Microsoft Teams

  • The Horizon platform should be running Horizon version 7.13 or Horizon 8 (2006, as per the new versioning standard).
  • Download Horizon Client for Windows 2006 or Horizon Client for Windows 5.5 and, during the installation, select custom installation and scroll down to select “Media Optimization for Microsoft Teams”.
  • After the installation, reboot the Windows client.
  • The code in the Horizon Agent is also installed by default, but it is controlled with a GPO, which is not enabled by default. Media optimization for Microsoft Teams is not supported in Horizon Agent 7.12 or earlier, so the Horizon Agent version should be at a minimum 7.13 or Horizon Agent 2006.
  • Download the Horizon GPO bundle and use the ADM template files to enable the relevant GPO.  
  • The GPO can be enabled using the Group Policy Editor by navigating to Computer Configuration > Administrative Templates > VMware View Agent Configuration > VMware HTML5 Features > VMware WebRTC Redirection Features > Enable Media Optimization for Microsoft Teams. After setting this policy, you must log off from the Horizon desktop for the GPO policy to take effect.
  • In addition, the other GPOs under VMware WebRTC Redirection Features can be configured based on your requirements.

Key points from the Microsoft article Microsoft Teams for Virtual Desktop Infrastructure

  • You can deploy the Teams desktop app for VDI using a per-machine or per-user installation with the MSI package. Deciding which approach to use depends on whether you use a persistent or non-persistent setup and the associated functionality needs of your organization.
  • In a dedicated persistent setup, users’ local operating system changes are retained after users log off. For persistent setup, Teams supports both per-user and per-machine installation.
  • In a non-persistent setup, users’ local operating system changes are not retained after users log off. Such setups are commonly shared multi-user sessions. VM configuration varies based on the number of users and available physical box resources.
  • With per-machine installation, automatic updates are disabled. This means that to update the Teams app, you must uninstall the current version to update to a newer version. With per-user installation, automatic updates are enabled. For most VDI deployments, Microsoft recommends deploying Teams using per-machine installation.
  • To update to the latest Teams version, start with the uninstall procedure followed by latest Teams version deployment.
  • For Teams AV optimization in VDI environments to work properly, the thin client endpoint must have access to the internet. If internet access isn’t available at the thin client endpoint, optimization startup won’t be successful. This means that the user is in a non-optimized media state.

High level steps to install Teams on VDI

  • The minimum version of the Teams desktop app that’s required is version 1.3.00.4461. (PSTN hold isn’t supported in earlier versions.)
  • Install the MSI to the VDI VM by running one of the following commands:
  • Per-user installation (default)
msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSERS=1
 
This process is the default installation, which installs Teams to the %AppData% user folder. At this point, the golden image setup is complete. Teams won't work properly with per-user installation on a non-persistent setup.
  • Per-machine installation
msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSER=1 ALLUSERS=1

This process installs Teams to the Program Files (x86) folder on a 64-bit operating system and to the Program Files folder on a 32-bit operating system. At this point, the golden image setup is complete. Installing Teams per-machine is required for non-persistent setups.
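
For example, a per-machine install on a golden image might look like the line below; the MSI path and log file name are placeholders for your environment.

msiexec /i "C:\Install\Teams_windows_x64.msi" /l*v "C:\Install\teams_install.log" ALLUSER=1 ALLUSERS=1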


IMP NOTE – For beta testing, VMware built support for Microsoft Teams into the Horizon Client for Windows versions 5.3, 5.4, 5.4.1, 5.4.2, and 5.4.3. If you enable the optimization GPO in the virtual desktop, these clients, although not officially supported, will begin implementing offload. The bugs we found in these clients during beta testing are fixed in Horizon Client for Windows version 2006 or later, which is officially supported and which VMware recommends using.

Procedure to check if Microsoft Teams is running in optimized mode

A user can check if Microsoft Teams is running in optimized mode, fallback mode, or natively (no optimization) in the virtual desktop. In the top-right corner of the Microsoft Teams interface, click the user icon and navigate to About > Version to see a banner under the user icon describing the Microsoft Teams version and pairing modes:

  • Optimized – If the banner shows VMware Media Optimized, the Enable Media Optimization for Microsoft Teams GPO is enabled, Microsoft Teams is running in the virtual desktop, and audio and video have been offloaded to the client machine.

  • Fallback – If the banner shows VMware Media Not Connected, then Microsoft Teams is running in fallback mode. In this mode, the Enable Media Optimization for Microsoft Teams GPO is enabled, and Microsoft Teams has tried to start in optimized mode, but the version of Horizon Client being used does not support Microsoft Teams optimization. Audio and video from Microsoft Teams is not offloaded to the client machine. Fallback mode has the same limitations as optimized mode. When you make a call in fallback mode, you see a warning message on the call:

Your device doesn’t support connection via VMware. Audio and video quality may be reduced

  • No optimization – If the banner does not show VMware text in the message, the Enable Media Optimization for Microsoft Teams GPO is not enabled. Audio and video from Microsoft Teams is not offloaded to the client machine.

The VMware TechZone article Microsoft Teams Optimization with VMware Horizon covers the detailed procedure to configure Media Optimization for Microsoft Teams, and the Microsoft article Microsoft Teams on VDI covers important considerations and procedures from Microsoft’s standpoint.