Kubernetes cluster overview

Before we get into the various components of a Kubernetes cluster, it is important to understand what a pod is. Kubernetes encapsulates containers into a Kubernetes object called a pod. A pod is a single instance of a containerised application running on a worker node of the Kubernetes cluster. If the demand for a given containerised application increases, Kubernetes spins up more pods on the same worker node or on other worker nodes in the cluster. If the demand for an application goes down, the number of pods can be scaled down to free up resources on the worker nodes. In most cases, there is a one-to-one relationship between a container and a pod, i.e. only one container is encapsulated in a pod, except in cases where a sidecar or helper container is required by an application to perform an additional task. So, why does Kubernetes encapsulate containers in an object called a pod, when a container can be run directly on a host or a worker node? Well, it becomes far easier for Kubernetes to manage and orchestrate containers when they are encapsulated inside a pod. For instance, pods can easily be scaled up, scaled down, deployed, destroyed, and assigned storage and network resources based on the requirement. With this high-level understanding of a pod, let us now dive deep into the various components of a Kubernetes cluster.
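
As a quick, hedged illustration of these ideas (all names below are made up for the example), a single-container pod can be created imperatively with kubectl, and the number of pods backing an application can be scaled up or down with demand:

# Create a single-container pod (the common one-to-one container-to-pod case)
kubectl run nginx-pod --image=nginx

# Create a deployment and scale its pod replicas up or down as demand changes
kubectl create deployment nginx-app --image=nginx
kubectl scale deployment nginx-app --replicas=5
kubectl scale deployment nginx-app --replicas=2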

The Kubernetes cluster comprises a Master node and one or more worker nodes. The Master node hosts the various control plane components required to manage and administer the Kubernetes cluster, whereas the worker nodes are responsible for hosting pods. Some of the key functions performed by the Master node are listed below.

  • The Master node is responsible for the management and administration of the Kubernetes cluster.
  • It determines which containers need to be scheduled on which of the worker nodes.
  • It stores the complete inventory of the cluster.
  • It monitors the worker nodes and the containers running on them.

The Master node does all of this using a set of control plane components, detailed below.

Kube-apiserver – The kube-apiserver is responsible for orchestrating all operations within the Kubernetes cluster. It exposes the Kubernetes API, which is used by administrators to perform management operations on the cluster. kubectl is a client utility used to interact with the kube-apiserver. When a user enters a kubectl command to create a pod, the request is first authenticated and validated, and subsequently executed by the kube-apiserver. The kube-apiserver also updates the ETCD datastore with the information about the new pod.
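
As a small, hedged example (the pod name is illustrative), each of the kubectl commands below turns into an authenticated, validated API call served by the kube-apiserver:

# kubectl reads the API server endpoint and credentials from ~/.kube/config
kubectl config view --minify

# This create request is authenticated, validated and executed by the kube-apiserver
kubectl run test-pod --image=nginx

# The same API can also be called directly through kubectl's raw mode
kubectl get --raw /api/v1/namespaces/default/pods | head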

If a Kubernetes cluster is deployed using the kubeadm tool, the control plane components, including the kube-apiserver, are deployed as pods in the kube-system namespace. In such a case, the pod definition file /etc/kubernetes/manifests/kube-apiserver.yaml defines all the options with which the kube-apiserver is invoked. If a Kubernetes cluster is deployed the hard way (without using the kubeadm tool) using the binaries from the Kubernetes release page, the options used to invoke the kube-apiserver are available in /etc/systemd/system/kube-apiserver.service. While the path of the kube-apiserver pod definition file or service definition file may be too much detail for a high-level overview, I thought of covering it here, as such detail usually comes in handy while troubleshooting cluster issues. Moreover, the given pod definition file path and service definition file path hold true for the other control plane components as well, with the kubelet being an exception: the kubelet is not deployed as a pod by kubeadm and has to be installed separately, for example using the binaries from the Kubernetes release page, and run as a service.
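
A quick way to see this on a live cluster (assuming the default kubeadm paths mentioned above):

# kubeadm-based cluster: control plane components run as static pods
ls /etc/kubernetes/manifests/
grep -A 30 'command:' /etc/kubernetes/manifests/kube-apiserver.yaml

# Cluster deployed the hard way: inspect the systemd unit instead
cat /etc/systemd/system/kube-apiserver.service
systemctl status kube-apiserver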

ETCD – ETCD is a key-value datastore that stores the complete cluster inventory in the form of key-value pairs, including information such as nodes, pods, configs, secrets, roles, bindings, etc. ETCDCTL is the ETCD client, a command-line utility used to interact with the ETCD key-value datastore.
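
As a hedged sketch, on a kubeadm-based cluster the datastore can be queried with etcdctl as shown below; the certificate paths are the typical kubeadm defaults and may differ in your cluster:

# List the keys stored by Kubernetes in ETCD (adjust certificate paths as needed)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head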

Kube-scheduler – As the name suggests, the kube-scheduler determines which pods need to be placed on which worker nodes, based on the pod's resource requirements, the worker node capacity, and any other policies or constraints such as taints and tolerations or node affinity rules. It is worth noting that the kube-scheduler only decides which pod goes on which node; it does not actually deploy the pod on the node. That is done by the kubelet.
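
A minimal sketch of how such constraints look in practice (the label, taint values and pod name are illustrative): the pod below asks the kube-scheduler for a node labelled disktype=ssd and tolerates an env=prod:NoSchedule taint.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  nodeSelector:
    disktype: ssd        # only nodes labelled disktype=ssd are considered
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule" # allows placement on nodes tainted env=prod:NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF

# Confirm which node the kube-scheduler picked
kubectl get pod scheduled-pod -o wide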

Controller Manager – The controller manager runs a collection of controllers; two of the most important ones are described below:

  • Node controller – The Node controller is responsible for onboarding new nodes into the cluster and for handling situations where nodes become unavailable. The node controller checks the status of the nodes every 5 seconds (node monitor period). If no heartbeat is received from a node for 40 seconds (node monitor grace period), the node controller marks the node as unreachable. After a node is marked unreachable, the node controller waits for 5 minutes (pod eviction timeout) for the node to come back up. If the affected node does not come up within 5 minutes, the node controller evicts the pods which are part of a replication group from the affected node and provisions them onto the healthy nodes in the cluster (see the sketch after this list for how to inspect these timers).
  • Replication controller – The Replication controller ensures that the desired number of pods is always running in a given replication group. If a pod within a replication group dies, the replication controller creates another one.
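
On a kubeadm-based cluster, the timers mentioned above map to kube-controller-manager flags. The quick check below is a hedged sketch; on many clusters these flags are absent from the manifest, in which case the stated defaults apply:

# Check whether the node controller timers have been overridden
grep -E 'node-monitor-period|node-monitor-grace-period|pod-eviction-timeout' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml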

Kubelet – The kubelet runs on every worker node. It listens for instructions from the kube-apiserver running on the Master node and deploys or destroys pods on the worker node as required. When the kubelet receives a request from the kube-apiserver to create a pod, it instructs the container runtime engine, which could be Docker, to pull the required image and run an instance of the pod. The kube-apiserver periodically fetches status reports from the kubelet to monitor the pods running on a node. The kubelet service also runs on the Master node to manage the control plane pods (control plane components that are deployed as pods) and the pods running services like networking and DNS. The kubelet also registers a worker node with the Kubernetes cluster. Once a worker node is registered, it is monitored by the node controller on the Master node. Unlike the other components, the kubelet is not installed by the kubeadm tool. In order to install the kubelet, you need to download the kubelet binary from the Kubernetes release page and run it as a service.
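
Since the kubelet runs as a systemd service, a couple of standard commands go a long way while troubleshooting it:

# Check whether the kubelet service is running on the node
systemctl status kubelet

# Follow the kubelet logs while investigating pod creation issues
journalctl -u kubelet -f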

Kube-proxy – The kube-proxy is responsible for enabling communication between the worker nodes. For instance, an application container hosted on one worker node communicates with a database container on another worker node by leveraging a service, with kube-proxy forwarding the traffic. In a Kubernetes cluster, the pods running custom applications, databases or web servers are exposed by means of a service (a ClusterIP, NodePort or LoadBalancer service). A service is a logical entity, created in memory, that is accessible on all nodes of the Kubernetes cluster. The kube-proxy component on each node is responsible for forwarding any traffic sent to the service to the backend pods.
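
For instance (names are illustrative), a deployment can be exposed through a ClusterIP service, after which kube-proxy on every node programs the iptables forwarding rules (in the default mode) that deliver service traffic to the backend pods:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=ClusterIP
kubectl get svc web

# On a node, inspect the NAT rules programmed by kube-proxy (iptables mode)
sudo iptables -t nat -L KUBE-SERVICES | head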

Networking and DNS services – The services responsible for managing networking and DNS requirements of the cluster are usually deployed as pods on the Master node.

Let’s upgrade to vCenter Server 7.0 U1 – Step-by-Step guide

Hi Folks,

The vSphere 7 release has been a game changer for VMware in many ways. vSphere 7 comes with quite a few interesting features, like support for containers and the ability to run fully compliant and conformant Kubernetes with vSphere, an improved DRS algorithm which takes VM “happiness” into account while load balancing cluster resources, vSphere Lifecycle Manager for better lifecycle operations, and much more. For the complete list, please refer to Introducing vSphere 7.

As most customers are looking forward to upgrading their vSphere platform to vSphere 7.0, I thought it would help to capture the prerequisites and the complete procedure for upgrading vCenter Server to 7.0 U1 in a blog.

When we plan an upgrade of any software component, the first thing that comes to mind is the prerequisites. Are all the important prerequisites, like backups, licenses, compute resources and storage resources, in place to support the upgrade process? So, let us first look at the prerequisites for upgrading vCenter Server to vCenter Server 7.0 U1.

I usually prefer to prepare a checklist of prerequisites, as detailed below, as it allows me to track the status of each prerequisite. This way, I am sure that I have not missed any of the key prerequisites. While it’s a long list, trust me, if you get your prerequisites right, you can be almost certain that your upgrade process will go smoothly.

Prerequisites checklist (with a reference link for each item; track the status of each as you go):

  • Go through the release notes of vCenter Server 7.0 U1, specifically the ‘What’s New’ and ‘Known Issues’ sections. (Reference: vCenter Server 7.0 U1 Release Notes)
  • If you are running a vCenter Server Appliance, take a file-based backup of the vCenter Server configuration from within the vCenter Server Appliance Management console. (Reference: vCenter Server Appliance File-based backup)
  • Use the VMware interop matrix to check the interoperability of vCenter Server 7.0 U1 with the other VMware components that are integrated with your existing vCenter Server. (Reference: VMware Interoperability Matrix)
  • Check the compatibility of vCenter Server 7.0 U1 with any third-party products, such as backup or monitoring solutions, which you may have integrated with your existing vCenter Server. (Reference: N/A)
  • Confirm the supported upgrade path for the vCenter Server upgrade. (Reference: vCenter Server 7.0 supported upgrade path)
  • Raise a proactive SR with the VMware support team. (Reference: N/A)
  • Shut down the vCenter Server and take an image-based backup of the vCenter Server appliance in the powered-off state. After the backup, power on the vCenter Server Appliance. (Reference: N/A)
  • Download the vCenter Server 7.0 U1 ISO image from the vCenter Server download site. (Reference: vCenter Server 7.0 U1 download)
  • In order to perform the upgrade, you need to run the vCenter Server installer from a network client machine which has connectivity to the existing vCenter Server and the ESXi hosts. For optimal performance of the GUI and CLI vCenter Server installers, the network client machine should meet the minimum hardware requirements. (Reference: System Requirements for the vCenter Server Installer)
  • The upgrade of the appliance is a migration from the old version to the new version, which includes deploying a new appliance of version 7.0. Unsynchronised clocks can result in authentication problems, which can cause the installation to fail or prevent the vCenter Server vmware-vpxd service from starting. Verify that all components on the vSphere network have their clocks synchronised. (Reference: Synchronising Clocks on the vSphere Network)
  • When you use Fully Qualified Domain Names, verify that the client machine from which you are deploying the vCenter appliance and the network on which you are deploying the appliance use the same DNS server. (Reference: N/A)
  • Check the supported deployment topology for vCenter Server. (Reference: Moving from a Deprecated to a Supported vCenter Server Deployment Topology Before Upgrade or Migration)
  • Export the vSphere Distributed Switch configuration. (Reference: https://kb.vmware.com/s/article/2034602)
  • The vSphere 5.x, 6.x and 7.x releases require different license keys. It is not possible to use vSphere 5.x licenses on vSphere 6.x, and the same is true when upgrading from vSphere 6.x to 7.x. Therefore, upgrade the existing vCenter Server 6.x license key. (Reference: https://kb.vmware.com/s/article/2107538)
  • For information on upgrading license keys, please refer to KB 81665. (Reference: https://kb.vmware.com/s/article/81665)
  • When you deploy the vCenter Server appliance, you can select to deploy an appliance that is suitable for the size of your vSphere environment. Ensure that you have sufficient CPU and memory resources on the ESXi host on which you plan to deploy the new vCenter Server Appliance. (Reference: Hardware Requirements for the vCenter Server Appliance)
  • The ESXi host or DRS cluster on which you deploy the appliance must meet minimum storage requirements. Ensure that you have sufficient disk space on the datastore on which you plan to host the vCenter Server Appliance disks. The given storage requirements include the requirements for the vSphere Lifecycle Manager, which runs as a service in the vCenter Server appliance. (Reference: Storage Requirements for the vCenter Server Appliance)
  • The VMware vCenter Server appliance can be deployed on ESXi 6.5 hosts or later, or on vCenter Server instances 6.5 or later. Ensure that all the ESXi hosts in your cluster are running ESXi 6.5 or later. (Reference: Software Requirements for the vCenter Server Appliance)
  • VMware has tested the use of the vSphere Client with certain guest operating system and browser versions. Ensure that the client machines from which you plan to access the vCenter Server using the vSphere Client meet these requirements. (Reference: vSphere Client Software Requirements)
  • The vCenter Server system must be able to send data to every managed host and receive data from the vSphere Client. Though not recommended, if you are planning to deploy the vCenter Server on a different subnet from the existing vCenter Server and ESXi hosts, check that all the required ports are open to allow communication between the newly upgraded vCenter Server appliance and the existing vCenter Server and ESXi hosts. (Reference: Port requirements for vCenter Server 7)
  • The deployment of a new vCenter Server appliance requires a temporary static IP. Reserve a static IP address for the new vCenter Server appliance from the same subnet as the existing vCenter Server, as it will save you the hassle of opening firewall ports for the new appliance. (Reference: IP Requirement for the vCenter Server Appliance)
  • When you deploy the new vCenter Server appliance, you need to assign, in the temporary network settings, a static IP address and an FQDN that is resolvable by a DNS server. After the upgrade, the appliance frees this static IP address and assumes the network settings of the old appliance. Register a static IP address for the new vCenter Server appliance with an FQDN and confirm DNS resolution for both forward and reverse lookup queries (a quick lookup sketch follows this checklist). (Reference: DNS Requirements for the vCenter Server Appliance)
  • Ensure that the ESXi host management interfaces have valid DNS resolution from the existing vCenter Server and from the network client machine from which you plan to run the installer and perform the upgrade. Likewise, ensure that the existing vCenter Server has valid DNS resolution from all ESXi hosts and from the network client machine. Confirm forward and reverse lookups for both the vCenter Server and the ESXi hosts. (Reference: DNS Requirements for the vCenter Server Appliance)
  • ESXi hosts must be at version 6.5 or later. (Reference: Prepare ESXi Hosts for vCenter Server Appliance Upgrade)
  • Your source ESXi host (the host on which the existing vCenter Server is running) and target ESXi hosts (the cluster of ESXi hosts or the individual ESXi host on which you plan to deploy the new vCenter Server Appliance) must not be in lockdown or maintenance mode, and must not be part of fully automated DRS clusters. Ensure that DRS is switched to manual mode for both the source and target ESXi host clusters. (Reference: Prepare ESXi Hosts for vCenter Server Appliance Upgrade)
  • If you have vSphere HA enabled clusters, ensure that the parameter ‘vCenter Server requires verified host SSL certificates’ under Configure > Settings > General > SSL settings is selected. (Reference: Prepare ESXi Hosts for vCenter Server Appliance Upgrade)
  • Review the SSL certificates in the VMware Endpoint Certificate Store and ensure that none of the certificates are expired or invalid. (Reference: https://kb.vmware.com/s/article/2111411)
  • Verify that port 22 is open on the vCenter Server appliance that you want to upgrade. The upgrade process establishes an inbound SSH connection to download the exported data from the source vCenter Server appliance. Also, ensure that the SSH service/daemon is running on the source vCenter Server appliance. (Reference: Prerequisites for upgrading the vCenter Server Appliance)
  • If you are upgrading a vCenter Server appliance that is configured with Update Manager, run the Migration Assistant on the source Update Manager computer. (Reference: Prerequisites for upgrading the vCenter Server Appliance)
  • Verify that port 443 is open on the source ESXi host on which the appliance that you want to upgrade resides. The upgrade process establishes an HTTPS connection to the source ESXi host to verify that the source appliance is ready for the upgrade and to set up an SSH connection between the new and the existing appliance. (Reference: Prerequisites for upgrading the vCenter Server Appliance)
  • During the upgrade, the temporary vCenter Server instance requires the same access rights to port 443 as the permanent vCenter Server instance. Ensure that any firewalls in your environment allow both the temporary and permanent vCenter Server instances to access port 443. (Reference: Prerequisites for upgrading the vCenter Server Appliance)
  • Verify that the new appliance can connect to the source ESXi host or vCenter Server instance on which the appliance that you want to upgrade resides. (Reference: Prerequisites for upgrading the vCenter Server Appliance)
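
As a quick way to run the forward and reverse lookups called out in the checklist (host names and IP addresses below are illustrative):

# Forward and reverse lookup for the vCenter Server FQDN
nslookup vcenter01.example.local
nslookup 192.168.10.50

# Repeat for each ESXi host management interface
nslookup esxi01.example.local
nslookup 192.168.10.51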

Now that the prerequisites are covered, let us look at the information required to perform the upgrade. It is good to keep all the required information handy, so that you don’t waste time looking for it during the change window. Once again, it helps to prepare a list of the required information, grouped as below, along with the applicable deployment types and default values.

Source appliance details (all deployment types):
  • FQDN or IP address of the source appliance that you want to upgrade.
  • HTTPS port of the source appliance. (Default: 443)
  • vCenter Single Sign-On administrator user name of the source appliance. (Default: administrator@vsphere.local)
  • Password of the vCenter Single Sign-On administrator user.
  • Password of the root user of the source appliance.

Source server details (all deployment types):
  • FQDN or IP address of the source server on which the vCenter Server appliance that you want to upgrade resides.
  • HTTPS port of the source server. (Default: 443)
  • User name: if your source server is an ESXi host, use root; if your source server is a vCenter Server instance, use user_name@your_domain_name, for example administrator@vsphere.local. The source server cannot be the vCenter Server appliance that you want to upgrade; in such cases, use the source ESXi host.
  • root password for the source ESXi host, or administrator user password for the source vCenter Server.

Target server details (all deployment types):
  • FQDN or IP address of the target server on which you want to deploy the new vCenter Server appliance.
  • HTTPS port of the target server. (Default: 443)
  • User name: if your target server is an ESXi host, use root; if your target server is a vCenter Server instance, use user_name@your_domain_name, for example administrator@vsphere.local. If the network port group that you are planning to use for the new vCenter appliance is on a vSphere Distributed Switch, provide the FQDN or IP address of your existing vCenter Server instance as the target server.
  • root password for the ESXi host, or administrator user password for your vCenter Server.

Placement of the new appliance (all deployment types):
  • Only if your target server is a vCenter Server instance: the data center from the vCenter Server inventory on which you want to deploy the new appliance. Optionally, you can provide a data center folder.
  • The ESXi host or DRS cluster from the data center inventory on which you want to deploy the new appliance.

New appliance virtual machine details (all deployment types):
  • The virtual machine name for the new vCenter Server appliance. It must not contain a percent sign (%), backslash (\) or forward slash (/), and must be no more than 80 characters in length. (Default: VMware vCenter Server Appliance)
  • Password for the root user of the new vCenter Server appliance operating system. It must contain only the lower ASCII character set without spaces, be at least 8 but no more than 20 characters in length, and contain at least one uppercase letter, one lowercase letter, one number, and one special character, for example a dollar sign ($), hash key (#), at sign (@), period (.) or exclamation mark (!).

Deployment size (vCenter Server appliance 6.5 or 6.7 with an embedded or external Platform Services Controller): deployment size of the new vCenter Server appliance for your vSphere environment.
  • Tiny – deploys an appliance with 2 CPUs and 12 GB of memory; suitable for environments with up to 10 hosts or 100 virtual machines.
  • Small – deploys an appliance with 4 CPUs and 19 GB of memory; suitable for environments with up to 100 hosts or 1,000 virtual machines.
  • Medium – deploys an appliance with 8 CPUs and 28 GB of memory; suitable for environments with up to 400 hosts or 4,000 virtual machines.
  • Large – deploys an appliance with 16 CPUs and 37 GB of memory; suitable for environments with up to 1,000 hosts or 10,000 virtual machines.
  • X-Large – deploys an appliance with 24 CPUs and 56 GB of memory; suitable for environments with up to 2,000 hosts or 35,000 virtual machines.

Storage size (vCenter Server appliance 6.5 or 6.7 with an external Platform Services Controller): storage size of the new vCenter Server appliance for your vSphere environment. Increase the default storage size if you want a larger volume for SEAT data (stats, events, alarms, and tasks).
  • Default – deploys the appliance with 415 GB (tiny), 480 GB (small), 700 GB (medium), 1065 GB (large) or 1805 GB (x-large) of storage.
  • Large – deploys the appliance with 1490 GB (tiny), 1535 GB (small), 1700 GB (medium), 1765 GB (large) or 1905 GB (x-large) of storage.
  • X-Large – deploys the appliance with 3245 GB (tiny), 3295 GB (small), 3460 GB (medium), 3525 GB (large) or 3665 GB (x-large) of storage.
Note: the sizing algorithm used by the upgrade installer might select a larger storage size for your environment. Items that might affect the storage size selected by the installer include modifications to the vCenter Server appliance disks (for example, changing the size of the logging partition), or a database table that the installer determines to be exceptionally large and requiring additional hard disk space.

Datastore and network settings (all deployment types):
  • Name of the datastore on which you want to store the configuration files and virtual disks of the new appliance. The installer displays a list of datastores that are accessible from your target server.
  • Enable or disable Thin Disk Mode. (Default: Disabled)
  • Name of the network port group to which to connect the new appliance. The installer displays a drop-down menu with networks that depend on the network settings of your target server. If you are deploying the appliance directly on an ESXi host, non-ephemeral distributed virtual port groups are not supported and are not displayed in the drop-down menu.
  • IP version for the appliance temporary address. (Default: IPv4)
  • IP assignment for the appliance temporary address, static or DHCP. (Default: Static)

Temporary network settings (all deployment types; only if you use a static assignment for the temporary IP address):
  • Temporary system name (FQDN or IP address). The system name is used for managing the local system and must be an FQDN; if a DNS server is not available, provide a static IP address.
  • Temporary IP address.
  • Default gateway.
  • DNS servers, separated by commas.

Data to transfer (vCenter Server appliance 6.5 or 6.7 with an embedded or external Platform Services Controller):
  • Data types to transfer from the old appliance to the new appliance. In addition to the configuration data, you can transfer the events, tasks and performance metrics. Note: for minimum upgrade time and storage requirements for the new appliance, select to transfer only the configuration data.

Customer Experience Improvement Program (vCenter Server appliance 6.5 or 6.7 with an embedded Platform Services Controller):
  • Join or do not participate in the VMware Customer Experience Improvement Program (CEIP). For information about the CEIP, see the Configuring Customer Experience Improvement Program section in vCenter Server and Host Management. (Default: Join the CEIP)

Upgrade Procedure

With all the prerequisites and required information in place, let’s begin with the upgrade.

My existing vCenter Server appliance is running on version 6.7 with an embedded PSC. So, the steps below describe the procedure to upgrade a vCenter Server appliance 6.7 with an embedded PSC to vCenter Server appliance 7.0 U1.

  • As a first step, download the vCenter Server ISO to the network client machine.
  • Right-click the ISO file and mount the vCenter Server ISO.
  • Once the ISO is mounted, navigate to the win32 directory under the vcsa-ui-installer directory and double-click the installer.
  • When the installer window appears with the available options (Install, Upgrade, Migrate, Restore), click ‘Upgrade’.

  • Click ‘Deploy vCenter Server’ to start the deployment of the new vCenter Server appliance.
  • Accept the end user license agreement.
  • Provide the details of the source vCenter Server that you want to upgrade.
  • The installer will present a warning with the certificate thumbprint of the ESXi server that hosts the source vCenter Server. Click ‘YES’ to accept the certificate warning and continue with the next step.
  • Specify the target server settings for deploying the new vCenter Server Appliance.
  • Click ‘YES’ to accept the certificate warning for the target server.
  • Select the data center or folder to host the virtual machine of the new vCenter Server appliance.
  • Select a compute resource to deploy the vCenter Server appliance.
  • Specify the name and password for the new vCenter Server virtual appliance.
  • Select the deployment size for the new vCenter Server Appliance.
  • Select a datastore for the new vCenter Server appliance. Do not enable ‘Thin Disk mode’ for production environments.
  • Provide the network settings for the new vCenter Server appliance.
  • On the ‘Ready to complete stage 1’ page, review the settings and click NEXT.
  • The vCenter Server installer should now start with the deployment of the new vCenter Server appliance.
  • Allow the installer to deploy the vCenter Server appliance.
  • Once the new vCenter Server is deployed, click CONTINUE to proceed with stage 2 of the upgrade process.
  • Click NEXT to proceed with stage 2 of the upgrade process. In stage 2, the installer copies the selected data (configuration and inventory, performance metrics, tasks and events) from the source vCenter Server, along with the network settings, to the target vCenter Server appliance, and eventually shuts down the source vCenter Server.
  • Allow the installer to run a few pre-upgrade checks.
  • Review the pre-upgrade check results.

  • Select the data that you want to copy from the source vCenter Server appliance to the target vCenter Server appliance.
  • Select the option below if you are willing to participate in VMware’s Customer Experience Improvement Program.
  • On the ‘Ready to complete’ page, review the settings and click FINISH.

  • Acknowledge the warning, which states that the source vCenter Server will shut down once the network configuration copied from it is enabled on the target vCenter Server.
  • As soon as you acknowledge the warning, the upgrade process is initiated.
  • In stage 2, the source vCenter Server data is copied to the target vCenter Server.
  • Once the required services are started on the target vCenter Server, a warning appears notifying you that, by default, vCenter Server 7.0 disables the use of the TLS 1.0 and TLS 1.1 protocols.
  • Once all the required data is imported into the new vCenter Server appliance, the upgraded vCenter Server is accessible using the same IP address and FQDN as the old vCenter Server.
This concludes the successful completion of the upgrade process.

The upgrade is successfully done! However, hang on, it’s not party time yet. We need to go through a few post-upgrade checks.

Post upgrade checks

Once the upgrade is done, we need to perform a few post-upgrade checks to ensure that everything is functional from a vSphere standpoint.

  • Log in to the vSphere Client using the same IP address or FQDN as the existing vCenter Server and confirm the version and build number of the vCenter Server (see the sketch after this list of checks).
  • Check the version of the vSphere Client.
  • Check for any alarms or alerts. In my lab, after the vCenter Server upgrade, an error was logged for the vCLS (1) VM.

vSphere Cluster Services (vCLS) is a new feature in vSphere 7.0 Update 1. This feature ensures that cluster services such as vSphere DRS and vSphere HA remain available to maintain the resources and health of the workloads running in the clusters, independent of the availability of the vCenter Server instance. For further details on the vSphere Cluster Service VMs, please refer to VMware KB 80472 and vSphere Clustering Service in vSphere 7.0 U1.

Do not attempt to power on the vCLS VMs manually. The vCLS VMs are managed by the vSphere Cluster Service. For more details, please refer to VMware KB 79892.

  • Take a snapshot of the vCLS VM and change the hardware compatibility of the vCLS VM to VM hardware version 14 (compatible with ESXi 6.7 and later).

  • Navigate to the Configure > VMware EVC tab and disable EVC for the vCLS VM.
  • Once EVC is disabled, the vSphere Cluster Service will power on the vCLS (1) VM and deploy two more VMs, vCLS (2) and vCLS (3). Repeat the same procedure to address the power-on issues with vCLS (2) and vCLS (3).
  • Navigate to Home > Administration > Licensing and add the new upgraded license key for vCenter Server 7.
  • Assign the new license key to the vCenter Server instance.
  • Check that all the integrated VMware components (vROps, vRLI, vRA, NSX, etc.) and external third-party solutions (backup, monitoring, etc.) are working as expected.
  • Log in to the vCenter Server Appliance Management console and check the health of the vCenter Server, along with the status of all vCenter Server services.
  • It is good to take an image-based and a file-based backup of the vCenter Server appliance, so that we have a current copy of the appliance immediately after the upgrade.
  • DRS was switched to manual mode during the upgrade process, so switch DRS back to fully automated mode.
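
As referenced in the first check above, the version and build number can also be confirmed from a shell using the vCenter REST API; the sketch below is illustrative (the host name and credentials are placeholders):

# Obtain an API session token from the upgraded vCenter Server
curl -k -X POST -u 'administrator@vsphere.local:<password>' \
  https://vcenter01.example.local/rest/com/vmware/cis/session

# Use the returned token to read the appliance version and build number
curl -k -H 'vmware-api-session-id: <token>' \
  https://vcenter01.example.local/rest/appliance/system/version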

Finally, this concludes the upgrade process and yes, you can party now 🙂

Thank you for reading through my blog. Please feel free to provide your feedback in the comments section below.

Media Optimization for Microsoft Teams

Media Optimization for Microsoft Teams leverages WebRTC (Web Real-Time Communication) features and offloads the audio and video processing from the virtual desktop to the client machine.

I’ve summarized the steps below, which should help you understand what is required from VMware’s standpoint to configure Media Optimization for Microsoft Teams.

The VMware TechZone article explains how this is achieved using the Horizon Client for Windows 2006 and Horizon Agent 2006 versions.

The feature to support Media Optimization for Microsoft Teams was introduced in Horizon Client for Windows 2006 and is also incorporated in Horizon Client for Windows 5.5.

Therefore, to configure Media Optimization for Microsoft Teams, the Horizon Client version should be at minimum Horizon Client for Windows 2006 (compatible with Horizon 8/2006) or Horizon Client for Windows 5.5 (compatible with Horizon 7.13).

High-level steps to configure Media Optimization for Microsoft Teams

  • The Horizon platform should be running Horizon 7.13 or Horizon 8 (2006, as per the new versioning standard).
  • Download Horizon Client for Windows 2006 or Horizon Client for Windows 5.5 and, during the installation, select custom installation and scroll down to select “Media Optimization for Microsoft Teams”.
  • After the installation, reboot the Windows client.
  • The code in the Horizon Agent is also installed by default, but it is controlled with a GPO, which is not enabled by default. Media optimization for Microsoft Teams is not supported with Horizon Agent 7.12 or earlier, so the Horizon Agent version should be at minimum 7.13 or Horizon Agent 2006.
  • Download the Horizon GPO bundle and use the ADM template files to enable the relevant GPO.  
  • The GPO can be enabled using the Group Policy Editor by navigating to Computer Configuration > Administrative Templates > VMware View Agent Configuration > VMware HTML5 Features > VMware WebRTC Redirection Features > Enable Media Optimization for Microsoft Teams. After setting this policy, you must log off from the Horizon desktop for the GPO policy to take effect.
  • In addition, the other GPOs under VMware WebRTC Redirection Features can be configured based on your requirements.

Key points from the Microsoft article Microsoft Teams for Virtual Desktop Infrastructure

  • You can deploy the Teams desktop app for VDI using a per-machine or per-user installation with the MSI package. The choice depends on whether you use a persistent or non-persistent setup and on the associated functionality needs of your organization.
  • In a dedicated persistent setup, users’ local operating system changes are retained after users log off. For a persistent setup, Teams supports both per-user and per-machine installation.
  • In a non-persistent setup, users’ local operating system changes are not retained after users log off. Such setups are commonly shared multi-user sessions. VM configuration varies based on the number of users and the available physical box resources.
  • With per-machine installation, automatic updates are disabled. This means that to update the Teams app, you must uninstall the current version and install the newer version. With per-user installation, automatic updates are enabled. For most VDI deployments, Microsoft recommends deploying Teams using a per-machine installation.
  • To update to the latest Teams version, start with the uninstall procedure, followed by the deployment of the latest Teams version.
  • For Teams AV optimization in VDI environments to work properly, the thin-client endpoint must have access to the internet. If internet access isn’t available at the thin-client endpoint, optimization startup won’t be successful. This means that the user is in a non-optimized media state.

High-level steps to install Teams on VDI

  • The minimum version of the Teams desktop app that’s required is version 1.3.00.4461. (PSTN hold isn’t supported in earlier versions.)
  • Install the MSI to the VDI VM by running one of the following commands:
  • Per-user installation (default)
msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSERS=1
 
This process is the default installation, which installs Teams to the %AppData% user folder. At this point, the golden image setup is complete. Teams won't work properly with per-user installation on a non-persistent setup.
  • Per-machine installation
msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSER=1 ALLUSERS=1

This process installs Teams to the Program Files (x86) folder on a 64-bit operating system and to the Program Files folder on a 32-bit operating system. At this point, the golden image setup is complete. Installing Teams per-machine is required for non-persistent setups.
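
Since per-machine installations do not auto-update, the update flow mentioned earlier amounts to an uninstall followed by a fresh install. A hedged sketch, with the MSI paths and log file names as placeholders:

REM Uninstall the current per-machine Teams version, then install the newer MSI
msiexec /x <path_to_current_msi> /l*v <uninstall_logfile_name>
msiexec /i <path_to_new_msi> /l*v <install_logfile_name> ALLUSER=1 ALLUSERS=1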

IMP NOTE – For beta testing, VMware built support for Microsoft Teams into the Horizon Client for Windows versions 5.3, 5.4, 5.4.1, 5.4.2, and 5.4.3. If you enable the optimization GPO in the virtual desktop, these clients, although not officially supported, will begin implementing offload. The bugs we found in these clients during beta testing are fixed in Horizon Client for Windows version 2006 or later, which is officially supported and which VMware recommends using.

Procedure to check if Microsoft Teams is running in optimized mode

A user can check if Microsoft Teams is running in optimized mode, fallback mode, or natively (no optimization) in the virtual desktop. In the top-right corner of the Microsoft Teams interface, click the user icon and navigate to About > Version to see a banner under the user icon describing the Microsoft Teams version and pairing modes:

  • Optimized – If the banner shows VMware Media Optimized, the Enable Media Optimization for Microsoft Teams GPO is enabled, Microsoft Teams is running in the virtual desktop, and audio and video have been offloaded to the client machine.
  • Fallback – If the banner shows VMware Media Not Connected, then Microsoft Teams is running in fallback mode. In this mode, the Enable Media Optimization for Microsoft Teams GPO is enabled, and Microsoft Teams has tried to start in optimized mode, but the version of Horizon Client being used does not support Microsoft Teams optimization. Audio and video from Microsoft Teams are not offloaded to the client machine. Fallback mode has the same limitations as optimized mode. When you make a call in fallback mode, you see a warning message on the call: “Your device doesn’t support connection via VMware. Audio and video quality may be reduced.”
  • No optimization – If the banner does not show VMware text in the message, the Enable Media Optimization for Microsoft Teams GPO is not enabled. Audio and video from Microsoft Teams are not offloaded to the client machine.

The VMware TechZone article Microsoft Teams Optimization with VMware Horizon covers the detailed procedure to configure Media Optimization for Microsoft Teams, and the Microsoft article Microsoft Teams on VDI covers the important considerations and procedures from Microsoft’s standpoint.

vRealize Log Insight Upgrade from 4.8 to 8.1

Important note about the change in architecture

vRealize Log Insight 4.8 is based on SLES (SUSE Linux Enterprise Server), whereas vRealize Log Insight 8.0 and higher versions are based on Photon OS. Because of this, compared to the earlier version, there is a change in the architecture of the vRealize Log Insight 8.0 appliance operating system in terms of the number of partitions and the size of the partitions.

Before upgrading from SLES-based vRealize Log Insight 4.8 to Photon OS-based vRealize Log Insight 8.0, ensure that the root partition has enough space for the upgrade. If the root partition is smaller in size, for example 8 GB, increase the disk size to 20 GB, so that the root partition size increases to 16 GB. You must increase the disk size for each node that has a root partition with less space. KB article 76304 provides detailed steps to increase the root partition size.
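
A quick way to check the current root partition size and free space on each node (run as root on the appliance):

# Verify the size and free space of the root partition before the upgrade
df -h /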

Prerequisites and important upgrade notes

  • As per the Supported Upgrade Path, a direct upgrade to vRealize Log Insight 8.1.1 is supported from vRealize Log Insight 4.8, 8.0 or 8.1.0.
  • Confirm the interoperability/compatibility of vRealize Log Insight 8.1.1 with the other VMware components in the environment using the VMware Interop Matrix.
  • Before starting the upgrade of a vRealize Log Insight 4.8 cluster to 8.1.1, verify that each node has enough free space in the root partition. For more information, please refer to KB article 76282. KB article 76304 provides detailed steps to increase the root partition size.
  • As stated in KB 76304, ensure that you remove any snapshots before you attempt to increase the disk size to support the larger root partition, and take a snapshot immediately after increasing the virtual disk size and before increasing the root partition size from within the operating system.
  • As per KB 76067, the upgrade to vRealize Log Insight 8.0 fails when the default gateway is missing. On each vRealize Log Insight appliance, log in as root and check the /etc/sysconfig/network/routes and /etc/sysconfig/networking/devices/ifcfg-eth0 files to confirm that the entry for the default gateway is configured correctly (see the sketch after this list). If it is not, configure the default gateway as suggested in KB 76067. Ensure that you have a recent snapshot of the vRealize Log Insight appliance before making changes to any of the configuration files.
  • The customized sshd service configuration (/etc/ssh/sshd_config) resets to its default when you upgrade SLES-based vRealize Log Insight 4.8 to the latest Photon-based vRealize Log Insight. As a workaround, save the /etc/ssh/sshd_config configuration before upgrading (see the sketch after this list) and then reconfigure it manually after the upgrade.
  • When performing a manual upgrade, you must upgrade the worker nodes one at a time. Upgrading more than one worker at the same time causes an upgrade failure. When you upgrade the master node to vRealize Log Insight 8.1.1, a rolling upgrade occurs unless it is specifically disabled.
  • The upgrade must be performed using the master node’s FQDN. Upgrading with the Integrated Load Balancer IP address is not supported.
  • vRealize Log Insight does not support two-node clusters. Add a third vRealize Log Insight node of the same version as the existing two nodes before performing an upgrade.
  • Create a snapshot or backup copy of the vRealize Log Insight virtual appliance.
  • Ensure that you have the correct admin and root credentials for the vRealize Log Insight appliance master and worker nodes.
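
As referenced in the gateway and sshd items above, a minimal pre-upgrade check sketch (run as root on each appliance node; the backup file name is illustrative):

# Confirm the default gateway entries (see KB 76067)
cat /etc/sysconfig/network/routes
cat /etc/sysconfig/networking/devices/ifcfg-eth0

# Preserve the customized sshd configuration before the upgrade
cp /etc/ssh/sshd_config /root/sshd_config.pre-upgrade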

License requirements

  • As per the vRealize Suite 2019 Release Notes, vRealize Log Insight 8.1 is part of vRealize Suite 2019. So, if you are using vRealize Suite licenses, ensure that you upgrade your license key to vRealize Suite 2019. Please make a note of the existing license key before upgrading the vRealize Suite license key to 2019. KB article 2006974 provides detailed steps to upgrade a license key using the My VMware portal.

Upgrade procedure

  • Log in to the vRealize Log Insight master node user interface using the admin credentials.
  • Navigate to the Administration tab and, under ‘Management’, click ‘Cluster’.
  • Under ‘Cluster’, click ‘UPGRADE CLUSTER’.
  • Click ‘Upgrade from PAK’ to upload the upgrade .pak file.
  • When the upload of the .pak file completes, click ‘ACCEPT’ to accept the EULA and start the upgrade.
  • When the upgrade process completes, check the version on the Cluster tab.
  • Once the upgrade of the master node completes, the remaining nodes are upgraded automatically.
  • The upgrade logs are written to /storage/core/loginsight/var/upgrade.log (see below).
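
To follow the upgrade progress from an SSH session on the master node:

# Watch the upgrade log in real time
tail -f /storage/core/loginsight/var/upgrade.log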

Rollback procedure

For the rollback procedure, please refer to KB article 75150. The rollback logs are written to /storage/core/loginsight/var/rollback.log.

Important points to check post upgrade

  • If the vRealize Log Insight upgrade (.pak file) includes a new JRE version, the user-installed certificates in a vRealize Log Insight setup (such as those for event forwarding) become invisible after the upgrade.
  • If integration destinations provide untrusted certificates for SSL connections, their integration with vRealize Log Insight does not work correctly after an upgrade because the certificates are not added to the truststore. These integration destinations include vSphere, vRealize Operations Manager, event forwarder, Active Directory, and SMTP. As a workaround, in each integration configuration page, test the connection and accept the untrusted SSL certificate if a dialog box appears with the details of the certificate. Accepting the certificate adds it to the truststore.
  • Photon OS has improved security policies, which might require you to change the root password after a successful upgrade to Photon OS. This happens only when the root password in SLES had expired but, unlike Photon OS, SLES did not enforce the update.
  • The sshd customized service configuration (/etc/ssh/sshd_config) resets to its default when you upgrade the SLES-based vRealize Log Insight 4.8 to the latest Photon-based vRealize Log Insight. As a workaround, save the /etc/ssh/sshd_config configuration before upgrading and then reconfigure manually after upgrade.
  • Photon OS has strict rules for the number of simultaneous SSH connections. Because the MaxAuthTries value is set to 2 by default in the /etc/ssh/sshd_config file, the SSH connection to your vRealize Log Insight virtual appliance might fail in the presence of multiple connections, with the following message: “Received disconnect from xx.xx.xx.xxx port 22:2: Too many authentication failures”. You can use any of the following workarounds for this issue:
  1. Use the IdentitiesOnly=yes option while connecting via SSH: ssh -o IdentitiesOnly=yes user@ip
  2. Update the ~/.ssh/config file to add the following two lines:
     Host *
       IdentitiesOnly yes
  3. Change the MaxAuthTries value by modifying the /etc/ssh/sshd_config file and restarting the sshd service.

New features introduced in vRealize Log Insight 8.0

The vRealize Log Insight 8.0 Release Notes provide the details of the key new features introduced in vRealize Log Insight 8.0, which also apply to the minor release 8.1.1.

This concludes the vRealize Log Insight upgrade to version 8.1.1.

Please feel free to provide your feedback or comments.

vExpert Applications are Open – Don’t Miss Out!

vExpert Applications are Open! Don’t miss out on the opportunity to join this amazing program & community. Applications will be open from June 1st, 2020 to July 19th, 2020 and the awards will be announced on July 17th. Apply for vExpert 2020 What the Program is About The vExpert Program is […]


VMware Social Media Advocacy