Upgrade VMware Identity Manager to Workspace ONE Access


Recently, I had a client who wanted to upgrade VIDM 19.03 to the latest version of Workspace ONE Access. OVA upgrades are never for the faint of heart, but I decided to take it on. This is my first foray into VIDM On-Premises since 2018. Today, we will talk about preparing for the upgrade, adding storage to your Linux boxes, performing the massive upgrade, and the post-installation procedures needed to ensure a successful result. On-Premises is not for the weak, so get ready!

Preparing for the Upgrade to Workspace ONE Access

We will slice this section up into: (1) performing some cluster operations, (2) restoring the original server.xml, and (3) preparing the servers for the upgrade. Let’s get started by preparing the cluster!

Preparing your VIDM Cluster

For your clustered upgrade to go well, we will start by doing the following:

  1. Snapshot the Database and Service Nodes
  2. Remove ALL but ONE of the VIDM nodes from the cluster

Yes, it really is just that simple. You will prepare your cluster by taking some backups and making sure you only have one live server in the NLB.

Restoring the Original server.xml

If you have been running VIDM for a while, you will likely have applied a security patch in the past, like HW-137959. For the upgrade to be successful, you will need to undo that with the following command:

mv /opt/vmware/horizon/workspace/conf/server.xml.bk /opt/vmware/horizon/workspace/conf/server.xml
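If you want a little insurance before running that mv, here is a minimal sketch that only restores when the backup is actually present (the function name is mine; the paths come from the article):

```shell
# Hedged sketch: restore the pre-patch server.xml only when the backup exists
restore_server_xml() {
  # $1 = conf directory (on the appliance: /opt/vmware/horizon/workspace/conf)
  if [ -f "$1/server.xml.bk" ]; then
    mv "$1/server.xml.bk" "$1/server.xml" && echo "restored"
  else
    echo "no server.xml.bk found in $1" >&2
    return 1
  fi
}

# On the appliance you would run:
# restore_server_xml /opt/vmware/horizon/workspace/conf
```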

Preparing the Servers for the Upgrade

You could do this for just the first node, but I suggest doing it on all of them. We will list it out via code:

##Skip this section if /db/elasticsearch/horizon/nodes only has a 0 directory (I recommend using WinSCP to check first). After Elasticsearch restarts, verify /opt/vmware/elasticsearch/logs/horizon.log has an entry like "recovered xx indices into cluster_state" to confirm the step worked##
##Stop the Elasticsearch Service##
service elasticsearch stop

##Verify the Processes have Stopped##
ps -ef | grep elasticsearch

##Remove and Rename the Data Folders##
rm -rf /db/elasticsearch/horizon/nodes/0
mv /db/elasticsearch/horizon/nodes/1 /db/elasticsearch/horizon/nodes/0

##Start the Elasticsearch Service##
service elasticsearch start

##Remove these Files from any Cloned Service Nodes##
rm -f /usr/local/horizon/conf/flags/sysconfig.cloneprep
rm -f /usr/local/horizon/conf/flags/sysconfig.iamaclone

##Add the db_owner Role to the User that Installed Access##
USE <saasdb>;
ALTER ROLE db_owner ADD MEMBER <domain\username or loginusername>;
GO

Once that is done, you will want to grab the WS1 Access 20.10 software, called Workspace ONE Access Service Virtual Appliance Dual Boot Update, and copy it to /tmp/ for now. We’ll cover that more later! You should also grab the Hotfix at the bottom of the download page for your post-install.

The last thing you will need to validate is that your server can reach https://vapp-updates.vmware.com, as VMware’s upgrade documentation mentions.
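A quick way to test that from the appliance itself is a HEAD request with a short timeout; a sketch (the helper name is mine, the URL is the one above):

```shell
# Hedged sketch: succeeds when the update host answers at all
check_update_host() {
  # $1 = URL to test (defaults to the VMware update repository)
  curl -Iks --max-time 10 "${1:-https://vapp-updates.vmware.com}" > /dev/null
}

# On the appliance:
# check_update_host && echo "vapp-updates.vmware.com reachable"
```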

Adding Storage to your Linux Servers

VMware says to check how much free disk space you have. I happen to KNOW that any OVA you deploy won’t have enough, so we don’t have to waste time there. Below, you will find a video I made showing the process you can follow to grow the size of your root partition, aka /, on your Linux server.

As you saw in the video, this process will be a huge help as most people struggle with this. I can thank my friends in PSO for sending along their “unsupported process” for this, which is similar to what I came up with. I hope you remembered that snapshot I mentioned earlier!
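Before and after growing the disk, it’s worth checking how full / actually is so you can confirm the resize took; a tiny helper (the function name is mine):

```shell
# Print the root filesystem's use percentage, e.g. "42%"
root_use() {
  df -P / | awk 'NR==2 {print $5}'
}

root_use
```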

Alternatively, you can hack your OVAs to make sure they have ample space for the future. Definitely recommend it!

Upgrading from VIDM 19.03 to WS1 Access 20.10

Once done with the storage growth, we will move on to the actual upgrade:

I think it’s worth sharing the upgrade commands for fun:

##Updates the Installer so you can update your system##
/usr/local/horizon/update/updatemgr.hzn updateinstaller
##Checks for the Latest Updates##
/usr/local/horizon/update/updatemgr.hzn check
##Performs the Update##
/usr/local/horizon/update/updatemgr.hzn update
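Since you’ll run those three commands on every node, I like wrapping them so a failure stops the sequence instead of plowing ahead; a hedged sketch (the wrapper is mine, the updatemgr.hzn path is from the commands above):

```shell
# Hedged sketch: run the three updatemgr.hzn steps in order, stopping on failure
run_access_upgrade() {
  # $1 = path to updatemgr.hzn (/usr/local/horizon/update/updatemgr.hzn on the appliance)
  for step in updateinstaller check update; do
    echo "== $step =="
    "$1" "$step" || { echo "$step failed" >&2; return 1; }
  done
}

# On the appliance:
# run_access_upgrade /usr/local/horizon/update/updatemgr.hzn
```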

Don’t forget: after one node is up and happy, you will upgrade the other two nodes and then add both back to the NLB at the same time!

Post-Installation Tasks for WS1 Access

Once done, we will be (1) re-applying the security patch, (2) restoring the log4j files, (3) re-saving the UEM config, (4) updating cluster IDs for a second data center, and (5) fixing cache services settings for the second data center.

Re-Apply the Security Patch

Applying the patch is easy; just follow these steps. Don’t forget to copy the patch to /tmp first!

##Unzip the Patch##
unzip /tmp/HW-137959-20.10.zip -d /tmp/
##Go to the Patch Directory##
cd /tmp/HW-137959-20.10
##Execute the Patch Script##

Restore the Log4J Files

The log4j files drive the logging configuration on your Access server. If you have edited them at all, like changing the log level, you will need to restore them: the upgrade drops the new versions alongside your edited files with a .rpmnew suffix (e.g., log4j.properties.rpmnew). Just do this:

find / -name "*log4j.properties.rpmnew"

##Copy each file you find over the matching live log4j file##
e.g. cp /tmp/saas-log4j.properties.rpmnew /usr/local/horizon/conf/saas-log4j.properties
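If you’d rather not copy each one by hand, here is a hedged sketch that finds every matching .rpmnew file and lays it over its live counterpart (the function name is mine):

```shell
# Hedged sketch: restore every *log4j.properties.rpmnew over its live counterpart
restore_rpmnew() {
  # $1 = directory to search (use / on the appliance, per the find above)
  find "$1" -name "*log4j.properties.rpmnew" 2>/dev/null | while read -r f; do
    cp "$f" "${f%.rpmnew}"   # strip the .rpmnew suffix to get the live filename
    echo "restored ${f%.rpmnew}"
  done
}

# On the appliance: restore_rpmnew /
```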

These are the possible log4j files you may see on VIDM:

Resaving the UEM Config in the Console

Pretty simple, just follow the steps below. Re-saving the config makes sure UEM populates the Device Services URL for the Catalog:

Clustering Post-Install Tasks for Multi-Data Centers

The first thing to do is confirm the Elasticsearch discovery-idm plugin exists on each node. (The plugin can go missing if each data center doesn’t have a unique Cluster ID):

/opt/vmware/elasticsearch/bin/plugin list

##If the Plugin is Missing Install it Like So##
/opt/vmware/elasticsearch/bin/plugin install file:///opt/vmware/elasticsearch/jars/discovery-idm-1.0.jar
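Wrapped up as an idempotent check you can run on every node (the function name is mine; the plugin CLI and jar paths are from the commands above):

```shell
# Hedged sketch: install discovery-idm only if it isn't already listed
ensure_discovery_idm() {
  # $1 = path to the elasticsearch plugin CLI
  #      (on the appliance: /opt/vmware/elasticsearch/bin/plugin)
  if "$1" list | grep -q discovery-idm; then
    echo "discovery-idm already installed"
  else
    "$1" install file:///opt/vmware/elasticsearch/jars/discovery-idm-1.0.jar
  fi
}

# On each node: ensure_discovery_idm /opt/vmware/elasticsearch/bin/plugin
```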

Now go to Dashboard > System Diagnostics Dashboard and update the cluster ID of the nodes in the secondary data center so it is unique, as you can see below:

You also want to check each node and verify its cluster health via cURL:

curl 'http://localhost:9200/_cluster/health?pretty'

You should see something like this:

{
  "cluster_name" : "horizon",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 20,
  "active_shards" : 40,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
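When you’re checking several nodes, pulling just the status field out makes the check easy to script (the helper is mine; the endpoint is the one above):

```shell
# Hedged sketch: read cluster-health JSON on stdin and print the "status" value
es_status() {
  grep -o '"status" *: *"[a-z]*"' | head -n 1 | sed 's/.*"\([a-z]*\)"$/\1/'
}

# On a node, you want this to print "green":
# curl -s 'http://localhost:9200/_cluster/health?pretty' | es_status
```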

Fixing WS1 Access Cache Services on the Secondary Data Center

Since your second data center is passive, you will need to tweak the properties file to get things functional. Do the following in /usr/local/horizon/conf/runtime-config.properties on each server in the secondary data center:

##Edit the Runtime-Config Props File##
vim /usr/local/horizon/conf/runtime-config.properties
##Update the Properties as Follows##
##Restart the Service##
service horizon-workspace restart

Final Thoughts

One might wonder why I decided to write about this. Sure, most people are using WS1 Access in the mythical cloud, but many customers, especially in Europe and APAC, are running WS1/VIDM On-Premises, which is completely okay. It’s probably the worst service to manage, in my opinion, as it feels like there are magical lights ALWAYS yelling at you all day.

I think many of the lessons I cover in this article will translate well to other services like Horizon and UAG, which can be very challenging for some people to work with. Remember On-Premises DOES exist!


