Extending Okta Accounts to Workspace ONE via Just-in-Time Provisioning

This just in… it’s 2022! It’s time to leave legacy products like SCCM behind, something I have been preaching for years. Many customers are using Okta’s Universal Directory and their HRIS as a source of truth, which makes Okta Provisioning an interesting option for these scenarios. Workspace ONE typically requires an LDAP directory of some sort to deliver enterprise-level identities. Now, you can flow the magic of identities from Okta to WS1 Access to WS1 UEM to modernize your endpoint strategy. If you joined me at VMware Explore, you learned how I use the power of Okta today.

Today, we will discuss an integration that has already been covered by a few brilliant technologists, Darryl Miles and Steve the Identity Guy. I am taking their work one step further by leaning more heavily on PowerShell and accounting for an issue they missed (because they were doing fresh deployments). Let’s get started and bring on the fun!

Setting the Stage for Okta Provisioning to Workspace ONE Access

To support Okta Provisioning, we need to create a landing zone inside of Workspace ONE Access. It’s slightly easier said than done, but with a little bit of work we can make that happen. As you work more in WS1 Access, you will learn that most of the good stuff is driven by its API. The API can be tricky because you have to define specific content types for most of the crucial calls to work properly. Check the video below for the process of creating your JIT directory:

I thought it would be good to share the code from my GitHub repo, which is on display in the video. There’s nothing wrong with using Postman, but I always feel like I’m cheating when I use it. For me, it’s more fun to do it in PowerShell. As I mentioned earlier, the complicated part is figuring out the content type, which is application/vnd.vmware.horizon.manager.connector.management.directory.other+json for this command. It’s easy enough to read as “horizon manager connector management directory other JSON.”

You just run this little script and abracadabra, a directory appears! One thing I changed from my video is flipping it from a JIT directory to an “other” directory, just to match what VMware prefers for this integration.

#Force the use of TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$AccessURL = Read-Host -Prompt 'Enter your WS1 Access URL'
$Domain = Read-Host -Prompt 'Enter your New Domain'
$DirectoryName = Read-Host -Prompt 'Enter a name for your new Access Directory'
$ClientId = Read-Host -Prompt 'Enter your OAuth Client ID'
$ClientSecret = Read-Host -Prompt 'Enter your Client Secret'

#Base64-encode the OAuth client credentials
$text = "${ClientId}:${ClientSecret}"
$base64 = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($text))
$headers = @{
    "Authorization" = "Basic $base64"
    "Accept"        = "*/*"
}

#Request an OAuth access token and stash the connection details
$results = Invoke-WebRequest -Uri "https://$AccessURL/SAAS/auth/oauthtoken?grant_type=client_credentials" -Method POST -Headers $headers
$accessToken = ($results.Content | ConvertFrom-Json).access_token
$authHeader = @{
    "Authorization" = "Bearer $accessToken"
}
$global:workspaceOneAccessConnection = New-Object PSObject -Property @{
    'Server'  = "https://$AccessURL"
    'headers' = $authHeader
}
$global:workspaceOneAccessConnection

#The directory API requires this specific content type
$dirHeaders = @{
    "Accept"        = "application/vnd.vmware.horizon.manager.connector.management.directory.other+json"
    "Content-Type"  = "application/vnd.vmware.horizon.manager.connector.management.directory.other+json"
    "Authorization" = $global:workspaceOneAccessConnection.headers.Authorization
}

##Build the Body##
$body = @{
    "type"    = "OTHER_DIRECTORY"
    "domains" = @($Domain)
    "name"    = $DirectoryName
}
##Convert Body to Json##
$body = $body | ConvertTo-Json

##Create the Directory##
Invoke-RestMethod -Uri "https://$AccessURL/SAAS/jersey/manager/api/connectormanagement/directoryconfigs" -Method POST -Headers $dirHeaders -Body $body
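For reference, once the script converts the hashtable, the body it POSTs to the directoryconfigs endpoint looks like this (the domain and name values here are just example placeholders):

```json
{
    "type": "OTHER_DIRECTORY",
    "domains": ["corp.example.com"],
    "name": "Okta Universal Directory"
}
```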

Building the Okta Provisioning Flow with the Workspace ONE App in Okta

Now that we have a directory, it’s time to populate it with users. This is relatively simple inside of Okta. It’s a basic three-step process:

  1. Add the VMware Workspace ONE Application
  2. Put in the bearer token from your PowerShell command earlier
  3. Set the domain to match what you created in WS1 Access

Once that is done, it’s just a matter of adding users and you are off! As someone who was often critical of Okta, I have to admit that they translate manual processes into automation really well; Office 365 SAML, for example, is fully automated for administrators. This gets us very close to the finish line!

As you saw, it wasn’t too difficult. You just needed to remove the carriage returns from the copied token, but no biggie. Once that directory is created, you need to take a look at your UEM environment and set yourself up for success to provision accounts.

Analyzing Workspace ONE UEM Accounts Pre-Migration

Essentially, you will be leveraging the AirWatch Provisioning application in WS1 Access to flow users into WS1 UEM. That means you will likely delete your existing directories inside of Access, sync users into your new directory in Access, and bring them to UEM. To do that, you need to MATCH your users from Access to UEM.

We can only do that if the usernames and domains match inside of UEM. I am continuing to refine the process and will update you all as I learn more. Sometimes the domains aren’t synchronizing to WS1 for whatever reason, like below:

You can see the domains that say “not applicable.” My typical recommendation is to export your full user list and figure out which accounts need to be updated. The major issue is that the domain field is greyed out in the user account (this happens with directory accounts):

Luckily, I have written an API tool for updating the domain field of these user accounts. The premise of this command is that you create a CSV of the usernames you want to update, because often we don’t want to just update every domain that is blank.
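Before touching the API, it helps to narrow the exported user list down to just the accounts that actually need a domain fix. Here is a minimal sketch, assuming your export has `username` and `domain` columns (adjust the property names to match your actual export; the sample rows are made up):

```powershell
# Sample rows standing in for a UEM user export (assumed columns: username, domain)
$export = @(
    [pscustomobject]@{ username = 'jdoe';   domain = 'corp.local' }
    [pscustomobject]@{ username = 'asmith'; domain = 'not applicable' }
    [pscustomobject]@{ username = 'bwayne'; domain = '' }
)
# In the real world, pull this from your exported list instead:
# $export = Import-Csv -Path '.\uem-user-export.csv'

# Keep only the accounts whose domain is blank or shows "not applicable"
$needsFix = $export | Where-Object {
    [string]::IsNullOrWhiteSpace($_.domain) -or $_.domain -eq 'not applicable'
}

# The update script only needs a username column, so export just that
$needsFix | Select-Object username | Export-Csv -Path '.\users-to-update.csv' -NoTypeInformation
```

The resulting CSV can then be fed straight into the domain update script, and you can hand-edit it first to drop any accounts you want to leave alone.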

One caveat is that you cannot change to a domain that doesn’t exist in your LDAP directory. To work around that, you will need to set your LDAP directory to “NONE” and save. Keep in mind that this will NOT delete existing user accounts, but make sure you take a snapshot of those settings in case you need to revert them!

#Force the use of TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$Username = Read-Host -Prompt 'Enter the Username'
$Password = Read-Host -Prompt 'Enter the Password' -AsSecureString
$apikey = Read-Host -Prompt 'Enter the API Key'

#Convert the Password
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($Password)
$UnsecurePassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

#Base64 Encode AW Username and Password
$combined = $Username + ":" + $UnsecurePassword
$encoding = [System.Text.Encoding]::ASCII.GetBytes($combined)
$cred = [Convert]::ToBase64String($encoding)
$header = @{
    "Authorization"  = "Basic $cred"
    "Accept"         = "application/json;version=2"
    "aw-tenant-code" = $apikey
    "Content-Type"   = "application/json"
}

##Prompt for the API hostname##
$apihost = Read-Host -Prompt 'Enter your API server hostname'

##Import the CSV of usernames and look up each user##
$csv = Read-Host -Prompt 'Enter the Path of your User CSV Template'
$userlist = Import-Csv $csv | ForEach-Object { Invoke-RestMethod -Headers $header -Uri "https://$apihost/API/system/users/search?username=$($_.username)" }

##Collect the UUID of each matched user##
$UUIDs = $userlist.users.uuid

##Define the Domain you want to set##
$domain = Read-Host -Prompt 'Enter the domain you want to migrate to'

##Build the Body##
$body = @{
    "domain" = $domain
}
##Convert Body to Json##
$body = $body | ConvertTo-Json

##Set the Domain for Each User##
foreach ($UUID in $UUIDs) {
    Invoke-RestMethod -Method Put -Headers $header -Uri "https://$apihost/API/system/users/$UUID" -Body $body -ContentType application/json
}
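For reference, the script only reads the `username` column from the CSV you point it at, so a minimal template could look like this (the usernames here are made-up examples):

```
username
jdoe
asmith
```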

Once you have updated your domains, you can migrate users via the AirWatch Provisioning App inside of WS1 Access.

Migrating Users from WS1 Access to WS1 UEM

The final portion is pretty easy. After you add the AirWatch Provisioning application, you just need to configure a few settings.

First, set the provisioning adapter to your Group ID as shown below:

The last key part is modifying any fields needed to correctly match users from Okta to WS1 UEM. You just click on the attribute and update the lookup value. You can see the user provisioning fields here:

Make sure you use the prepopulated options; it will keep things simple and your data accurate.

You can also do the same thing with Group Provisioning, which is crucial to deploy resources in WS1 properly:

One last thing to point out: you can go into “Assign” on the app to check your provisioning status and even retry provisioning tasks if needed:

Final Thoughts

As we covered today, Darryl and Steve built a really nice foundation, but bringing it to the field requires more mindfulness. Deploying in a fresh or test environment is easy. This is an example of evaluating the issue and leveraging APIs to cultivate something better. If I can teach you anything today, it’s to never let the GUI stop you. We need to push the API to its limits.

Capitalizing on the API in the SaaS world has been the most crucial part of my career as a technologist, and it was a major jumping-off point for me. We were often taught that if you can’t do it via the GUI, you just can’t do it. In recent years, I’ve learned the API is the programmatic gateway to my own creativity. Continuing to push the API to its limits will let you deliver better end results for your customers. Okta Provisioning is just one example of modernizing our world, which we should always strive to achieve. Let’s declare the death of legacy technologies like GPOs, SCCM, and AD together!

