Hey Everyone! I’ve just returned from a break (almost 3 months) while I was making Workplace Ninja US 2025 in Dallas happen. I couldn’t have returned at a better time, because this Secure Boot certificate thing is heating up.
Recently, Microsoft posted an article discussing the upcoming update of the Secure Boot certificates, which expire at the end of June 2026. Easy enough, right? Don’t I wish! I investigated this a bit, and despite Device Query being able to query for this information, Multi-Device Query (MDQ) cannot! Inconceivable! Today, we will discuss:
- Secure Boot Certificate Update Architecture
- Leveraging Log Analytics to Monitor your Fleet
- Final Thoughts
Secure Boot Certificate Update Architecture
Overall, it’s simple: older, pre-2024 devices have Secure Boot certificates that expire in June 2026. The entire framework for this update revolves around registry keys and a Task Scheduler task.
The registry keys all live in these two locations:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecureBoot and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecureBoot\Servicing
The key registry keys are below (there’s also a quick way to check them locally right after the table):

| Registry Key | Description |
| --- | --- |
| UEFICA2023Status | States whether the new certificate has been applied on the device. It typically starts as “NotStarted”, moves to “InProgress”, and finally lands on “Updated”. |
| UEFICA2023Error | After the update, this is set to “0” on success. An error is reflected as a non-zero value. |
| WindowsUEFICA2023Capable | Tells you whether the 2023 certificate is in use. “0” means the certificate isn’t in the database yet, “1” means it’s there but not in use, and “2” means the device is booting with the 2023 boot manager. |
| AvailableUpdates | Typically set to “0”, but you can set it to 0x5944 to make it update the certificate. |
| HighConfidenceOptOut | Setting this to “1” lets you opt out of automatic updates of the certificate via the cumulative update. |
| MicrosoftUpdateManagedOptIn | Lets you opt in to the Microsoft-managed update via “Controlled Feature Rollout”. |
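If you just want to eyeball these values on a single machine before building anything bigger, a quick read with Get-ItemProperty works. This is a minimal sketch; exactly which value sits under SecureBoot versus SecureBoot\Servicing can vary a bit by build, so it simply reads both:
# Quick local check of the Secure Boot servicing values (reads both locations)
$base = "HKLM:\SYSTEM\CurrentControlSet\Control\SecureBoot"
Get-ItemProperty -Path $base -ErrorAction SilentlyContinue |
    Select-Object AvailableUpdates
Get-ItemProperty -Path "$base\Servicing" -ErrorAction SilentlyContinue |
    Select-Object UEFICA2023Status, UEFICA2023Error, WindowsUEFICA2023Capable, HighConfidenceOptOut, MicrosoftUpdateManagedOptIn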
The other piece of this is the scheduled task Secure-Boot-Update, which lives under Microsoft\Windows\PI\Secure-Boot-Update. Setting the “AvailableUpdates” key will trigger the scheduled task within 12 hours:

From a technical perspective, it’s interesting because it uses a “custom handler”, which runs a compiled system process to update the certificate. If you want to try it out manually, you can run this from an elevated PowerShell prompt:
# Stage the 2023 certificate update by setting AvailableUpdates to 0x5944
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecureBoot /v AvailableUpdates /t REG_DWORD /d 0x5944 /f

# Kick off the scheduled task that processes the staged update
Start-ScheduledTask -TaskName "\Microsoft\Windows\PI\Secure-Boot-Update"

# Manually reboot the system when AvailableUpdates becomes 0x4100, then run the task again
Start-ScheduledTask -TaskName "\Microsoft\Windows\PI\Secure-Boot-Update"
It’s a bit convoluted: running the scheduled task the first time kicks off the process staged by the “AvailableUpdates” registry key, then you reboot and run it again so it can finish its certificate magic. It’s pretty chaotic from a logical perspective.
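If you want to know when it’s safe to reboot, you can read AvailableUpdates back and format it as hex (a quick sketch):
# Reboot once this reads 0x4100
$sb = Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecureBoot" -Name AvailableUpdates
"AvailableUpdates = 0x{0:X}" -f $sb.AvailableUpdates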
My friend Richard Hicks wrote a great article on how to update these here, along with a really nice PowerShell script to inspect the certificate itself, found here.
Now, let’s get into my solution.
Leveraging Log Analytics to Capture Secure Boot Certificate Status Across Your Fleet
We know that we want to query the status of our devices, but we don’t really have a view of what’s going on across the fleet. After considering several different options, I decided to create a database of sorts and use PowerShell to write each device’s state into it.
I chose Log Analytics because it’s built for exactly this, and despite me NOT loving KQL, let’s take a shot at it anyway.
We have six main steps:
- Create the Log Analytics Workspace
- Create the Data Collection Endpoint
- Create the Log Analytics Custom Table
- Create the App Registration
- Test the Log Analytics Script Locally
- Deploy the Secure Boot Script to your Intune Devices
Create the Log Analytics Workspace
First, we start by going into Azure, opening “Log Analytics Workspace”, and clicking “Create”.
This part is simple: we just set the resource group and click “Create”:
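If you’d rather script this step, something like the following with the Az.OperationalInsights module should do the same thing (the resource group, workspace name, and region below are placeholders I made up):
# Create a resource group and a Log Analytics workspace (names and region are placeholders)
New-AzResourceGroup -Name "rg-secureboot-monitoring" -Location "eastus" -Force
New-AzOperationalInsightsWorkspace -ResourceGroupName "rg-secureboot-monitoring" -Name "law-secureboot" -Location "eastus" -Sku "PerGB2018"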

Create the Data Collection Endpoint
Now, let’s go over to the Data Collection Endpoints area and create one. It’s just as simple to create as the workspace:
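This step can also be scripted. Here’s a sketch using Invoke-AzRestMethod against the ARM API; the subscription ID, names, and api-version are assumptions on my part, so double-check them against the current Data Collection Endpoint REST reference:
# Create a Data Collection Endpoint via the ARM REST API (IDs, names, and api-version are placeholders/assumptions)
$path = "/subscriptions/<subscription-id>/resourceGroups/rg-secureboot-monitoring/providers/Microsoft.Insights/dataCollectionEndpoints/dce-secureboot?api-version=2022-06-01"
$body = @{ location = "eastus"; properties = @{ networkAcls = @{ publicNetworkAccess = "Enabled" } } } | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Method PUT -Path $path -Payload $body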

Now, drill into the newly created data collection endpoint and write down the logs ingestion endpoint URI, which you will need later.

Create the Log Analytics Custom Table
After the endpoint is created, go back to your Log Analytics Workspace, drill into “Tables”, and click Create > New Custom Log (Direct Ingest).

You can set a name for the table, select “Analytics”, create a new data collection rule, and click “Next”

After clicking “Next”, upload the JSON file found here. It looks like this:
{
  "streamDeclarations": {
    "Custom-Json-SecureBootServicing_CL": {
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "Hostname", "type": "string" },
        { "name": "RegistryPath", "type": "string" },
        { "name": "UEFICA2023Status", "type": "string" },
        { "name": "WindowsUEFICA2023Capable", "type": "string" },
        { "name": "KEK_Thumbprint", "type": "string" },
        { "name": "KEK_IssueDate", "type": "datetime" },
        { "name": "KEK_ExpirationDate", "type": "datetime" }
      ]
    }
  }
}
Once done, you just click “Next” and “Create” to finish creating that custom table.
Once that’s finished, go into the Data Collection Rule and document the immutable ID, which you will need for the script later:

Drill into Configuration > Data Sources and document the Stream Name for your script.

Create the App Registration
Now, you go to App Registrations and click “New Registration”
Name it and click “Register”

Document the Application (client) ID and the Directory (tenant) ID. Once done, go to Certificates & secrets to create the client secret for your PowerShell script.

From there, just click “New client secret”, follow the prompts, and document the client secret value:

Grant Permissions to the Data Collection Rule
Drill into the Data Collection Rule, open the Access Control (IAM) section, and click Add > Add role assignment to grant a role to the App Registration you just created.

Select the “Monitoring Metrics Publisher”:

Add your app registration and click “Review and Assign”
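If clicking through IAM isn’t your thing, the same assignment can be scripted with New-AzRoleAssignment (a sketch; the app ID and DCR resource ID below are placeholders):
# Grant the app registration "Monitoring Metrics Publisher" on the Data Collection Rule
New-AzRoleAssignment -ApplicationId "<app-registration-client-id>" -RoleDefinitionName "Monitoring Metrics Publisher" -Scope "<full resource ID of the Data Collection Rule>"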

Test the Log Analytics Script Locally
You can download my Log Analytics script here.
Simply copy the script locally and update the variables below with the information you collected in the previous steps (there’s also a short sketch after the variable list showing roughly what the script does with them).
$TenantId = "" # Entra ID tenant (Directory) ID (collected from the App Registration)
$ClientId = "" # App registration (Application) ID (collected from the App Registration)
$Secret = "" # Client secret VALUE (collected from the App Registration)
$DceLogsIngestionUri = "" # Logs ingestion endpoint (collected from the Data Collection Endpoint earlier)
$DcrImmutableId = "" # Immutable ID (grabbed from the Data Collection Rule earlier)
$StreamName = "" # Stream name (grabbed from the Data Collection Rule earlier)
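For context, the core of what the script does with those variables is roughly this: request a token with the client secret, then POST a JSON array of records to the Logs Ingestion API. This is a simplified sketch with a made-up sample record, not the full script:
# Acquire a token for the Logs Ingestion API using the client secret
$tokenBody = @{
    client_id     = $ClientId
    client_secret = $Secret
    scope         = "https://monitor.azure.com//.default"
    grant_type    = "client_credentials"
}
$token = (Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token" -Body $tokenBody).access_token

# Build a sample record matching the custom table schema (values here are placeholders)
$records = @(@{
    TimeGenerated            = (Get-Date).ToUniversalTime().ToString("o")
    Hostname                 = $env:COMPUTERNAME
    RegistryPath             = "HKLM:\SYSTEM\CurrentControlSet\Control\SecureBoot\Servicing"
    UEFICA2023Status         = "NotStarted"
    WindowsUEFICA2023Capable = "0"
})

# POST the records to the Data Collection Endpoint / DCR stream
$uri = "$DceLogsIngestionUri/dataCollectionRules/$DcrImmutableId/streams/${StreamName}?api-version=2023-01-01"
Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/json" -Headers @{ Authorization = "Bearer $token" } -Body (ConvertTo-Json -InputObject $records -Depth 5)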
Run it and see the magic happen:

Once you run it, it will take a few minutes, and then you can see the data in your Log Analytics workspace:

Deploy Log Analytics Script via Intune
Now, you just need to save your script and deploy it as a platform script here
First, name the script, upload it, and set the settings like this:

After that, assign it to all devices and deploy!

Shortly after, you will start seeing those sweet devices coming in with this easy query:
SecureBootServicing_CL
| where ingestion_time() > ago(24h)
| order by ingestion_time() desc
| take 50
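If you’d rather pull the results from PowerShell instead of the portal, something like this with the Az.OperationalInsights module should work (a sketch; the workspace ID below is a placeholder, and the query just grabs the latest record per device):
# Latest record per device, queried from PowerShell (workspace ID is a placeholder)
$workspaceId = "<log-analytics-workspace-id>"
$query = @"
SecureBootServicing_CL
| summarize arg_max(TimeGenerated, *) by Hostname
| project Hostname, UEFICA2023Status, WindowsUEFICA2023Capable, KEK_ExpirationDate
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results | Format-Table -AutoSize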

Final Thoughts
Be sure to let me know what you think and leave some comments below!
Overall, I thought it was an interesting exercise to create my own Log Analytics workspace and write data to it, and the same pattern can be used for many different purposes.
You can also create new Data Collection Rules (DCRs) and build your own JSON schema if the DCRs I made aren’t useful or aren’t working like you expect. Schema changes can take a little while to replicate, which gets annoying when you grow the schema. A good rule of thumb I learned: if data isn’t landing in your workspace properly, the culprit is likely the schema or how the payload is formatted.
