Using Intune Remediations to Address Massive CrowdStrike Outage


As most people know by now, CrowdStrike is actively working through a major outage. Essentially, a faulty channel file caused massive BSODs (Blue Screens of Death) across the world. This quick blog covers what channel files are, how to use Intune detection and remediation scripts to help, and how to let users self-service their BitLocker recovery keys.

What is a Driver Channel File?

As you can see in the diagram below of CrowdStrike's "kernel-level security agent," the agent includes a Communications Module.

As stated, the “Comms Module” may comprise:

  • Multiple protocol stacks like TCP/IP
  • Device drivers for network interfaces
  • Additional modules that let the device send and receive data over the network
diagram of the security agent for Crowdstrike

Channel files are used to facilitate communication between the agent's components. A common use is helping the agent communicate between hardware and software while performing kernel-level monitoring. Having even a basic understanding of this matters for appreciating why the file is important and why it became an issue.

What I know for certain, based on the patent, is that the agent is installed on the OS in the form of a driver. It also uses filter drivers so the agent can receive notifications, which isn't a major surprise. If you're bored, feel free to read more about the driver here.
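If you want to see how the sensor actually surfaces as drivers on a device, here's a quick sketch you can run from an elevated PowerShell prompt. The display-name filter and the CSAgent driver name are assumptions on my part and may vary by sensor version:

# List installed kernel drivers that look like CrowdStrike components
# (display names differ between sensor versions, so the filter is intentionally loose)
Get-CimInstance Win32_SystemDriver |
    Where-Object { $_.DisplayName -like '*CrowdStrike*' -or $_.Name -eq 'CSAgent' } |
    Select-Object Name, DisplayName, State, StartMode, PathName

# Show the registered file system minifilter drivers (filter drivers) on the box
fltmc.exe filters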

Channel File 291 Technical Details

You can access the technical details on the file that took down the world last week here.

I wanted to provide some of the technical details that are somewhat interesting. One of the big misconceptions is that channel files are kernel drivers, which is not accurate.

Channel File 291 controls the Falcon named pipe evaluation mechanism. Falcon evaluates named pipes, looking for malicious named pipes on the system. Named pipes on Windows are used for normal interprocess and intersystem communication.

So what happened? The issue is not a null-bytes problem, as has been commonly reported. According to CrowdStrike, a logic error was introduced in an update to Channel File 291 that was meant to detect new malicious named pipes used by common C2 frameworks in cyberattacks.

Named Pipe Diagram Example

Some of the most common named pipes used in Windows 11 are:

  • \Pipe\Spoolss: used for print spooler service communications
  • \Pipe\Samr: used for Security Account Manager (SAM) communication
  • \Pipe\winreg: used for remote registry operations
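If you're curious what named pipes look like on your own machine, here's a tiny PowerShell sketch that enumerates the \\.\pipe\ namespace where Windows exposes them:

# Enumerate the named pipes currently open on this machine
# and show the first couple dozen so the output stays readable
[System.IO.Directory]::GetFiles('\\.\pipe\') |
    Sort-Object |
    Select-Object -First 25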

Intune Detection and Remediation Scripts for the CrowdStrike Outage

Update: Based on additional testing, this doesn’t appear to be super effective, but it did work in a few instances. It’s just something to try to help out, but it’s not the cure.

The code itself is pretty simple, as we can see below.

A few notes:

  • We’re looking for files that match a pattern of C-00000291*.sys
  • We’re also targeting files with a timestamp of 04:09 UTC (that’s the problematic version)

This code will help detect if the problematic files are on the PC:

# Define the target directory and file pattern
$targetDirectory = "C:\Windows\System32\drivers\CrowdStrike"
$filePattern = "C-00000291*.sys"
# Get the list of files matching the pattern in the target directory
$files = Get-ChildItem -Path $targetDirectory -Filter $filePattern -ErrorAction SilentlyContinue
# Initialize an array to store problematic files
$problematicFiles = @()
# Iterate through each file and check the timestamp
foreach ($file in $files) {
    # Get the file's LastWriteTime
    $lastWriteTimeUTC = $file.LastWriteTimeUtc
    # Check if the LastWriteTime matches the problematic timestamp (04:09 UTC)
    if ($lastWriteTimeUTC.Hour -eq 4 -and $lastWriteTimeUTC.Minute -eq 9) {
        # Add the file to the problematic files array
        $problematicFiles += $file
    }
}
# Output the problematic files and set exit code
if ($problematicFiles.Count -gt 0) {
    Write-Output "Problematic files detected:"
    $problematicFiles | ForEach-Object { Write-Output $_.FullName }
    exit 1
} else {
    Write-Output "No problematic files detected."
    exit 0
}

Once you have the detection squared away, we move on to the remediation script, which deletes those evil files:

# Define the target directory and file pattern
$targetDirectory = "C:\Windows\System32\drivers\CrowdStrike"
$filePattern = "C-00000291*.sys"
# Get the list of files matching the pattern in the target directory
$files = Get-ChildItem -Path $targetDirectory -Filter $filePattern -ErrorAction SilentlyContinue
# Initialize an array to store problematic files
$problematicFiles = @()
# Iterate through each file and check the timestamp
foreach ($file in $files) {
    # Get the file's LastWriteTime
    $lastWriteTimeUTC = $file.LastWriteTimeUtc
    # Check if the LastWriteTime matches the problematic timestamp (04:09 UTC)
    if ($lastWriteTimeUTC.Hour -eq 4 -and $lastWriteTimeUTC.Minute -eq 9) {
        # Add the file to the problematic files array
        $problematicFiles += $file
    }
}
# Delete the problematic files
if ($problematicFiles.Count -gt 0) {
    Write-Output "Deleting problematic files:"
    $problematicFiles | ForEach-Object {
        Write-Output "Deleting $_.FullName"
        Remove-Item -Path $_.FullName -Force
    }
    Write-Output "Problematic files deleted."
} else {
    Write-Output "No problematic files to delete."
}

GitHub Links:

Deploying Detection and Remediation Scripts in Intune

As I covered in this article here, automated remediations in Intune use the detection script to check whether a device has the bad files and then run the remediation script to remove them.

You can simply go here and create the detection and remediation like the one below:

screenshot of remediations window
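If you'd rather script the creation instead of clicking through the portal, below is a rough sketch that posts the two scripts to the Graph beta deviceHealthScripts endpoint, which is what backs Remediations. The file names are placeholders, and it assumes you already have a token in $token with DeviceManagementConfiguration.ReadWrite.All, so treat it as a starting point rather than a finished tool:

# Base64-encode the detection and remediation scripts (file names are placeholders)
$detection   = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes((Get-Content .\Detect-Channel291.ps1 -Raw)))
$remediation = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes((Get-Content .\Remediate-Channel291.ps1 -Raw)))

# Build the deviceHealthScript payload
$body = @{
    displayName              = "CrowdStrike Channel File 291 Cleanup"
    description              = "Detects and removes the problematic C-00000291*.sys files"
    publisher                = "Mobile Jon's Blog"
    runAsAccount             = "system"
    runAs32Bit               = $false
    detectionScriptContent   = $detection
    remediationScriptContent = $remediation
} | ConvertTo-Json

# Create the remediation in Intune
Invoke-RestMethod -Method POST `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceHealthScripts" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body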

Letting Users Self-Service Their BitLocker Keys in the Company Portal Website

One other thing: if the remediation cannot catch the issue before the BSOD, you can let users get their own BitLocker keys really easily.

You go here and enable the ability for users to get their own BitLocker keys:

the setting that lets users recovery their own BitLocker recovery key

Once that is in place, you can go to the Company Portal Website to fetch your BitLocker keys in the event you need to do the Safe Mode dance to delete the files manually:

an example of the recovery key window in the Company Portal website

You can see the various BitLocker keys and can fetch them:

A list of the keys you can recover in the Company Portal window

A few last tips if you’re going to allow this:

  • You can use Conditional Access policy to only allow BitLocker Recovery Key access from compliant devices.
  • Leverage your Audit Logs to see who is accessing those keys, and potentially put in some automation to rotate your keys (see the sketch after this list).
  • The password will be automatically rotated once it has been used as long as you have “Configure Recovery Password Rotation” set in the policy:
The BitLocker settings for automatic key rotation
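For the rotation automation piece mentioned above, here's a hedged sketch that calls the rotateBitLockerKeys action on a managed device through the Graph beta endpoint. The device ID is a placeholder you would pull from the audit log entry you're reacting to, and it assumes a token in $token with DeviceManagementManagedDevices.ReadWrite.All:

# Placeholder managed device ID - in practice, grab it from the audit log event
$managedDeviceId = "00000000-0000-0000-0000-000000000000"

# Trigger a BitLocker key rotation on the device
Invoke-RestMethod -Method POST `
    -Uri "https://graph.microsoft.com/beta/deviceManagement/managedDevices/$managedDeviceId/rotateBitLockerKeys" `
    -Headers @{ Authorization = "Bearer $token" }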

The manual remediation steps are:

  1. Cycle through BSODs until you get the recovery screen
  2. Navigate to Troubleshoot > Advanced Options > Startup Settings
  3. Click “Restart”
  4. Skip the first BitLocker recovery key prompt with “Esc”
  5. Skip the second BitLocker recovery key prompt by selecting “Skip This Drive” in the bottom right
  6. Navigate to Troubleshoot > Advanced Options > Command Prompt
  7. Type “bcdedit /set {default} safeboot minimal” and press “Enter”
  8. Go back to the main menu and select “Continue”
  9. The device could cycle 2-3 times and should hopefully boot into Safe Mode
  10. Go to C:\Windows\System32\drivers\Crowdstrike and delete any file starting with C-00000291* and a .sys file extension
  11. Open CMD as admin and type “bcdedit /deletevalue {default} safeboot” and press enter
  12. Reboot

Also, some people have reported that 6-7 reboots will eventually pull down the new CrowdStrike content update and address the issue.

Other Possible Fixes In the Industry

A few potential options that people in the industry are pursuing right now are:

Work in Progress for BitLocker Automation

Currently, I am rewriting and rebuilding the code used by the Microsoft Recovery Tool to automate the BitLocker aspect of things.

The current code that powers the remediation is this below:

@echo off
set drive=C:
echo Using drive %drive%
echo If your device is BitLocker encrypted use your phone to log on to https://aka.ms/aadrecoverykey. Log on with your Email ID and domain account password to find the BitLocker recovery key associated with your device.
echo.
manage-bde -protectors %drive% -get -Type RecoveryPassword
echo.
set /p reckey="Enter recovery key for this drive if required: "
IF NOT [%reckey%] == [] (
	echo Unlocking drive %drive%
	manage-bde -unlock %drive% -recoverypassword %reckey%
)
del %drive%\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
echo Done performing cleanup operation.
pause
exit 0

The new code is below. It doesn’t currently work, because we are working with Microsoft to get the BitLocker recovery key API extended to application permissions (it currently only works with delegated permissions):

@echo off
SETLOCAL ENABLEDELAYEDEXPANSION

REM Drive to unlock and clean up
SET drive=C:

REM Initialize the found flag
SET found=0

REM Get the BitLocker protection status and Numerical Password ID for C: drive
FOR /F "tokens=1,2 delims=:" %%A IN ('manage-bde -protectors -get C: ^| findstr /R /C:"Numerical Password" /C:"ID"') DO (
    IF "%%A"=="    Numerical Password" (
        SET found=1
    ) ELSE IF "%%A"=="      ID" (
        IF !found!==1 (
            SET BLKeyID=%%B
            SET found=0
        )
    )
)

REM Remove leading space and brackets from BLKeyID
SET BLKeyIDValue=%BLKeyID:~1%
SET BLKeyIDValue=%BLKeyIDValue:~1,-1%

ENDLOCAL & SET BLKeyID=%BLKeyIDValue%

SET tenant=
SET clientId=
SET clientSecret=

REM Set the headers
SET headers=Content-Type: application/x-www-form-urlencoded

REM Construct the body for the OAuth request
SET "body=grant_type=client_credentials&scope=https://graph.microsoft.com/.default"
SET "body=%body%&client_id=%clientId%"
SET "body=%body%&client_secret=%clientSecret%"

REM Make the OAuth request to get the token
curl -X POST -H "%headers%" -d "%body%" "https://login.microsoftonline.com/%tenant%/oauth2/v2.0/token" -o response.json

REM Extract the access token from the response using PowerShell
FOR /F "tokens=*" %%A IN ('powershell -Command "Get-Content response.json | ConvertFrom-Json | Select-Object -ExpandProperty access_token"') DO SET accessToken=%%A

REM Construct the authorization header
SET authHeader=Authorization: Bearer %accessToken%
SET headers=Content-Type: application/json
set responseFile=response.json

REM Use the token to make an authenticated request to MS Graph API
curl -X GET -H "%authHeader%" -H "%headers%" "https://graph.microsoft.com/v1.0/informationProtection/bitlocker/recoveryKeys/%BLKeyID%?$select=key" -o response.json

REM Extract the recovery key from the Graph response and unlock the drive
FOR /F "tokens=*" %%A IN ('powershell -Command "Get-Content %responseFile% | ConvertFrom-Json | Select-Object -ExpandProperty key"') DO SET reckey=%%A
echo Unlocking drive %drive%
manage-bde -unlock %drive% -recoverypassword %reckey%
del %drive%\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
echo Done performing cleanup operation.
ENDLOCAL
pause
exit 0
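In the meantime, the delegated path does work today. Here's a hedged sketch using the Microsoft Graph PowerShell SDK; the cmdlet and permission names reflect my understanding of the current SDK, so verify them against your module version:

# Sign in interactively with the delegated BitLocker key permission
Connect-MgGraph -Scopes "BitLockerKey.Read.All"

# $BLKeyID is the Numerical Password protector ID harvested by the batch script above
Get-MgInformationProtectionBitlockerRecoveryKey -BitlockerRecoveryKeyId $BLKeyID -Property "key" |
    Select-Object -ExpandProperty Key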

