Last updated on Jul 19, 1:44 PM
CrowdStrike Falcon Agent Update Causes BSOD Issues on Windows
Some Windows instances, Windows WorkSpaces, and AppStream applications experienced connectivity issues and reboots due to a recent update of the CrowdStrike Falcon agent (csagent.sys).
The update caused a stop error (blue screen of death, BSOD) in the Windows operating system.
We are compiling all available information on fixes for the recent global outage caused by the CrowdStrike update.
Below are some solutions sourced from Reddit and other parts of the internet.
We will continue to monitor and update this article as new information becomes available.
Live coverage of the outage can be found on the BBC website.
Workaround Steps from CrowdStrike Engineering
CrowdStrike Engineering has identified and reverted a content deployment related to this issue. If you continue to experience problems, use the following workaround:
- Boot into Safe Mode or the Windows Recovery Environment:
  - Restart your computer and press F8 (or Shift+F8) before the Windows logo appears.
  - Select "Safe Mode" or "Windows Recovery Environment."
- Navigate to the CrowdStrike directory:
  - Open File Explorer and go to C:\Windows\System32\drivers\CrowdStrike.
- Delete the problematic file:
  - Locate the file named C-00000291*.sys and delete it (a command-line equivalent is sketched after this list).
- Boot normally:
  - Restart your computer and allow it to boot normally.
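If you end up at a command prompt in the recovery environment instead of File Explorer, the deletion can be done in a single command. This is a minimal sketch assuming the default installation path; the system drive may be mounted under a different letter in the recovery environment.

```
# Default CrowdStrike driver path; the drive letter may differ in the recovery environment.
del "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"
```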
Recovery Steps for AWS EC2 Instances
If you need to recover an AWS EC2 instance affected by the update, follow these steps:
- Detach the EBS Volume: Detach the EBS volume from the impacted EC2 instance.
- Attach to New EC2 Instance: Attach the EBS volume to a new EC2 instance.
- Fix the CrowdStrike Driver Folder: On the new instance, navigate to the CrowdStrike driver folder on the attached volume and delete the C-00000291*.sys file, as in the workaround above (see the CLI sketch after this list).
- Detach EBS Volume: Detach the EBS volume from the new EC2 instance.
- Reattach EBS Volume: Attach the EBS volume back to the original impacted EC2 instance.
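The volume swap can also be scripted with the AWS CLI. The sketch below uses placeholder instance and volume IDs and a typical Windows root device name (/dev/sda1); substitute your own values, and note that the impacted instance must be stopped before its root volume can be detached.

```
# All IDs below are placeholders -- replace with your own instance, volume, and device names.
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa            # impacted instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0              # its root EBS volume
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf
# ...on the recovery instance, delete C-00000291*.sys from the CrowdStrike driver folder...
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaaa           # boot the repaired instance
```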
If you have Nutanix (or a similar encryption-in-place VM)
- Mount the VirtIO ISO.
- Open a command prompt.
- Run: pnputil /add-driver "D:\Windows Server 2019\x64\*.inf" /install
- Delete the C-00000291*.sys file (the full sequence is sketched below).
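Put together, the sequence looks like the sketch below. It assumes the VirtIO ISO is mounted as D: and a Windows Server 2019 guest; adjust the drive letter and driver path to match your environment.

```
# Assumes the VirtIO ISO is mounted as D: and a Windows Server 2019 guest -- adjust as needed.
pnputil /add-driver "D:\Windows Server 2019\x64\*.inf" /install
del "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"
```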
Amazon AWS Recommended Fix
AWS has posted a list of manual mitigations for the issue.
High-level fix
A Reddit user recommends a procedure that can be applied at a high level across all cloud providers.
In short:
- Detach affected OS disk
- Attach affected OS disk as DATA disk to a new VM instance
- Apply workaround
- Detach DATA disk (which is your affected OS disk) from the newly created VM instance
- Reattach the now-fixed OS disk to the original (faulty) VM instance
- Boot the instance
- Rinse and repeat.
Obviously, this can be automated to some extent, but with so many people making the same calls to the resource provider APIs, expect slowness and failures, so be patient.
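As a rough illustration, the sketch below loops over a list of impacted instance IDs using the AWS CLI as the example provider; the file name is a hypothetical placeholder, and the volume-swap steps from the AWS section above would go inside the loop. A short pause between calls helps ease pressure on the provider API.

```
# Hypothetical sketch: impacted-instances.txt (placeholder name) holds one instance ID per line.
foreach ($id in Get-Content .\impacted-instances.txt) {
    aws ec2 stop-instances --instance-ids $id   # stop before detaching the root volume
    # ...detach, fix, and reattach the OS volume as in the AWS steps above...
    Start-Sleep -Seconds 2                      # brief pause between API calls
}
```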
Ongoing Monitoring
We are actively monitoring the situation and will update this article with any new fixes or workarounds as they become available.