Potential Bug Fixes for Latest CrowdStrike Update

    Last updated: July 19, 2024, 1:44 PM

    CrowdStrike Falcon Agent Update Causes BSOD Issues on Windows

    Some Windows EC2 instances, Amazon WorkSpaces, and AppStream 2.0 applications experienced connectivity issues and reboots due to a recent update of the CrowdStrike Falcon agent (csagent.sys).

    This update caused a stop error (BSOD) within the Windows operating system.

    We are compiling all available information on fixes for the recent global outage affecting CrowdStrike.

    Below are some solutions sourced from Reddit and other parts of the internet.

    We will continue to monitor and update this article as new information becomes available.

    Live coverage of the outage can be found on the BBC website.

    Please note that the solutions provided below are sourced from various parts of the internet, including online forums.

    We cannot guarantee their effectiveness or safety.

    Any modifications made to your machine based on these solutions are at your own risk.

    We strongly recommend consulting a professional or CrowdStrike's official technical support before making any changes to your system.

    Workaround Steps from CrowdStrike Engineering

    CrowdStrike Engineering has identified and reverted a content deployment related to this issue. If you continue to experience problems, use the following workaround:

    1. Boot into Safe Mode or Windows Recovery Environment:
      • Restart your computer and press F8 (or Shift+F8) before the Windows logo appears.
      • Select "Safe Mode" or "Windows Recovery Environment."
    2. Navigate to CrowdStrike Directory:
      • Open File Explorer and go to C:\Windows\System32\drivers\CrowdStrike.
    3. Delete Specific File:
      • Locate the file matching C-00000291*.sys and delete it (a scripted version of this step is sketched after the list).
    4. Boot Normally:
      • Restart your computer and allow it to boot normally.
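
    If you can reach a command prompt or run scripts on the affected machine (for example, Safe Mode with Command Prompt, or via an endpoint-management tool), step 3 can be scripted. The sketch below is illustrative Python assuming the standard installation path; in a bare Recovery Environment you would simply delete the file with File Explorer or the del command instead.

        import glob
        import os

        # Channel-file pattern from CrowdStrike's workaround guidance.
        # Run with administrator privileges.
        PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

        for path in glob.glob(PATTERN):
            print(f"Deleting {path}")
            os.remove(path)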

    Recovery Steps for AWS EC2 Instances

    If you need to recover an AWS EC2 instance affected by the update, follow these steps (a scripted sketch using the AWS SDK follows the list):

    1. Detach the EBS Volume: Detach the EBS volume from the impacted EC2 instance.
    2. Attach to New EC2 Instance: Attach the EBS volume to a new EC2 instance.
    3. Fix CrowdStrike Driver Folder: Navigate to the CrowdStrike driver folder on the attached volume and delete the C-00000291*.sys file as described above.
    4. Detach EBS Volume: Detach the EBS volume from the new EC2 instance.
    5. Reattach EBS Volume: Attach the EBS volume back to the original impacted EC2 instance.
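
    For more than a handful of instances, the detach/reattach shuffle can be scripted with the AWS SDK. The following is a rough sketch in Python using boto3; the instance IDs, volume ID, and device names are placeholders (check your instance's actual root device name), and deleting the channel file on the helper instance remains a manual step.

        import boto3

        # Placeholder IDs: the impacted instance, a healthy helper instance in the
        # same Availability Zone, and the impacted instance's root EBS volume.
        IMPACTED_INSTANCE = "i-0aaaaaaaaaaaaaaaa"
        HELPER_INSTANCE = "i-0bbbbbbbbbbbbbbbb"
        ROOT_VOLUME = "vol-0cccccccccccccccc"

        ec2 = boto3.client("ec2")

        # 1. Stop the impacted instance and detach its root volume.
        ec2.stop_instances(InstanceIds=[IMPACTED_INSTANCE])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[IMPACTED_INSTANCE])
        ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=IMPACTED_INSTANCE)
        ec2.get_waiter("volume_available").wait(VolumeIds=[ROOT_VOLUME])

        # 2. Attach the volume to the helper instance as a secondary (data) disk.
        ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=HELPER_INSTANCE, Device="/dev/sdf")
        ec2.get_waiter("volume_in_use").wait(VolumeIds=[ROOT_VOLUME])

        # 3. On the helper instance, delete C-00000291*.sys from the CrowdStrike
        #    driver folder on the attached disk (manual step, not shown here).

        # 4-5. Detach from the helper, reattach to the impacted instance as its
        #      root device (commonly /dev/sda1 on Windows AMIs), then start it.
        ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=HELPER_INSTANCE)
        ec2.get_waiter("volume_available").wait(VolumeIds=[ROOT_VOLUME])
        ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=IMPACTED_INSTANCE, Device="/dev/sda1")
        ec2.start_instances(InstanceIds=[IMPACTED_INSTANCE])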

    If you have Nutanix (or a similar encryption-in-place VM)

    • Mount the Nutanix VirtIO ISO on the affected VM
    • Open a command prompt
    • Run this command: pnputil /add-driver "D:\Windows Server 2019\x64\*.inf" /install
    • Delete the C-00000291*.sys file from C:\Windows\System32\drivers\CrowdStrike

    AWS Recommended Fix

    AWS has posted a list of manual mitigations for the issue.

    High-level fix

    A Reddit user recommends a procedure that can be applied at a high level across all cloud providers.

    In short:

    1. Detach the affected OS disk
    2. Attach the affected OS disk as a DATA disk to a new VM instance
    3. Apply the workaround (delete the C-00000291*.sys file from the CrowdStrike driver folder)
    4. Detach the DATA disk (your now-repaired OS disk) from the newly created VM instance
    5. Reattach the repaired OS disk to the original, faulty VM instance
    6. Boot the instance
    7. Rinse and repeat for each affected VM.

    Obviously, this can be automated to some extent, but with so many people making the same calls to the resource-provider APIs, expect slowness and failures, so be patient; a minimal retry helper is sketched below.
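
    As a rough illustration, any scripted version of the procedure above can wrap its API calls in a retry loop with exponential backoff and jitter. The helper below is generic Python and not tied to any particular provider SDK; the function name is an assumption for the example.

        import random
        import time

        def call_with_retries(fn, *args, max_attempts=8, **kwargs):
            # Retry a cloud-provider API call with exponential backoff and jitter.
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # in practice, catch the provider's throttling/5xx errors
                    if attempt == max_attempts:
                        raise
                    delay = min(60, 2 ** attempt) + random.uniform(0, 1)
                    print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
                    time.sleep(delay)

        # Example: wrap the detach call from the EC2 sketch above.
        # call_with_retries(ec2.detach_volume, VolumeId=ROOT_VOLUME, InstanceId=IMPACTED_INSTANCE)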

    Ongoing Monitoring

    We are actively monitoring the situation and will update this article with any new fixes or workarounds as they become available.
