The CrowdStrike Problem Isn’t A Simple Fix…
78,492 views
Published 2024-07-19
TY AGAIN FOR THE STUDIO @learnwithjason
SOURCES
x.com/troyhunt/status/1814174010202345761
x.com/us_stormwatch/status/1814268813879206397
Check out my Twitch, Twitter, Discord more at t3.gg/
S/O Ph4seon3 for the awesome edit 🙏
All Comments (21)
-
Sorry about the frame rate issues, CrowdStrike took down my main recording rig and I had to do this on my Mac :(
-
This is the best named company in history. This is the exact same outcome as if the entire crowd went on strike.
-
When I got multiple calls at 2AM I knew this was going to go down as one of the worst days in recent IT history.
-
To everyone cleaning up this mess: my condolences, may your weekend rest in peace.
-
The largest disruption in human history caused by a missing try/catch block
-
I mean, Windows might be the least secure in how most people use it, but there's another huge facet to why it's the target of ransomware: it absolutely dominates the end-user/workstation market, especially when you are wagering the victim can't just restore from a backup and ignore you.
-
I mean, all the malware also targets windows because that's the big user facing desktop OS.
-
Definitely a "zero" day problem. The only thing saving CrowdStrike from a class action is that most law firms are Windows users too :)
-
The fact that one company can take everything down like this is scary. One bad actor, and this could've been a mass malware attack instead of a simple driver error.
-
CrowdStrike is ransomware, they just have a different payment plan. You pay up front for the privilege of being ransomwared at some unknown point in the future. Turns out the unknown point in the future was today! Surprise!
-
Imagine having "I broke the planet" as a hold my beer anecdote whenever you and your colleagues start trying to one-up each other on times you screwed up at work.
-
Yeah, I'm one of those tech guys. I'm in charge of our enterprise's cloud infrastructure (which is all our servers). I was up till 2 AM restoring a couple of servers affected on our European side, thinking it was some weird Windows update that took things down. I went to bed and was woken up 3 hours later by my boss freaking out. I spent all morning force-shutting-down systems, detaching drives and attaching them to working systems to remove this .sys file. What a HUGE pain. I finally got everything working after like 5 hours of doing this nonstop. The poor helpdesk was stuck doing BitLocker-based safe mode fixes for end users. I don't envy them...
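For anyone curious, the detach-and-delete fix described above boils down to removing one file pattern from the mounted disk. A minimal Python sketch (the function name and directory argument are mine; the `C-00000291*.sys` pattern is the widely reported bad channel file):

```python
import glob
import os

def remove_channel_files(drivers_dir: str) -> list[str]:
    """Delete CrowdStrike channel files matching the reported bad pattern.

    drivers_dir is the CrowdStrike drivers folder on the *attached* disk,
    e.g. 'E:/Windows/System32/drivers/CrowdStrike' when the broken
    system drive is mounted as E: on a working machine.
    """
    pattern = os.path.join(drivers_dir, "C-00000291*.sys")
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed
```

Same idea as the manual safe-mode fix, just scriptable once the drive is attached to a healthy machine.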
-
My favourite part of the disappearing air traffic example is that while they occasionally get crippling downtime from their own infrastructure, the fact that Southwest still runs primarily Windows 3.1, with a sprinkling of Windows 95 here and there, rather isolated them from the CrowdStrike issue.
-
They failed to do a smoke test of their agent after build but before deploying it worldwide. It sounds like their software and update development process is just not up to professional software engineering standards. At Meta, we had to have other engineers, sometimes multiple, review diffs before they would be accepted. And then there were multiple layers of CI/CD testing before exponential deployment with canary testing. You don't just push new code to all the machines all at once, because it's way too dangerous.
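The exponential deployment with canary testing described above can be sketched in a few lines. All names here are mine, not Meta's or CrowdStrike's actual tooling; `deploy` installs the update on one host and `healthy` is the post-deploy smoke test:

```python
def staged_rollout(hosts, deploy, healthy, initial=1, factor=10):
    """Deploy to exponentially growing cohorts, halting on a failed canary.

    deploy(host) installs the update; healthy(host) returns True if the
    host still passes its smoke test afterwards. Returns (hosts updated,
    whether the rollout completed).
    """
    done = 0
    batch = initial
    while done < len(hosts):
        cohort = hosts[done:done + batch]
        for host in cohort:
            deploy(host)
        if not all(healthy(h) for h in cohort):
            # Stop here: the bad build reached only this cohort,
            # not the whole fleet.
            return done, False
        done += len(cohort)
        batch *= factor
    return done, True
```

The point of the exponential schedule is that a crash-on-boot bug like this one gets caught while it has touched 1, 10, or 100 machines instead of 8.5 million.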
-
Uh, no. CrowdStrike on Mac is just as deep, and it slows down my work Mac just as much.
-
They don't want to apologize because they don't want to admit fault and open themselves up to lawsuits.
-
loved the title "The day the world went blue"
-
Let's not forget all the people who probably put their very important BitLocker recovery keys... inside their BitLocker-encrypted drives.
-
Hey grandma, all you have to do is start up in safe mode. Grandma? Grandma?
-
I literally just turned down an offer from Crowdstrike two weeks ago in favor of another job offer…it was a tough decision to make at the time but now it’s definitely looking like I made the right decision! 😬