Network admin life: CrowdStrike cleanup

Published 2024-07-21

All comments (21)
  • Programming errors happen; that’s forgivable. Pushing out a worldwide update without extensive testing and without doing a small test update to a subset of customers is not forgivable. My company would never do that. Never.
  • Took us 7 hours to get the majority of stuff working, but altogether 48 hours with no sleep to get us stabilized. I'm also in healthcare IT. It was a nightmare.
  • We dropped CrowdStrike months ago. Glad we made the right decision. Also, BitLocker shouldn't be used on a Windows server as long as the physical machine is in a secure location. BitLocker just complicates things at the OS level.
  • Is it just me, or is "CrowdStrike" an extremely fitting name for a company that pulls a blunder like this?
  • My rage at everyone downplaying this for CrowdStrike is immeasurable. This is a billion-dollar company, with a B, trusted by critical government, public, and private services, and they shafted every one of them. The lack of outrage from our authorities is absolutely disgusting. It says a lot about the state of cybersecurity and tech in general.
  • @PeterSedesse
    5 days now of getting paid $100/hr to turn off and turn on computers all day.
  • I also work in a hospital as an end-user support tech, and I went in thinking I was going to have an easy Friday since I usually try to finish my tasks and incidents throughout the week, so I even went out for drinks... I was hungover AF coming into work, and even my machine was down, so my entire Friday was just non-stop lol.
  • @corstian_
    Is the CIO thinking of switching away from CrowdStrike? Nice job getting the hospital up and running so quickly.
  • Charge CrowdStrike for the overtime. They only seem to care about their money, not their users' satisfaction.
  • And all government facilities run on old IBM systems. It's pretty obvious why at this point. They won't be affected by their own corruption.
  • Imagine the pressure those CrowdStrike engineers must be under.
  • How much do you want to bet nothing will happen to CrowdStrike?
  • @joshman1019
    The company I work for has several layers of infrastructure, a significant amount of which lives in Azure. Azure VMs were a nightmare to repair. My team and I worked 20 hours beginning at 3 AM on Friday. Rested a bit, worked 20 additional hours, then worked 10 hours Sunday. We had BitLocker issues, Azure permissions issues, unmanaged disk issues, issues with detached disks reattaching with the wrong drive letters, share permissions being lost, etc. It was insanity.
  • That's a solid CIO. Staying in the trenches with the team and not asking for an ETA from the ivory tower. Wish I had leadership like that when I worked in healthcare IT.
  • Hehe, every time some doofus claims to me that VGA is dead, I point to the massive server room and laugh.
  • Crowdstrike CEO was probably losing millions in stock by the minute. No wonder he felt like throwing up.
  • I've dealt with several large incidents in my career (including security incidents). If there's any good from this one, it's that a good chunk of IT pros all over the world got to share the experience together. If there is a glass-half-full way to look at it, we'll all get to hone our response plans together and compare notes. While this was just a bad push, it was also a good dress rehearsal for a supply-chain attack (albeit with a simple, but tedious, resolution). I'm a CIO myself, and was behind the keyboard with my folks on this one. They inspired the heck out of me with how well they worked under pressure. Beyond just giving us in IT an annoying workload, I'm sure we'll learn that real human damage was done. Stories like this involving hospitals certainly point to such damage. I don't want to take anything away from that, but anyone not taking the opportunity to extract great lessons and measure the effectiveness of their response is missing out.
  • @drumitar
    this is not a networking problem, send this ticket back to the help desk!
  • @InquisiitorWH44K
    Feel your pain, and everyone else out there who has had to deal with this fiasco. I got called at 3 AM here in VA. Server guy here at a large regional clinic. We recovered all of our servers in about 14 hours. Had enough resiliency that our hospitals remained open and able to serve our patients. Field services are still cleaning up endpoints.
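
For context on the per-machine cleanup several commenters describe (booting each box and clearing the bad content update), here is a minimal, illustrative sketch in Python of the file-removal step. It assumes the machine has already been booted into Safe Mode or a recovery environment with the system drive unlocked (BitLocker) and mounted; the directory and the C-00000291*.sys pattern follow CrowdStrike's public remediation guidance, and this script is an assumption-laden illustration, not an official tool.

    # Sketch only: delete the faulty CrowdStrike channel files on one machine.
    # Assumes Safe Mode / WinRE, system drive unlocked and mounted, run as admin.
    from pathlib import Path

    def remove_bad_channel_files(windows_dir: str = r"C:\Windows") -> list[Path]:
        """Delete channel files matching C-00000291*.sys and return what was removed."""
        crowdstrike_dir = Path(windows_dir, "System32", "drivers", "CrowdStrike")
        removed: list[Path] = []
        for channel_file in crowdstrike_dir.glob("C-00000291*.sys"):
            channel_file.unlink()          # remove the corrupt content update
            removed.append(channel_file)
        return removed

    if __name__ == "__main__":
        for path in remove_bad_channel_files():
            print(f"Removed {path}")
        print("Done - reboot the machine normally.")

The tedium the commenters describe comes from having to repeat this (plus BitLocker recovery keys, and for Azure VMs disk detach/reattach) on every affected endpoint by hand.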