this post was submitted on 21 Jul 2024
-7 points (11.1% liked)

This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update demanding a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises and neglecting the due diligence necessary to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture where CEOs, vice presidents, and managers are often more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

[–] computergeek125@lemmy.world 0 points 4 months ago (1 children)

Getting production servers back online with a low-level fix is pretty straightforward if your backup system takes regular snapshots of pet VMs: just roll back a few hours. Properly managed cattle? Just redeploy the OS and reconnect to the data. For physical servers of either type, you can restore from backup (potentially with IPMI integration so it happens automatically), but you might end up taking hours to restore all the data, limited by the bandwidth of the giant spinning-rust NAS that was cost-cut to sustain only a few parallel recoveries. Or you could spend a few hours with your server techs IPMI-booting into safe mode, or write a script that sends reboot commands over IPMI until the host OS pings back.
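To make that last option concrete, here's a minimal sketch of the "send reboot commands over IPMI until the host OS pings back" idea. It assumes ipmitool is installed on an admin box with Linux-style ping, and that the hostnames, BMC addresses, and credential file below are placeholders, not anything from a real environment:

```python
#!/usr/bin/env python3
"""Sketch: power-cycle stuck hosts over IPMI until their OS pings back.

Assumes ipmitool is installed, Linux-style ping flags, and that the host
and BMC names plus credentials below are placeholders, not real systems.
"""
import subprocess
import time

# Hypothetical inventory: host OS name -> its BMC/IPMI address
HOSTS = {
    "app01.example.com": "app01-ipmi.example.com",
    "app02.example.com": "app02-ipmi.example.com",
}
IPMI_USER = "admin"                   # placeholder credential
IPMI_PASS_FILE = "/root/.ipmi_pass"   # ipmitool -f reads the password from a file


def power_cycle(bmc: str) -> None:
    """Send a chassis power cycle to the BMC."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc, "-U", IPMI_USER,
         "-f", IPMI_PASS_FILE, "chassis", "power", "cycle"],
        check=True,
    )


def pings_back(host: str) -> bool:
    """True if the host OS answers a single ping (Linux ping flags)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
    ).returncode == 0


for host, bmc in HOSTS.items():
    up = False
    for _attempt in range(3):              # keep sending reboots until it pings back
        power_cycle(bmc)
        deadline = time.time() + 10 * 60   # give each reboot 10 minutes
        while time.time() < deadline:
            time.sleep(30)
            if pings_back(host):
                up = True
                break
        if up:
            break
    print(f"{host}: {'up' if up else 'still down after 3 power cycles'}")
```

This only helps, of course, if the box can come back on its own after a reboot; it's the "try turning it off and on again at scale" piece, not the fix itself.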

All of that can be added to your DR plan, and many companies are probably planning for such an event now. It's like how the US CDC published a zombie-apocalypse preparedness plan to get people thinking about readiness: this was a fire drill for a widespread ransomware attack, and we as a world weren't ready. There are options, but when the damage is this widespread they usually need humans helping them along.

The stinger of this event is how many workstations were affected in parallel. First, there are no good tools for firmware-level remote access that can execute power controls over the internet. For workstations onsite in an office building you have options: a handful of systems can do this over existing networks, but most are highly hardware-vendor dependent.

But do you really want to leave PXE enabled on a workstation that will be brought home and rebooted outside your physical/electronic perimeter? The last few years have shown us that WFH isn't going away, and endpoints that roam the world need to be configured so they aren't easily vulnerable to a low-level OS replacement during the other 99.99% of the time when you aren't getting crypto'd or receiving a bad kernel update.

Even if you trust your users and don't set a firmware password, do you want an untrained user walked blindly over the phone through opening the firmware settings, plugging into their router's Ethernet port, and adding https://winfix.companyname.com as a custom network boot option, all without accidentally deleting the Windows bootloader? And any system that performs that type of check automatically at startup makes itself potentially vulnerable to a network-based attack by a threat actor on a low-security network (such as the network of an untrusted employee, or a device that falls into the wrong hands). I'm not saying such a system is impossible - but it's a huge target for a threat actor to go after, and it needs to be ironclad.

Given all of that, a lot of companies may instead opt to treat their workstations as cattle that would simply be re-imaged if they were crypto'd. If all of your data is on the SMB server/OneDrive/Google/Nextcloud/Dropbox/SaaS whatever, and your users are following the rules, you can fix the problem by swapping a user's laptop - just like the data problem from paragraph one. The remaining problem is one of scale: your IT team doesn't have enough members to handle every user having issues at once.

The reality is that there will still be critical applications and use cases that don't support that methodology (as we in IT collectively, and slowly, try to deprecate them), and that is going to throw a Windows-sized monkey wrench into your DR plan. Do you force your users onto a VDI solution? Those are pretty dang powerful, but as a Parsec user who has operated their computer from several hundred miles away, you can feel when a responsive application isn't responding quite fast enough. That VDI system could be recovered via paragraph one, with Chromebooks (or equivalent) that can self-reimage if needed serving as the thin clients. But would you rather have annoyed users on a slightly less performant system 99.99% of the time, or plan for a widespread issue affecting all systems the other 0.01%? You're probably already spending your energy migrating off legacy apps to make your workstations more like cattle.

All I'm trying to get at with this long-winded counterpoint is that this isn't an easy problem to solve. I'd love to see the day when IT shops are valued enough to get the budget they need, informed by the local experts, and I won't deny that "C-suite went to x and came back with a bad idea" exists. In the meantime, I think we're all going to be working on making sure our update policies have better controls on them instead.

As a closing thought - if, after this, you audited a vendor with a product that could get a system back online into low-level recovery, would you make a budget request for that product? Or does that just create the next CrowdStruckOut event? Do you dual-OS your laptops? How far do you go down the rabbit hole of preparing for low-probability events? That's what you have to weigh - you have to solve enough problems to get your job done, and not everyone is in an industry regulated enough that every problem is required to be solved. So you solve what you can, in order of probability.

[–] timewarp@lemmy.world -1 points 4 months ago

I upvoted because you actually posted technical discussion with accurate details. PXE and remote power management is the way. Many business workstations already ship with some form of out-of-band management (Intel vPro/AMT and the like) in firmware. I agree, though, that because these are remote endpoints it can be more challenging. Still, having a script to reboot endpoints into a recovery environment would be a basic step in any DR scenario. Mounting the OS partition to delete a file and reboot wouldn't be a significant endeavor, although it's one they'd need to make sure they got right. Even so, it would be hard to mess up for anyone with intermediate computer skills... and you'd hope these companies at least have someone trained to do that rather quickly. They'd spend more time writing up a CR explaining all the steps, and then sitting on a conference call with like 100 people, babies crying in the background... and managers insisting they stay on the call while they write the script.
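For concreteness, here's a minimal sketch of that "mount the OS partition, delete a file, and reboot" step, assuming the machine has already been booted into a recovery environment that happens to have Python on it (in practice this would more likely be a couple of lines of batch or PowerShell in WinRE), and using the widely circulated CrowdStrike channel-file pattern:

```python
#!/usr/bin/env python3
"""Sketch: delete the faulty Falcon channel file from a recovery environment.

Assumes the affected Windows volume is mounted so that WIN_DRIVERS below points
at its CrowdStrike driver folder, and that Python exists in the recovery image -
in real life this would more likely be a short batch or PowerShell script.
"""
import glob
import os
import subprocess

# Placeholder: adjust the drive letter to whatever the recovery environment assigns
WIN_DRIVERS = r"C:\Windows\System32\drivers\CrowdStrike"
PATTERN = os.path.join(WIN_DRIVERS, "C-00000291*.sys")   # widely reported bad channel file

removed = 0
for path in glob.glob(PATTERN):
    os.remove(path)        # delete the offending channel file(s)
    removed += 1
print(f"Removed {removed} matching file(s)")

# Reboot back into the normal OS; wpeutil is the WinRE/WinPE reboot command.
subprocess.run(["wpeutil", "reboot"], check=False)
```

The hard part, as the parent comment lays out, isn't this script - it's getting thousands of boot-looping machines into a recovery environment where something like it can run, with BitLocker keys in hand.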