this post was submitted on 19 Jul 2024

IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: "It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers."

He isn't alone. An administrator on Reddit said 40 percent of servers were affected, and roughly 70 percent of client computers (approximately 1,000 endpoints) were stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

[–] Buffalox@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (1 children)

At least no mission-critical services were hit, because nobody would run mission-critical services on Windows, right?
..
RIGHT??

[–] catloaf@lemm.ee 0 points 4 months ago (10 children)

We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

[–] jet@hackertalks.com 0 points 4 months ago (3 children)

The good news is: this is a shakeout test, and they're going to update those playbooks.

[–] jlh@lemmy.jlh.name 0 points 4 months ago

Sysadmins are lucky it wasn't malware this time. Next time could be a lot worse than just a kernel driver with a crash bug.

3rd party companies really shouldn't have access to ship out kernel drivers to millions of computers like this.

[–] Evotech@lemmy.world 0 points 4 months ago

The bad news is that the next incident will be something else they haven't thought about

[–] Quexotic@infosec.pub 0 points 4 months ago

I wish you were right. I really wish you were. I don't think you are. I'm not trying to be a contrarian but I don't think for a large number of organizations that this is the case.

For what it's worth I truly hope that I'm 100% incorrect and everybody learns from this bullshit but that may not be the case.

[–] SapphironZA@sh.itjust.works 0 points 4 months ago (1 children)

We also backup our bitlocker keys with our RMM solution for this very reason.

[–] catloaf@lemm.ee 0 points 4 months ago (1 children)

I hope that system doesn't have any dependencies on the systems it's protecting (auth, mfa).

[–] SapphironZA@sh.itjust.works 0 points 4 months ago

It's outside the primary failure domain.

[–] Zron@lemmy.world 0 points 4 months ago

I remember a few career changes ago, I was a back room kid working for an MSP.

One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

It was our air-gapped encryption key backup.

I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

[–] ripcord@lemmy.world 0 points 4 months ago (2 children)

They also don't seem to have a process for testing updates like these...?

This seems like showing some really shitty testing practices at a ton of IT departments.

[–] catloaf@lemm.ee 0 points 4 months ago (3 children)

Unfortunately, the pace of attack development doesn't really give much time for testing.

[–] USSEthernet@startrek.website 0 points 4 months ago (5 children)

Apparently, from what I was reading, these are forced updates from CrowdStrike; you don't have a choice.

[–] Boozilla@lemmy.world 0 points 4 months ago* (last edited 4 months ago) (1 children)

If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:

  • Shut down the affected instance.
  • Detach the boot volume.
  • Attach the boot volume to a working instance in the same availability zone (us-east-1a or whatever).
  • Remove the file(s) recommended by CrowdStrike:
      • Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.
      • Locate the file(s) matching “C-00000291*.sys” and delete them (unless they have already been fixed by CrowdStrike).
  • Detach the volume and attach it back to the original instance.
  • Boot the original instance.

Alternatively, you can restore from a snapshot prior to when the bad update went out from Crowdstrike. But that is not always ideal.
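The deletion step can be sketched with Python's standard library. This is only an illustration: the directory path is the one CrowdStrike's guidance names, but the `D:` drive letter assumes the rescued volume happens to mount there on the helper instance.

```python
import glob
import os

# Hypothetical mount point of the rescued boot volume on the helper instance.
CROWDSTRIKE_DIR = r"D:\Windows\System32\drivers\CrowdStrike"

def remove_bad_channel_files(directory):
    """Delete the channel files named in CrowdStrike's guidance (C-00000291*.sys).

    Returns the names of the files that were removed, leaving everything
    else in the directory untouched.
    """
    removed = []
    for path in glob.glob(os.path.join(directory, "C-00000291*.sys")):
        os.remove(path)
        removed.append(os.path.basename(path))
    return removed
```

Running it against a directory that has already been fixed simply returns an empty list, so it is safe to re-run.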

[–] Defaced@lemmy.world 0 points 4 months ago (1 children)

A word of caution, I've done this over a dozen times today and I did have one server where the bootloader was wiped after I attached it to another EC2. Always make a snapshot before doing the work just in case.

[–] slacktoid@lemmy.ml 0 points 4 months ago (5 children)

Sounds like the best time to unionize

[–] gari_9812@lemmy.world 0 points 4 months ago (1 children)

Any time is a good time to unionize

[–] slacktoid@lemmy.ml 0 points 4 months ago

Agreed, just here they have them by the metaphorical balls.

[–] Quexotic@infosec.pub 0 points 4 months ago (1 children)

I'm in. This world desperately needs an information workers union. Someone to cover those poor fuckers in the help desk and desktop support as well as the engineers and architects that keep all of this shit running.

Those of us that aren't underpaid are treated poorly. Today is what it looks like if everybody strikes at once.

[–] slacktoid@lemmy.ml 0 points 4 months ago (4 children)

This dude here coming in hot with a name, Information Workers Union (IWU). Love it

Soo are you gonna create the community or am I?

[–] MrNesser@lemmy.world 0 points 4 months ago (4 children)

Lemmy appears to be weathering the storm quite well.....

..probably runs on linux

[–] RootBeerGuy@discuss.tchncs.de 0 points 4 months ago (1 children)

It runs on hundreds of servers. If any of them ran Windows, they might be down, but unless you have an account on one of those, you'd be fine using the rest. That's the whole point of federation.

[–] cygnus@lemmy.ca 0 points 4 months ago

The overwhelming majority of webservers run Linux (it's not even close, like high 90 percent range)

[–] bilb@lem.monster 0 points 4 months ago

I wonder if any Lemmy servers run on Windows without WSL. I can't think of any hard dependencies on Linux, so it should be possible.

[–] db0@lemmy.dbzer0.com 0 points 4 months ago (2 children)

Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.

Lol, can you imagine? It hurts me empathetically even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local KeePass :D

[–] sugar_in_your_tea@sh.itjust.works 0 points 4 months ago* (last edited 4 months ago) (1 children)

That's why the 3-2-1 rule exists:

  • 3 copies of everything on
  • 2 different forms of media with
  • 1 copy off site

For something like keys, that means:

  1. secure server share
  2. server share backup at a different site
  3. physical copy (a USB drive, a printout in a safe, etc.)

Any IT pro should be aware of this "rule." Oh, and periodically test restoring from a backup to make sure the backup actually works.
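As a toy illustration of checking that rule, here is a minimal sketch; the `media`/`offsite` field names are invented for this example:

```python
def satisfies_3_2_1(copies):
    """Check a list of backup copies against the 3-2-1 rule.

    Each copy is a dict like {"media": "disk", "offsite": False};
    these field names are made up for illustration.
    """
    return (
        len(copies) >= 3                            # 3 copies of everything
        and len({c["media"] for c in copies}) >= 2  # on 2 different media types
        and any(c["offsite"] for c in copies)       # with 1 copy off site
    )
```

Three copies on the same disk array in the same building would fail both the media and off-site checks, which is exactly the failure mode described above.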

[–] IphtashuFitz@lemmy.world 0 points 4 months ago (1 children)

We have a cron job that once a quarter files a ticket with whoever is on-call that week to test all our documented emergency access procedures to ensure they’re all working, accessible, up-to-date etc.
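A quarterly schedule like that might look something like this in a crontab; the script path is hypothetical:

```shell
# At 09:00 on the 1st of January, April, July, and October,
# run a script that files the quarterly DR-drill ticket.
0 9 1 1,4,7,10 * /usr/local/bin/file-dr-drill-ticket.sh
```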

[–] kescusay@lemmy.world 0 points 4 months ago (1 children)

Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.

[–] pearsaltchocolatebar@discuss.online 0 points 4 months ago (4 children)

Linux can shit the bed too. You need to maintain a physical copy.

[–] gnutrino@programming.dev 0 points 4 months ago (1 children)

Sure but the chances of your Windows and Linux machines shitting the bed at the same time is less than if everything is running Windows. It's exactly the same reason you keep a physical copy (which after all can break/burn down etc.) - more baskets to spread your eggs across.

[–] pearsaltchocolatebar@discuss.online 0 points 4 months ago (1 children)

Very few businesses are going to spend the money running redundant infrastructure on two different operating systems. Most of them won't even spend the money on a proper DR plan.

[–] Revan343@lemmy.ca 0 points 4 months ago (1 children)

Then they get to suffer the consequences when shit like this happens

[–] stringere@sh.itjust.works 0 points 4 months ago

Then they get to suffer the consequences when shit like this happens

Oh, they are.

[–] StaySquared@lemmy.world 0 points 4 months ago (2 children)

CrowdStrike did take down Linux a few years back... I forget the exact details.

[–] Voroxpete@sh.itjust.works 0 points 4 months ago

Their point is not that Linux can't fail; it's that a mix of Windows and Linux is better than just one. That's what "heterogeneous environment" means.

You should think of your network environment like an ecosystem; monocultures are vulnerable to systemic failure. Diverse ecosystems are more resilient.

[–] noobface@lemmy.world 0 points 4 months ago

Hey Ralph can you get that post-it from the bottom of your keyboard?

[–] gravitas_deficiency@sh.itjust.works 0 points 4 months ago (3 children)

Lmao this is incredible

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

"Most of our comms are down, most execs' laptops are in infinite bsod boot loops, engineers can't get access to credentials to servers."

N.B.: Reddit link is from the source

I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.

[–] CodexArcanum@lemmy.world 0 points 4 months ago (1 children)

Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year.

Sounds like a lot of architects and admins are going to get thrown under the bus for this one.

"Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we're firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka."

[–] Evotech@lemmy.world 0 points 4 months ago

Stupid argument, though. Honestly, it's just chance that CrowdStrike was the vendor to shit the bed; it might as well have been ESET. You should still have procedures for this.

[–] SkybreakerEngineer@lemmy.world 0 points 4 months ago

Fired? I hope they get class-actioned out of existence as a warning to anyone who skimps on QA

[–] MagicShel@programming.dev 0 points 4 months ago

C-suites fired? That's the funniest thing I've heard yet today. They aren't getting fired - they are their own ass-coverage. How can they be to blame when all these other companies were hit as well?

I guess this is a good week for me to still be laid off.

[–] pelletbucket@lemm.ee 0 points 4 months ago

I got super lucky: got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.

[–] jlh@lemmy.jlh.name 0 points 4 months ago (2 children)

Why the fuck does an antivirus need a kernel driver?

[–] catloaf@lemm.ee 0 points 4 months ago

Because that's where filesystem access lives? AV wouldn't do very much good if it could only run from userspace.

[–] SapphironZA@sh.itjust.works 0 points 4 months ago (1 children)

Because the windows OS is inherently insecure with lots of permission elevation opportunities.

[–] Blaster_M@lemmy.world 0 points 4 months ago (1 children)

Pretending Linux privilege escalation doesn't exist... To fight something that gets root, you have to be able to fight at the root level, or the root-level malware can simply nuke the AV from userland.

[–] jlh@lemmy.jlh.name 0 points 4 months ago* (last edited 4 months ago)

Or you could just use kernel namespaces, SELinux, Systemd sandboxing, etc. There is zero need to run in ring 0 for security reasons.

Also, privilege escalation is a lot rarer on Linux than it is on Windows.
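For instance, a few of the systemd sandboxing directives alluded to, in a hypothetical unit file fragment:

```ini
[Service]
# Mount the file system hierarchy read-only for this service.
ProtectSystem=strict
# Give the service its own private /tmp.
PrivateTmp=yes
# Prevent the service and its children from gaining new privileges.
NoNewPrivileges=yes
```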

[–] Max_P@lemmy.max-p.me 0 points 4 months ago (5 children)

This is why every machine I manage has a second boot option to download a small recovery image off the Internet and phone home with a shell. And a copy of it on a cheap USB stick.

Worst case I can boot the Windows install in a VM with the real disk, do the maintenance remotely. I can reinstall the whole thing remotely. Just need the user to mash F12 during boot and select the recovery environment, possibly input WiFi credentials if not wired.

I feel like this should be standard if you have a lot of remote machines in the field.

[–] Cossty@lemmy.world 0 points 4 months ago (25 children)

I didn't know so many servers still run Windows.
