
This is an unpopular opinion, and I get why: people crave a scapegoat. CrowdStrike undeniably pushed a faulty update that demanded a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly at companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises while neglecting the due diligence needed to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture in which CEOs, vice presidents, and managers are more easily swayed by vendor kickbacks, gifts, and lavish trips than by innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

[–] scytale@lemm.ee 0 points 4 months ago (last edited 4 months ago)

For sure there is a problem, but this issue left computers unable to boot at all, so how are you going to remotely reboot machines you can't even connect to? Sure, there are ways, like one other comment explained, but they're complicated and expensive enough that not even all of the biggest corporations do them.

Contrary to what a lot of people seem to think, CrowdStrike is pretty effective at what it does; that's why they're big in the corporate IT world. I've worked with companies where the security team had only minority influence on choosing vendors, with the finance team as the major decision maker. So the cheapest vendor wins, and CrowdStrike is not exactly cheap. If you ask most IT people, their experience is the opposite of bloated budgets: a lot of IT teams are understaffed, lack the tools to do their work, and have to beg every budget season.

The failure here is hygiene, yes, but in development and testing processes. Something that wasn't thoroughly tested got pushed to production and released, and that applies to both CrowdStrike and their customers. That's not uncommon (hence the programmer memes); it just happened this time to one of the most prevalent endpoint security solutions in the world, one that needs kernel-level access to do its job. I agree that IT departments should be testing software updates before they deploy, so it's also on them to at least run updates in a staging environment first. But again, this is a time-critical tool (anti-malware), and companies need the capability to deploy updates fast, so you have to weigh speed against reliability.
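To make that trade-off concrete, here's a minimal sketch of a ring-based rollout: push to successively larger groups and halt if any ring's failure rate crosses a threshold. This is not CrowdStrike's actual pipeline; the ring sizes, threshold, soak time, and the `deploy_to_ring` stub are all hypothetical stand-ins.

```python
import time

# Hypothetical rings: canary first, then a staging slice, then the fleet.
RINGS = [
    ("canary", 10),
    ("staging", 200),
    ("fleet", 50_000),
]
MAX_FAILURE_RATE = 0.01   # halt the rollout if >1% of a ring fails
SOAK_SECONDS = 300        # wait between rings: the speed vs. reliability knob

def deploy_to_ring(ring: str, host_count: int) -> int:
    """Stand-in for the real push; returns how many hosts failed health checks."""
    return 0  # replace with actual deployment + health-check logic

def rollout() -> bool:
    for ring, host_count in RINGS:
        failures = deploy_to_ring(ring, host_count)
        if failures / host_count > MAX_FAILURE_RATE:
            print(f"halting: {failures}/{host_count} hosts failed in {ring!r}")
            return False
        time.sleep(SOAK_SECONDS)  # let problems surface before widening
    return True

if __name__ == "__main__":
    print("rollout completed" if rollout() else "rollout halted")
```

The soak time is exactly the cost being argued about: every minute spent watching a canary ring is a minute the fleet goes without a time-critical malware signature.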

[–] timewarp@lemmy.world -1 points 4 months ago

Booting a system or recovery image remotely over IPMI or a similar interface is not complicated or expensive. It is one of the most basic server management tasks. That you're acting like the concept is challenging seriously concerns me, and I wonder how anyone who thinks that way gets hired.
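For what it's worth, here's roughly what that looks like: a minimal sketch (not anyone's actual tooling) that shells out to the standard `ipmitool` client to check power state and force a one-time boot into a network-served recovery image. The BMC address and credentials are placeholders.

```python
import subprocess

BMC_HOST = "10.0.0.17"   # placeholder BMC address
BMC_USER = "admin"       # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the machine's BMC over the network."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout.strip()

# The BMC answers even while the OS itself is boot-looping:
print(ipmi("chassis", "power", "status"))    # e.g. "Chassis Power is on"

# One-time boot override to PXE (a network-served recovery image),
# then a power cycle so the override takes effect.
ipmi("chassis", "bootdev", "pxe")
ipmi("chassis", "power", "cycle")
```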

There are exceptions, granted. However, the IT budget at most mid- to large-size corporations is extremely bloated. I don't think you can argue otherwise in good faith unless you can show me a real budget that isn't. Do you have one?

These companies don't even attract smart talent; they attract people who are content to do nothing and collect a paycheck. Smart people don't stay, because the bureaucracy and management are soul-sucking. It took me a while to accept that too. I used to be optimistic, thinking there was a logical explanation and the problems could be fixed. Turns out these companies don't want to be fixed; they like being broken. Like I said, it starts from the top down. A lot of the staff wouldn't even have jobs if people actually tried to make things better.

[–] scytale@lemm.ee 0 points 4 months ago (last edited 4 months ago)

It is one of the most basic server management tasks.

Except these were endpoint machines, not servers. Things ground to a halt not because servers went down, but because the computers end users interacted with crashed and wouldn't boot, kiosk and POS systems included.

That you're acting like the concept is challenging seriously concerns me, and I wonder how anyone who thinks that way gets hired.

Damn, I guess all the IT people running the systems that were affected aren’t fit for the job.

unless you can show me a real budget that isn't. Do you have one?

Can you show me the bloated budgets and where they are allocated at those mid- to large-size corporations? You are the one who insinuated that. All I said is that, at every company I've worked with, we always had to fight hard for budget, because the sales and marketing departments bring in the $$$ and that's all the executives like to see, so those departments get the budget. If your entire working experience is that your IT team had too much budget, consider yourself privileged.

It’s weird how you get all defensive and devolve into insults when people are just responding to your post.

[–] timewarp@lemmy.world -1 points 4 months ago

Except these were endpoint machines, not servers. Things ground to a halt not because servers went down, but because the computers end users interacted with crashed and wouldn't boot, kiosk and POS systems included.

Endpoint machines still have IPMI-type interfaces and PXE. When you manage thousands of machines, if you treat them all like pets, you're doing it wrong; see the sketch below.
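As a sketch of that cattle-not-pets approach, assuming every endpoint has a network-reachable BMC (a big assumption for desktop fleets), you fan the same recovery boot out across an inventory instead of touching machines one at a time. The inventory file, credentials, and worker count here are hypothetical.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

BMC_USER = "admin"       # placeholder credentials
BMC_PASS = "changeme"

def boot_into_recovery(bmc_host: str) -> tuple[str, bool]:
    """Point one machine at the PXE recovery image, then power-cycle it."""
    base = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
            "-U", BMC_USER, "-P", BMC_PASS]
    try:
        subprocess.run(base + ["chassis", "bootdev", "pxe"],
                       check=True, capture_output=True, timeout=30)
        subprocess.run(base + ["chassis", "power", "cycle"],
                       check=True, capture_output=True, timeout=30)
        return bmc_host, True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return bmc_host, False

# bmc_inventory.txt: one BMC address per line (hypothetical inventory).
with open("bmc_inventory.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

# Herd, don't hand-feed: remediate thousands of machines in parallel.
with ThreadPoolExecutor(max_workers=64) as pool:
    for host, ok in pool.map(boot_into_recovery, hosts):
        print(f"{host}: {'recovery boot issued' if ok else 'FAILED'}")
```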

Damn, I guess all the IT people running the systems that were affected aren’t fit for the job.

Is it going to take them several days to weeks to recover? Then they aren't fit for the job and should consider another profession.

Can you show me the bloated budgets and where they are allocated at those mid- to large-size corporations?

All of them. Form 10-K filings are publicly available for public corporations. The ones claiming they will be impacted for a while are the ones I'm most concerned about.

It’s weird how you get all defensive and devolve into insults when people are just responding to your post.

I spent a career arguing with sales reps who had one goal in mind: making the biggest commission possible. I sound argumentative because those sales reps had every tool imaginable and would show up out of nowhere.