My coworkers and I have to remap the network drives to our office-wide file systems 2-3 times a day just to access our files. That's the main file storage (some teams have moved some stuff to Google Drive, but that doesn't work for sensitive info).
I know it's a bit of a silly example, but in the public school in Korea where I taught for a while, teachers would write their Windows passwords on post-its and stick them to the monitors. Haha!
The IT guy wasn't really an IT guy.
A non-profit where the executive director is the only IT person (she's not tech-savvy at all). It's horrific.
All Macs
Office Depot. They are still using IBM machines from the 90s with receipt printers the size of a shoebox.
The recent Falcon cock-up?
I actually disagree. I only know a little about Crowdstrike's internals, but they're a company that is trying to do the whole DevOps/agile bullshit the right way. Unfortunately, they've undermined the practice for the rest of us working for dinosaurs trying to catch up.
Crowdstrike's problem wasn't a quality escape; that'll always happen eventually. Their problem was with their rollout processes.
There shouldn't have been a circumstance where the same code got delivered worldwide in the course of a day. If you were sane, you'd canary it at first and exponentially increase the rollout from there. Any initial error should have meant a halt in further deployments.
Canary isn't the only way to solve it, by the way. Just an easy fix in this case.
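To make the idea concrete, here's a minimal sketch of what a staged (canary) rollout with exponential expansion and halt-on-error might look like. The `deploy_batch()` and `error_rate()` helpers are hypothetical placeholders for illustration, not anything from Crowdstrike's actual tooling, and the thresholds are made up:

```python
import time

ERROR_THRESHOLD = 0.01   # abort if more than 1% of the current wave reports errors
SOAK_SECONDS = 3600      # let each wave run for a while before expanding

def staged_rollout(hosts, deploy_batch, error_rate, initial=10):
    """Push an update in exponentially growing waves, halting on any regression.

    deploy_batch(batch) -> pushes the update to the given hosts (hypothetical)
    error_rate(batch)   -> fraction of those hosts reporting errors (hypothetical)
    """
    deployed = 0
    wave = initial
    while deployed < len(hosts):
        batch = hosts[deployed:deployed + wave]
        deploy_batch(batch)            # push only to this wave, not worldwide
        time.sleep(SOAK_SECONDS)       # soak period before checking health
        if error_rate(batch) > ERROR_THRESHOLD:
            # Halt all further deployment; rollback/investigation is on the operator.
            raise RuntimeError(f"rollout halted after {deployed + len(batch)} hosts")
        deployed += len(batch)
        wave *= 2                      # exponential expansion between waves
```

The point isn't this exact code, it's the shape: small first wave, a soak period, a health check that can stop everything, and growth only after each wave looks clean.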
Unfortunately, what is likely to happen is that they'll find the poor engineer who made the commit that led to this and fire them as a scapegoat, instead of inspecting the culture and processes that allowed it to happen and fixing those.
People fuck up and make mistakes. If you don't expect that in your business you're doing it wrong. This is not to say you shouldn't trust people; if they work at your company you should assume they are competent and have good intent. The guard rails are there to prevent mistakes, not bad/incompetent actors. It just so happens they often catch the latter.