this post was submitted on 03 Feb 2025
120 points (100.0% liked)

Linux

top 26 comments
[–] Evil_Shrubbery@lemm.ee 3 points 8 hours ago* (last edited 8 hours ago)

We underfund our heroes, don't we?

(Also, that monitor's model name in the thumbnail: "UHD 4K 2K" :D)

[–] merthyr1831@lemmy.ml 10 points 22 hours ago (5 children)

Yet another reason to back Flatpaks and distro-agnostic software packaging. We can't afford to use dozens of build systems to maintain dozens of functionally identical application repositories.

[–] chaoticnumber@lemmy.dbzer0.com 1 points 1 hour ago* (last edited 1 hour ago)

This is such a superficial take.

Flatpaks have their use-case. Alpine has its use-case as a small footprint distro, focused on security. Using flatpaks would nuke that ethos.

Furthermore, they need those servers to build their core and base system packages. There is no distro out there that uses flatpaks or appimages for their CORE.

Any distro needs to build their toolchain, libs and core. Flatpaks are irrelevant to this discussion.

At the risk of repeating myself, Flatpaks are irrelevant to Alpine because it's a small-footprint distro used a lot in container base images, and containers use their own packaging!

Furthermore, Flatpaks are literal bloat compared to Alpine's apk packages, which focus on security and minimalism.

Edit: Flatpak literally uses Alpine to build its packages. No Alpine, no Flatpaks. Period.

Flatpaks have their use. This is not that. Check your ignorance.

[–] harsh3466@lemmy.ml 5 points 11 hours ago

I'm a fan of flatpaks, so this isn't to negate your argument. Just pointing out that Flathub is also using Equinix.

Source

Interlude: Equinix Metal née Packet has been sponsoring our heavy-lifting servers doing actual building for the past 5 years. Unfortunately, they are shutting down, meaning we need to move out by the end of April 2025.

[–] ubergeek@lemmy.today 7 points 17 hours ago

Pretty sure Flatpak uses Alpine as a bootstrap... Flatpak, after all, brings along an entire distro to run an app.

[–] balsoft@lemmy.ml 4 points 17 hours ago

I don't think it's a solution for this, it would just mean maintaining many distro-agnostic repos. Forks and alternatives always thrive in the FOSS world.

[–] Mwa@lemm.ee 1 points 14 hours ago* (last edited 14 hours ago)

Let the community package it as deb, rpm, etc. while the devs focus on Flatpak/AppImage.

[–] utopiah@lemmy.ml 7 points 21 hours ago (1 children)
[–] KarnaSubarna@lemmy.ml 4 points 13 hours ago

That solves the media distribution related storage issue, but not the CI/CD pipeline infra issue.

[–] eskuero@lemmy.fromshado.ws 15 points 1 day ago (2 children)

First FreeDesktop and now Alpine, damn

[–] qaz@lemmy.world 16 points 1 day ago

Equinix seems to be shutting down their bare-metal service in its entirety. All projects using it will be affected.

[–] ryannathans@aussie.zone 9 points 1 day ago (1 children)

How are they so small and underfunded? My hobby home servers and internet connection satisfy their simple requirements

[–] colournoun@beehaw.org 14 points 1 day ago (3 children)

800TB of bandwidth per month?

[–] DaPorkchop_@lemmy.ml 4 points 12 hours ago

That's ~2.4Gbit/s. There are multiple residential ISPs in my area offering 10Gbit/s up for around $40/month, so even if we assume the bandwidth is significantly oversubscribed, a single cheap residential internet plan should be able to handle it no problem (let alone a datacenter setup, which probably has 100Gbit/s links or faster).
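For what it's worth, the back-of-the-envelope conversion in the thread checks out. A quick sketch, assuming the quoted 800 TB/month of egress, a 30-day month, and decimal units (1 TB = 10^12 bytes):

```python
# Convert a monthly transfer cap into an average sustained rate.
TB = 10**12
SECONDS_PER_MONTH = 30 * 24 * 3600  # assuming a 30-day month

bytes_per_second = 800 * TB / SECONDS_PER_MONTH
megabytes_per_second = bytes_per_second / 10**6
gigabits_per_second = bytes_per_second * 8 / 10**9

print(f"{megabytes_per_second:.0f} MB/s")   # roughly 300 MB/s
print(f"{gigabits_per_second:.2f} Gbit/s")  # roughly 2.5 Gbit/s
```

So the "~2.4 Gbit/s" and "around 300 megabytes per second" figures quoted later in the thread are both the same 800 TB/month expressed as an average; actual peaks would be higher.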

[–] chaoticnumber@lemmy.dbzer0.com 6 points 1 day ago* (last edited 1 day ago) (3 children)

That averages out to around 300 megabytes per second. No way anyone has that at home on a commercial connection.

One of the best commercial fiber connections I ever saw provided 50 megabytes per second upload, and that was best effort.

No way in hell you can satisfy that bandwidth requirement at home. Let's not mention that they need 3 nodes with that kind of bandwidth.

[–] Evil_Shrubbery@lemm.ee 2 points 8 hours ago* (last edited 8 hours ago)

Yeah, that's almost 150% more than my (theoretical) bandwidth at home (Gbps, but I live alone & just don't want to pay much), and that's just assuming constant workload (peaks must be massive).

This is indeed considerable, yet hopefully solvable. It certainly is from the link perspective.

[–] DaPorkchop_@lemmy.ml 4 points 13 hours ago* (last edited 13 hours ago) (1 children)

50MB/s is like 0.4Gbit/s. Idk where you are, but in Switzerland you can get a symmetric 10Gbit/s fiber link for like 40 bucks a month as a residential customer. Considering 100Gbit/s and even 400Gbit/s links are already widely deployed in datacenter environments, 300MB/s (or 2.4Gbit/s) could easily be handled even by a single machine (especially since the workload basically consists of serving static files).

[–] chaoticnumber@lemmy.dbzer0.com 1 points 9 hours ago

So I have to move to Switzerland then, got it.

[–] merthyr1831@lemmy.ml 2 points 22 hours ago (1 children)

Probably not one person, but that could be distributed.

[–] chaoticnumber@lemmy.dbzer0.com 2 points 22 hours ago

Like folding at home :D

[–] ryannathans@aussie.zone 5 points 1 day ago (1 children)

On my current internet plan I can move about 130TB/month and that's sufficient for me, but I could upgrade my plan to satisfy the requirement.

[–] KarnaSubarna@lemmy.ml 1 points 13 hours ago (1 children)

Your home server might have the required bandwidth but not the requisite infra to support the server load (hundreds of parallel connections/downloads).

Bandwidth is only one aspect of the problem.

[–] ryannathans@aussie.zone 3 points 11 hours ago

Ten gig fibre for internal networking, enterprise SFP+ network hardware, big meaty 72 TB FreeBSD ZFS file server with plenty of cache, backup power supply and UPS

The tech they require really isn't expensive anymore

[–] racketlauncher831@lemmy.ml 5 points 1 day ago