TheHobbyist

joined 1 year ago
[–] TheHobbyist@lemmy.zip 9 points 1 hour ago

I fully understand this is a controversial take, but I think it is important to acknowledge that not all advertising is the same. While I dislike all forms of advertising, I only take issue with the unethical kind, which is based on surveillance. I have no ethical concern with contextual advertising, which is how some search engines serve ads: for example, showing food ads when you search for food.

But it is also critically important that extensions remain a part of the browser, to give a certain level of control to the person navigating the web instead of just allowing any website to freely track our activities.

I don't know what the path forward is for Mozilla. Google is unlikely to be able to fund Mozilla the way it has until now: a recent ruling deemed Google a monopolist, in part because of the way it buys default status everywhere it can, and that default-search deal was a major funding source for Mozilla. They need to figure out financing, and while it is easy to criticize, we must also recognize how hard it is to find sustainable, substantial funding sources for Mozilla. I really wish I had an answer... Can it somehow depend exclusively on its users for donations? Should it sell support services? Should it branch into more lucrative areas? If yes, which ones? It may need to be a combination of these, but for now I'm personally drawing a blank. We need to get together on this, because if we can't help Mozilla, can we help anyone who might fall into this situation?

[–] TheHobbyist@lemmy.zip 3 points 1 day ago

I think the only thing to keep in mind is that Nvidia's proprietary drivers are the ones that work better on Linux, whereas for AMD it is the open-source ones.

I have an Nvidia card and the prop. drivers have worked flawlessly for me for years.

I know the open source drivers are closing the gap for Nvidia, and they also seem to be playing ball on that front. But for AMD the open source drivers are definitely the way to go from what I understand.

[–] TheHobbyist@lemmy.zip 8 points 2 days ago (7 children)

This was already quite a significant challenge compared to socketed RAM, but now with Lunar Lake I guess this is simply impossible? The RAM chips are on the same package as the CPU...

[–] TheHobbyist@lemmy.zip 13 points 1 week ago (4 children)

You only mention your laptop running out of space, so why does that mean you need a new computer? Does your laptop have a soldered SSD? If not, I think the first reflex should be to see what storage you can put in your laptop so that you can keep using it rather than discarding it :(

[–] TheHobbyist@lemmy.zip 0 points 1 week ago

I'm not sure this applies to Switzerland, but Framework now seems to allow freight forwarding within the EU (this also seems recent, as most older discussions say it was prohibited).

https://knowledgebase.frame.work/en_us/eu-unsupported-SJByUb7a

Also, I think delivery to Switzerland is not too far out, as they finalized a keyboard layout a while ago, which is a necessary step before delivery.

https://community.frame.work/t/request-review-of-norwegian-portuguese-swedish-swiss-slovenian-thai-hungarian-and-danish-keyboards/26949

(Notice that Sweden is in that list, and laptops can now officially be ordered there too.)

I'm hoping availability expands to these countries soon!

[–] TheHobbyist@lemmy.zip 16 points 2 weeks ago

I think they mean that Signal on desktop does not encrypt its content at rest, which was acknowledged and not an issue they were intending to address.

But it seems to have recently changed? I'm learning this as I write, since I wanted to find a source.

Source: https://candid.technology/signal-encryption-key-flaw-desktop-app-fixed/

[–] TheHobbyist@lemmy.zip 0 points 4 weeks ago (2 children)

We had captchas to solve that a while ago. Turns out, some people are willing to be paid a miserable wage to solve captchas on behalf of bots. How would this be different? Being human becomes a monetizable service that can simply be rented out to automated systems. No "personhood" check can prevent this.

[–] TheHobbyist@lemmy.zip 2 points 1 month ago

I think early last year they hyped a potential partnership for a custom GrapheneOS device; does anyone know what happened to that?

[–] TheHobbyist@lemmy.zip 11 points 1 month ago (1 children)

I'm sorry, what? That does not make sense to me.

[–] TheHobbyist@lemmy.zip 2 points 1 month ago* (last edited 1 month ago) (1 children)

Would moving to a European country be something you would consider? Europe has stronger privacy laws, and as a Latin American (assumed) you have easier entry through Spain, which offers facilitated access to the job market. But I do concede that, depending on how much you value your privacy against everything else, this option may be out of the question?

[–] TheHobbyist@lemmy.zip 1 points 1 month ago (10 children)

You're right, that's how it works in almost all messaging apps. But Signal implemented sealed sender specifically to counter this.

You can read more about it here: https://signal.org/blog/sealed-sender/

I encourage you to read the first paragraph, which is important in the context of our conversation.

[–] TheHobbyist@lemmy.zip 5 points 1 month ago (2 children)

Can you explain further? As a red flag for open source, federation and such, I can't disagree. But for privacy and security? I'm not convinced.

 

Hi folks,

I'm seeing that there are multiple services which externalise the role of "identity provider" (e.g. login with Facebook, Google or whatnot).

In my case, I am curious about Tailscale, a VPN service which allows one to choose an identity provider/SSO from Google, Microsoft, GitHub, Apple and OIDC.

How can I find out what data is actually communicated to the identity provider? Its task should simply be to decide whether I am who I claim to be, nothing more. But I'm guessing there may be some subtleties.

In the case of Tailscale, would the identity provider know where I'm trying to connect? Or more?
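One thing I figure I can do to partially answer my own question: when the login flow redirects my browser to the identity provider, the query string of that redirect URL is at least the visible part of what gets handed over, so I can pull it apart. The URL below is a made-up example (placeholder client_id, redirect_uri and state), not Tailscale's real request:

# Made-up OIDC authorization redirect; every value here is a placeholder.
url='https://accounts.google.com/o/oauth2/v2/auth?client_id=EXAMPLE_ID&response_type=code&scope=openid%20email%20profile&redirect_uri=https%3A%2F%2Flogin.example.com%2Fcallback&state=RANDOM_STATE'
# Print each query parameter on its own line to see what the provider receives.
echo "${url#*\?}" | tr '&' '\n'

My (possibly wrong) reading is that the provider mainly learns which client is asking, when, and which scopes are requested (typically openid, email, profile); whether anything more flows to it afterwards is exactly what I'm unsure about.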

Answers and insights much appreciated! The topic does not seem to have much information online.

 

Hi folks, I'm looking for a specific YouTube video which I watched around 5 months ago.

The gist of the video is that it compared the transcoding performance of an Intel iGPU used natively versus passed through to a VM. From what I recall there was a significant performance hit, around 50% or so (in terms of transcoding fps). I believe the test was performed with Jellyfin. I don't remember whether it was using XCP-ng, Proxmox or another OS. I don't remember which channel published the video nor when it was published, just that I watched it sometime between April and June this year.

Anyone recall or know what video I'm talking about? Possible keywords include: quicksync, passthrough, sriov, iommu, transcoding, iGPU, encoding.

Thank you in advance!

 

Hi y'all,

I am exploring TrueNAS and configuring some ZFS datasets. As ZFS provides some parameters to fine-tune its setup to the type of data, I was thinking it would be good to take advantage of them. So here I am with the seemingly simple task of choosing the appropriate "recordsize".

Initially I thought, well, this is simple: the dataset is meant to store videos, movies and TV shows for a Jellyfin docker container, so in general large files, and a recordsize of 1M sounds like a good idea (as suggested in Jim Salter's cheatsheet).

Out of curiosity, I ran Wendell's magic command from Level1Techs to get a sense of the file size distribution:

find . -type f -print0 | xargs -0 ls -l | awk '{ n=int(log($5)/log(2)); if (n<10) { n=10; } size[n]++ } END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' | sort -n | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } } { a[1]=$1; a[2]=0; human(a); printf("%3d%s: %6d\n", a[1],substr("kMGTEPYZ",a[2]+1,1),$2) }'

Turns out, that's when I discovered it was not so simple. The directory is obviously filled with videos, but also with lots of tiny files: subtitles, NFOs, and small illustration images, all valuable for Jellyfin's media organization.

That's where I'm at. The way I see it, there are several options:

    1. Let's not overcomplicate it: just run with the default 128K ZFS dataset recordsize and roll with it. It won't be such a big deal.
    2. Let's try to be clever about it: make 2 datasets, one with a recordsize of 4K for the small files and one with a recordsize of 1M for the videos, then pick one as the "main" dataset and symlink each file from the other dataset into it so that all content is "visible" from within one file structure (rough sketch after this list). I haven't dug much into how I would automate it, and it might not play nicely with the *arr suite? Perhaps overly complicated...
    3. Make all video files MKVs, embed the subtitles, and rename the videos to make NFOs as unnecessary as possible for movies and TV shows (though NFOs will still be useful for private videos, YT downloads, etc.).
    4. Other?
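For option 2, here is roughly what I have in mind, just as a sketch (the pool name "tank", the dataset names and the file name are placeholders, not my actual setup):

# One dataset tuned for large sequential video files, one for tiny sidecar files.
zfs create -o recordsize=1M tank/media-video      # movies, episodes
zfs create -o recordsize=4K tank/media-sidecars   # subtitles, NFOs, artwork
# Small files would then be symlinked into the main tree so that Jellyfin
# still sees a single directory structure, e.g.:
ln -s /tank/media-sidecars/Show.S01E01.nfo /tank/media-video/Show.S01E01.nfo

The symlinking is exactly the part I would have to automate, which is why I suspect it may fight with the *arr suite.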

So what do you think? And also, how have you personally set it up? I would love to get some feedback, especially if you are also using ZFS and have a video library with a dedicated dataset. Thanks!

Edit: Alright, so I found the following post by Jim Salter which goes into more detail regarding recordsize. It clears up my misconception: recordsize is not the same as a fixed block size, it can easily be changed at any time, and it is effectively just an upper limit on the size of the chunks of data being read and written. So I'll stick with a 1M recordsize and leave it at that despite having many smaller files, because what matters most is streaming the larger files effectively. Thank you all!
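In case it helps anyone else landing here, this is all it should take (the dataset name is just an example, not my actual layout), and from what I understand it only applies to blocks written after the change; existing files keep their old record size until they are rewritten:

# recordsize is an ordinary dataset property; "tank/media" is a placeholder name.
zfs set recordsize=1M tank/media
# Verify the property took effect.
zfs get recordsize tank/media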
