this post was submitted on 26 Sep 2024
Technology
Who cares? It's going to be hashed anyway. If the same user can generate the same input, it will result in the same hash. If another user can't generate the same input, well, that's really rather the point. And I can't think of a single backend, language, or framework that doesn't treat a single Unicode character as one character. Byte length of the character is irrelevant as long as you're not doing something ridiculous like intentionally parsing your input in binary and blithely assuming that every character must be 8 bits in length.
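A quick sketch of the point about determinism, using Python's standard `hashlib` for illustration (real password storage should use a dedicated password-hashing function, not bare SHA-256): identical Unicode input always yields an identical hash, no matter how many bytes each character occupies.

```python
import hashlib

def digest(password: str) -> str:
    # Hash functions operate on bytes, so encode to UTF-8 first.
    # The byte length of individual characters is irrelevant to the result.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# The same input always produces the same hash, multi-byte characters included.
assert digest("pässwörd") == digest("pässwörd")

# A different input produces a different hash.
assert digest("pässwörd") != digest("password")
```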
It matters for bcrypt, which truncates input at 72 bytes. Not 72 characters, 72 bytes.
That said, I also think it doesn't matter much. Reasonable-length passphrases that the old Latin-1 charset could cover fit in that easily. If you're talking about CJK languages, a single character often carries a whole word's worth of meaning, so you're packing a lot of entropy into each one. 72 bytes is already beyond what's needed for security; it's diminishing returns at that point.
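To make the byte-vs-character distinction concrete, here's a sketch comparing character counts with UTF-8 byte counts against that 72-byte cutoff (the passphrases are made up for illustration):

```python
BCRYPT_LIMIT = 72  # bcrypt truncates password input after 72 bytes

phrases = {
    "latin-1":  "correct horse battery staple",   # ASCII: 1 byte per character
    "accented": "süßes Pférd Batterie Klammer",   # mixed: 1-2 bytes per character
    "cjk":      "正馬電池釘書針",                   # CJK: 3 bytes per character in UTF-8
}

for name, phrase in phrases.items():
    raw = phrase.encode("utf-8")
    print(f"{name}: {len(phrase)} chars, {len(raw)} bytes, "
          f"fits in {BCRYPT_LIMIT} bytes: {len(raw) <= BCRYPT_LIMIT}")
```

All three fit comfortably; a passphrase only hits the limit around 24 CJK characters, and by that point the entropy is far beyond what brute force can touch anyway.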