this post was submitted on 23 Jun 2024
1 points (100.0% liked)

Technology

59651 readers
2640 users here now

founded 1 year ago
[–] just_another_person@lemmy.world 0 points 5 months ago (7 children)

Is this a question?

We haven't even come close to exhausting 64-bit addresses yet. And if you think a bigger bit width automatically makes things faster, it's technically the opposite: wider values eat more cache and memory bandwidth for nothing.

[–] Cethin@lemmy.zip 0 points 5 months ago (2 children)

Yeah, 64 bits handles almost all the use cases we have. Sometimes we want double the precision (a double) or double the length (a long), but we can do that without the CPU being 128-bit. Going the other way, to half width, is the harder part. Sure, 128-bit would be slightly faster for some things, but not significantly.
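To make that concrete: a 64-bit CPU already does 128-bit integer math by chaining two 64-bit operations (x86 has an add-with-carry instruction, `adc`, for exactly this). A minimal Python sketch of the idea; the helper name is invented for illustration:

```python
MASK64 = (1 << 64) - 1

def add128(a_lo, a_hi, b_lo, b_hi):
    """Add two 128-bit values given as (low, high) 64-bit halves."""
    total = a_lo + b_lo
    lo = total & MASK64
    carry = total >> 64            # 1 if the low halves overflowed
    hi = (a_hi + b_hi + carry) & MASK64
    return lo, hi

# (2^64 - 1) + 1 carries into the high half:
assert add128(MASK64, 0, 1, 0) == (0, 1)
```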

[–] sugar_in_your_tea@sh.itjust.works 0 points 5 months ago (1 children)

And you can get 128-bit data to the CPU, so those things can be fast if we need them to be.

[–] henfredemars@infosec.pub 0 points 5 months ago

And we have wide instructions that can process this data, such as for multimedia applications.

Addressing and memory size has been the historic motivator for wider registers, but it’s probably not going to be in my lifetime that I see the need for 128.

[–] jlh@lemmy.jlh.name 0 points 5 months ago

There are plenty of instructions for processing integers and floating-point numbers from 8 bits up to 512 bits with a single instruction and register. There's been a lot of work on packed math instructions for neural network inference in particular.
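A pure-Python illustration of what "packed" means: eight independent 8-bit adds carried out inside one 64-bit integer (SWAR, the software cousin of those instructions; the masks and function name here are my own):

```python
def packed_add_u8(x, y):
    """Add eight unsigned 8-bit lanes packed into 64-bit ints, wrapping per lane."""
    MASK_LO7 = 0x7F7F7F7F7F7F7F7F   # low 7 bits of every lane
    MASK_HI  = 0x8080808080808080   # top bit of every lane
    # Add the low 7 bits of each lane, then fold the top bits back in with
    # XOR, so carries never cross a lane boundary.
    low = (x & MASK_LO7) + (y & MASK_LO7)
    return low ^ ((x ^ y) & MASK_HI)

# Every 0xFF lane + 0x01 wraps to 0x00, independently of its neighbours:
assert packed_add_u8(0x0101010101010101, 0xFFFFFFFFFFFFFFFF) == 0
```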

[–] jwr1@kbin.earth 0 points 5 months ago (1 children)

It's a link to an article I found interesting. It basically details why we're still using 64-bit CPUs, just as you mentioned.

[–] Technus@lemmy.zip 0 points 5 months ago (1 children)

We don't even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anything between 40 and 52 depending on the specific configuration.
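For scale, 48 bits of virtual address is already 256 TiB, and x86-64 requires the unused upper bits to be a sign-extension of bit 47 (so-called "canonical" addresses). A quick model of that check, assuming 4-level paging; the function name is mine:

```python
def is_canonical_x86_64(addr):
    """x86-64 with 4-level paging: bits 63..47 must all equal bit 47."""
    top = addr >> 47                   # the 17 bits that must agree
    return top == 0 or top == (1 << 17) - 1

assert 2**48 == 256 * 2**40                            # 48 bits = 256 TiB
assert is_canonical_x86_64(0x0000_7FFF_FFFF_FFFF)      # top of the lower half
assert is_canonical_x86_64(0xFFFF_8000_0000_0000)      # bottom of the upper half
assert not is_canonical_x86_64(0x0000_8000_0000_0000)  # inside the "hole"
```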

[–] just_another_person@lemmy.world 0 points 5 months ago (3 children)

What about this is not my point?

[–] b34k@lemmy.world 0 points 5 months ago (1 children)

I bet you’re fun at parties.

[–] just_another_person@lemmy.world 0 points 5 months ago (1 children)

I bet you just recycle thoughts and comments because you have none of your own, but want to feel that "upside" of your sad interaction with people on the Internet.

[–] Technus@lemmy.zip 0 points 5 months ago

Jesus Christ, what crawled up your ass and died?

[–] Technus@lemmy.zip 0 points 5 months ago (1 children)

I actually added detail that wasn't already discussed in the article?

[–] AlphaAutist@lemmy.world 0 points 5 months ago (1 children)

I actually didn’t know that about addressing before your comment and so I found it very interesting, thanks

[–] MrQuallzin@lemmy.world 0 points 5 months ago

I think they were just adding to the conversation

[–] unreachable@lemmy.world 0 points 5 months ago (2 children)

so i guess the next bit after 64 cpu is qu-bit, quantum bit

[–] Ephera@lemmy.ml 0 points 5 months ago (2 children)

Quantum computers won't displace traditional computers. There are certain niche use cases where quantum computers can become wildly faster in the future, but for most of the calculations we do today they're just unreliable. So they'll mostly coexist.

[–] UraniumBlazer@lemm.ee 0 points 5 months ago

In other words, like GPUs. GPUs suck ass at complex calculations. They do, however, work great for a large number of easy calculations, which is exactly what graphics processing needs.

[–] amanda@aggregatet.org 0 points 5 months ago* (last edited 5 months ago) (1 children)

Presumably you’d have a QPU in your regular computer, like with other accelerators for graphics etc, or possibly a tiny one for cryptography integrated in the CPU

[–] Tinidril@midwest.social 0 points 5 months ago (1 children)

There would have to be some kind of currently unforeseen breakthrough before something like that would be even remotely possible. In all likelihood, quantum computing will stay in specialized data centers. For the problems quantum would solve, there's really no advantage to having it local anyway.

[–] amanda@aggregatet.org 0 points 5 months ago (1 children)

I assume we need a lot of breakthroughs to even have useful quantum computing at all, but sure.

Isn’t quantum encryption interesting for end users?

[–] amanda@aggregatet.org 0 points 5 months ago (1 children)

The comments on this one really surprised me. I thought the kinds of people who hang out on XDA-developers were developers. I assumed that developers had a much better understanding of computer architecture than the people commenting (who of course may not be representative of all readers).

I also get the idea that the writer is being vague not to simplify but because they genuinely don’t know the details, which feels even worse.

[–] sandalbucket@lemmy.world 0 points 5 months ago (1 children)

I think it’s a D-tier article. I wouldn’t be surprised if it was half GPT. It could have been summarized in a single paragraph, but it was clearly drawn out to make screen real estate for the ads.

[–] irotsoma@lemmy.world 0 points 5 months ago (3 children)

Because computers haven't even come close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven't even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4GB of memory without slow workarounds.
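The numbers in question, spelled out:

```python
# Address-space sizes behind the comment above.
GiB = 2**30
EiB = 2**60
assert 2**32 == 4 * GiB    # 32-bit ceiling: 4 GiB
assert 2**64 == 16 * EiB   # 64-bit ceiling: 16 EiB ("16 exabytes")
```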

[–] jlh@lemmy.jlh.name 0 points 5 months ago (2 children)

Tons of computing is done on x86 these days with 256 bit numbers, and even 512-bit numbers.

[–] pivot_root@lemmy.world 0 points 5 months ago (1 children)

Being pedantic, but...

The amd64 ISA doesn't have native 256-bit integer operations, let alone 512-bit. The numbers you mention are for SIMD instructions, which are just multiple narrower integer operations running at the same time (e.g. 8x 32-bit lanes in a 256-bit register).
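A toy model of the distinction: a 256-bit SIMD add (like AVX2's `vpaddd`) is eight independent 32-bit adds, so a lane that overflows wraps around instead of carrying into its neighbour. Sketch only, with an invented name:

```python
def simd_add_epi32(a, b):
    """Model a 256-bit vector add as 8 independent 32-bit lane adds:
    carries do not propagate between lanes."""
    M = (1 << 32) - 1
    out = 0
    for lane in range(8):
        s = ((a >> (32 * lane)) & M) + ((b >> (32 * lane)) & M)
        out |= (s & M) << (32 * lane)
    return out

# Lane 0 overflows and wraps to 0; it does NOT carry into lane 1,
# which is exactly what distinguishes this from one 256-bit integer add:
a = (1 << 32) - 1            # lane 0 = 0xFFFFFFFF, other lanes 0
assert simd_add_epi32(a, 1) == 0
assert a + 1 == 1 << 32      # a plain integer add would carry
```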

[–] mox@lemmy.sdf.org 0 points 5 months ago

John Mashey wrote about this nearly 30 years ago. This Usenet thread is worth a read.

[–] AmidFuror@fedia.io 0 points 5 months ago (3 children)

That would be like 6 minutes abs.

[–] AnarchoSnowPlow@midwest.social 0 points 5 months ago

That's crazy. You can't do six. It's seven! SEVEN MINUTE ABS!

[–] ArbiterXero@lemmy.world 0 points 5 months ago (4 children)

32-bit CPUs having difficulty accessing more than 4GB of memory was exclusively a Windows problem.

[–] amanda@aggregatet.org 0 points 5 months ago (4 children)

Interesting! Do you have a link to a write up about this? I don’t know anything about the windows memory manager

[–] ArbiterXero@lemmy.world 0 points 5 months ago

Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

Also, the entire article comes down to simple math.

The number of bits is like the number of digits.

So a 4-digit number maxes out at 9 999, but an 8-digit number maxes out at 99 999 999.

When you double the number of digits, the maximum grows exponentially: 10^4 times bigger in this case. It only sounds small because what you're showing is the exponent doubling.

10^4 is WAY smaller than 10^8.

[–] neclimdul@lemmy.world 0 points 5 months ago (1 children)

It was actually 3GB, because operating systems have to reserve part of the address space for other things. It's difficult for any 32-bit operating system to address above 4GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and the like. Windows also had a way to switch that support on in some versions, probably the NT kernels that were also running on servers.

A quick skim of Wikipedia seems like a good starting point for understanding the old problem.

https://en.m.wikipedia.org/wiki/Wikipedia
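Rough arithmetic behind that barrier. The exact reserved amount varied by chipset and hardware, so the 1 GiB figure here is purely illustrative:

```python
# Classic 32-bit "3 GB barrier": the CPU can address 4 GiB, but device
# mappings (PCI MMIO, video memory, firmware) occupy the top of that
# range, so less RAM is visible to the OS. Sizes are illustrative.
GiB = 2**30
address_space = 4 * GiB
mmio_reserved = 1 * GiB          # hypothetical total of device mappings
usable_ram = address_space - mmio_reserved
assert usable_ram == 3 * GiB
```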

[–] amanda@aggregatet.org 0 points 5 months ago (1 children)

Wow they just…disabled all RAM over 3 GB because some drivers had hard coded some mapped memory? Jfc

[–] pivot_root@lemmy.world 0 points 5 months ago* (last edited 5 months ago) (2 children)

Only slightly related, but here's the compiler flag to disable an arbitrary 2GB limit on x86 programs.

Finding the reason for its existence from a credible source isn't as easy, however. If you're fine with an explanation from StackOverflow, you can infer that it's there because some programs treat pointers as signed integers and die horribly when the allocator returns anything above 0x7FFFFFFF.
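That failure mode fits in a few lines: reinterpret a 32-bit address with the top bit set as a signed integer and it goes negative, so any signed pointer comparison or arithmetic misbehaves. Illustration only:

```python
import struct

def as_signed32(addr):
    """Reinterpret a 32-bit unsigned address as signed, as buggy code did."""
    return struct.unpack('<i', struct.pack('<I', addr))[0]

assert as_signed32(0x7FFFFFFF) == 2**31 - 1  # fine below the 2 GB line
assert as_signed32(0x80000000) < 0           # addresses above it look negative
```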

[–] aard@kyu.de 0 points 5 months ago (5 children)

You still had a 4GB memory limit per process, as well as a total memory limit of 64GB. The first one especially was a problem for Java apps before AMD introduced its 64-bit extensions, and a reason to use Sun servers for those workloads.

[–] hades@lemm.ee 0 points 5 months ago (8 children)

We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don't we drive eight-wheeled vehicles?

[–] kayazere@feddit.nl 0 points 5 months ago (5 children)

Funny how we're moving back to bicycles, since cars aren't a scalable solution.

[–] djehuti@programming.dev 0 points 5 months ago (1 children)
[–] kilgore_trout@feddit.it 0 points 5 months ago (2 children)

Why are we not using them in end-user devices

[–] ms_lane@lemmy.world 0 points 5 months ago

We are.

Addressing-wise, no, we don't have consumer-level 128-bit CPUs and probably won't ever need them.

Instruction-wise, though: SSE brought 128-bit vector ops (OR/XOR and moves included), AVX widened floating-point vectors to 256 bits, AVX2 added 256-bit integer math, and AVX-512 is, you guessed it, 512-bit vector math. AltiVec on PPC had 128-bit vectors 20 years ago.

[–] db2@lemmy.world 0 points 5 months ago

There's no benefit.

[–] Mio@feddit.nu 0 points 5 months ago (2 children)

Would there be a downside? Slower? Very costly?

[–] kerrigan778@lemmy.world 0 points 5 months ago (1 children)

Uh, the PlayStation 2 would like a word?

[–] vane@lemmy.world 0 points 5 months ago

tell that to PlayStation 2 owners
