this post was submitted on 15 Jun 2024
[–] Munkisquisher@lemmy.nz 0 points 5 months ago (1 children)

Sync them right next to each other, then move one of them. The other way you could test this theory is to have one clock tell the other the time over an optical link and then have the other do the same. If the speed of light were different in different directions, each would measure a different lag.

[–] abs_mess@lemmy.blahaj.zone 0 points 5 months ago (1 children)

Well, moving them is out of the question, since, you know, motion will change the clock's time. If you re-sync them, you bake the "error" into your framework. If you try a timer, the timer is offset. If you try to propagate a signal, the signal is offset. And eventually, you have to compare the two times, which muddies the waters by introducing a third clock.

Basically, there is no way to sync two clocks without checking both clocks, ergo, no way of proving or disproving. That's the premise.

In practice, I assume it is constant, but it's like P = NP. You can't prove it within the framework, even if you really, really want to believe one thing.
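To put a number on why no two-way experiment can settle this, here's a small sketch (my own illustration, not from the thread; the distance L and the κ values are arbitrary). Using the standard parametrization where the one-way speeds are c/(1 − κ) and c/(1 + κ), the round-trip time comes out as 2L/c for every κ, so only the unobservable one-way split changes:

```python
# Sketch: round-trip light travel time is independent of the anisotropy
# parameter kappa. Parametrization: c_forward = c/(1 - kappa),
# c_back = c/(1 + kappa), with -1 < kappa < 1.

C = 299_792_458.0  # two-way speed of light, m/s
L = 1000.0         # one-way path length, m (arbitrary choice)

def round_trip_time(kappa: float) -> float:
    c_forward = C / (1 - kappa)
    c_back = C / (1 + kappa)
    # L*(1-kappa)/c + L*(1+kappa)/c = 2L/c -- kappa cancels
    return L / c_forward + L / c_back

# Wildly different anisotropies, identical round-trip times:
for kappa in (0.0, 0.3, -0.9):
    print(kappa, round_trip_time(kappa))
```

Every row prints the same 2L/c, which is exactly why a single clock timing its own echo tells you nothing about the one-way speed.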

[–] ricdeh@lemmy.world 0 points 5 months ago (2 children)

If you move one clock very slowly away from the other, the error is minimised, perhaps even to a degree that allows for statistically significant measurements.

To cite the Wikipedia entry that one of the other commenters linked:

"The clocks can remain synchronized to an arbitrary accuracy by moving them sufficiently slowly. If it is taken that, if moved slowly, the clocks remain synchronized at all times, even when separated, this method can be used to synchronize two spatially separated clocks."

One-Way Speed of Light

[–] hikaru755@feddit.de 0 points 5 months ago

Except if you continue reading beyond your quote, it goes on to explain why that actually doesn't help.

[–] InnerScientist@lemmy.world 0 points 5 months ago

And further down:

Unfortunately, if the one-way speed of light is anisotropic, the correct time dilation factor becomes T = 1/(γ(1 − κv/c)), with the anisotropy parameter κ between −1 and +1.[17] This introduces a new linear term, lim_{β→0} T = 1 + κβ + O(β²) (here β = v/c), meaning time dilation can no longer be ignored at small velocities, and slow clock-transport will fail to detect this anisotropy. Thus it is equivalent to Einstein synchronization.
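To make the quoted point concrete, a quick numeric sketch (my own, using the formula above; L, κ, and the transport speeds are assumed values): transporting a clock over a distance L at speed v takes time L/v, and with T ≈ 1 + κβ the accumulated offset is roughly (L/v)·κ·(v/c) = κL/c. The v cancels, so crawling slower doesn't shrink the error:

```python
import math

C = 299_792_458.0  # two-way speed of light, m/s
L = 1000.0         # transport distance, m (arbitrary)
KAPPA = 0.1        # assumed anisotropy parameter

def transport_offset(v: float) -> float:
    """Extra time a clock accumulates while being carried a distance L
    at speed v, using the anisotropic factor T = 1/(gamma*(1 - kappa*v/c))."""
    beta = v / C
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    T = 1.0 / (gamma * (1.0 - KAPPA * beta))
    travel_time = L / v
    return travel_time * (T - 1.0)

# Slowing the transport down does NOT shrink the offset:
for v in (100.0, 1.0, 0.01):
    print(v, transport_offset(v))
```

Each printed offset sits near the constant κL/c rather than tending to zero, which is why slow clock transport ends up equivalent to Einstein synchronization instead of detecting the anisotropy.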