I've found the following work-around works pretty well. If you host an instance that's currently on 0.19.0 or 0.19.1, consider implementing this.
There are two bugs that this helps with: outbound federation gradually stalling, and a separate issue that's triggered when the service restarts around midnight.
Work-around:
Create cronjobs that restart the Lemmy container every 6 hours (but not at midnight). The following example is for a Debian system running Lemmy in Docker.
Type `crontab -e` into the terminal.
Add something like the following:
~~0 1 * * * docker container restart lemmy-lemmy-1~~
~~0 7 * * * docker container restart lemmy-lemmy-1~~
~~0 13 * * * docker container restart lemmy-lemmy-1~~
~~0 19 * * * docker container restart lemmy-lemmy-1~~
3 1-23/6 * * * docker container restart lemmy-postgres-1 && sleep 60 && docker container restart lemmy-lemmy-1
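Before saving the entry, it's worth double-checking the container names, since Docker Compose derives them from the project/directory name; `lemmy-lemmy-1` and `lemmy-postgres-1` are just what this particular setup produces. A quick sanity check, assuming a standard Docker install:

```
# List running containers and pick out the Lemmy ones; use whatever names show up here.
docker ps --format '{{.Names}}' | grep -i lemmy

# Run the restart sequence once by hand; the sleep gives Postgres a minute to come
# back up before the Lemmy container reconnects.
docker container restart lemmy-postgres-1 && sleep 60 && docker container restart lemmy-lemmy-1

# Confirm the cron entry was saved.
crontab -l
```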
Restarting the container every 6 hours keeps outbound federation working. There may still be some delays, but the backlog gets cleared regularly.
Pinning the restarts to specific times (0100, 0700, 1300, and 1900, rather than a plain "every 6 hours") means the container never restarts at midnight, which avoids the second bug.
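If the `1-23/6` hour field looks opaque: cron's range/step syntax means "every 6th hour within 1-23, starting at 1", so with minute 3 the job fires at 01:03, 07:03, 13:03, and 19:03 and never at hour 0. A quick way to expand it in the shell, assuming GNU `seq`:

```
# Expand the hour field "1-23/6": start at 1, step by 6, stop at or before 23.
for h in $(seq 1 6 23); do printf '%02d:03\n' "$h"; done
# 01:03
# 07:03
# 13:03
# 19:03   (hour 0 is never hit)
```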
My instance has been doing this for enough days that I'm confident it's working. You can check your federation status here. Note that it's normal to see 0 up-to-date instances and a lot of lagging instances; as long as they sometimes flip to "up to date", everything is getting caught up.
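If you'd rather check from your own server than from a status page, the instance API exposes the list of federated instances. This is only a sketch: it assumes the 0.19 `GET /api/v3/federated_instances` endpoint, that `jq` is installed, and that `lemmy.example.org` stands in for your own domain; the exact fields returned vary by Lemmy version, so inspect the output rather than relying on specific names.

```
# Count how many instances yours is linked with.
curl -s 'https://lemmy.example.org/api/v3/federated_instances' \
  | jq '.federated_instances.linked | length'

# Look at one linked entry to see what per-instance federation state your version reports.
curl -s 'https://lemmy.example.org/api/v3/federated_instances' \
  | jq '.federated_instances.linked[0]'
```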
Is "testing" a swear word to Lemmy "devs"? Such bugs after so many RCs? (Though it seems that for Lemmy "devs" RC actually means "random crap").
Extra frustrating for you, I imagine, since you write a lot of API stuff for Lemmy.
Yeah, still have to fix some stuff, though I've taken a break during the holidays.
That's great, hope you're having some fun with family and friends. Happy New Year.
Happy New Year as well!