Lemmy.world is upgrading to 0.18.3 tomorrow. Apparently list results are going to change drastically.
We got ETA on our own upgrade?
p.s. it’s looking like the bawang bouquet is about to be picture of the week tomorrow.
After some discussion within the admin team, i think we’ll take a wait-and-see approach on this, since we’re a small instance and any problem has a chance of driving newcomers away. Besides, we’re not in critical need of the database optimisation in this new update, but the potential issues that might pop up mean it will be a pita to keep track of. Since World has committed to the update, we’ll see if any real big issues come up (like that security breach stuff after they updated to 0.18 lol) before committing to the update.
monyet.cc is running 18.3 now? Scroll to the bottom for that info. It took about 15-20 mins for my personal instance and cut disk usage by like 10gb.
i’m still seeing 0.18.2. by the way, are you still using lemmy easy deploy on your rpi4?
Yes, been working fine. Again this is just a ‘fun’ attempt; if it goes down, I can always jump to another instance.
edit - alamak, of course it’s 18.3 for me cos i’m viewing from my own instance.
personal instance takes up more than 10gb? Walaowei 💀
if you are subscribed to lots of communities, it is not surprising.
i think every up/down vote federated event is stored in the database for 6 months or something, so it’s probably true that instance owners have access to that info
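for anyone curious, something like this is all it takes to see them (just a sketch - the ‘postgres’ service name, the lemmy db user/name and the post_like/person table layout are assumptions based on the standard lemmy schema, so adjust for your own setup):

```bash
# sketch only: service name, db user/name and table layout are assumptions
# based on the standard lemmy schema; each post vote is a row in post_like
# (comments use comment_like), with score 1 = upvote, -1 = downvote
docker compose exec postgres psql -U lemmy -d lemmy -c "
  SELECT p.name AS voter, pl.score, pl.published
  FROM post_like pl
  JOIN person p ON p.id = pl.person_id
  ORDER BY pl.published DESC
  LIMIT 20;"
```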
35gb total usage before the 0.18.3 update. Size depends on the communities you have; I am sure monyet.cc’s disk use is quite high too.
there will be at least 5-10 minutes of downtime for db migrations though (more if the db size is huge). of course, after the long migrations the db size is reported to shrink significantly.
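if anyone wants to see how much it actually shrinks, measuring the db before and after the upgrade is easy enough (sketch only - the ‘postgres’ service and the lemmy db user/name depend on your docker-compose file):

```bash
# assumes the usual lemmy docker-compose layout: a "postgres" service with
# user and database both named "lemmy" - adjust to your own setup
docker compose exec postgres psql -U lemmy -c \
  "SELECT pg_size_pretty(pg_database_size('lemmy'));"
```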
anyway, there’s already been at least 2 easily triggered/discoverable bugs:
- view context for comments does not work properly (i heard it may even crash jerboa)
- all software versions in the instances list are displayed as 2.0
hopefully they will release a 0.18.4 quickly with fixes for that.
lemmy.zip failed to upgrade and had to roll back 12 hours.
don’t scare our admins 😱
monyet already has a dev instance, so they can import the db from the main instance there, and do the db migrations to see if that’ll succeed.
precautionary measures can be taken on the dev instance so it doesn’t attempt to federate with all the data from here.
here’s what i can think of (for testing out db migrations on the dev instance), with a rough command sketch after the list:
- run lemmy_server with the --disable-scheduled-tasks cli flag, which will prevent background federation tasks from running (so federation events will only be triggered by actual interaction, like posting, voting, subscribing, and of course such things should not be done on the dev instance)
- if that’s still not enough and we want to be extra sure the dev instance doesn’t federate with data from over here, then just block outgoing http/https connections at the firewall while doing the db migrations try-out (also don’t run lemmy-ui and just watch the docker logs for it to say migrations completed successfully)
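roughly, the whole try-out could look like this (very much a sketch - the ‘postgres’/‘lemmy’ service names, the db user/name, the dump filename and the iptables approach are all assumptions, so adjust to however the dev box is actually set up):

```bash
# sketch only - service names, db user/name and the dump filename are
# assumptions; use whatever the dev instance's compose file actually defines

# 1. load the main instance's dump into the dev instance's postgres
docker compose exec -T postgres psql -U lemmy -d lemmy < monyet_dump.sql

# 2. block outgoing http/https from the containers while testing
#    (container traffic goes through docker's DOCKER-USER iptables chain,
#     so the drop rule has to go there rather than the host's OUTPUT chain)
sudo iptables -I DOCKER-USER -p tcp -m multiport --dports 80,443 -j DROP

# 3. start only lemmy (not lemmy-ui), with --disable-scheduled-tasks added to
#    lemmy_server's command in the compose file, then watch the logs until
#    they say the migrations completed successfully
docker compose up -d lemmy
docker compose logs -f lemmy

# 4. drop the firewall rule again once the try-out is done
sudo iptables -D DOCKER-USER -p tcp -m multiport --dports 80,443 -j DROP
```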
i think the lemmy.zip admins said they’re still trying to work out a way to spin up a dev instance on a dedicated server, but monyet already has a dev instance, so the db migrations can be tried there first to ensure safety.
My apologies.
it’s alright 👌