Hello. On my server, which runs only Lemmy, I don’t understand why disk space keeps filling up. It grows by about 1GB every day, so it risks hitting the limit before long.

It is not the images’ fault, because uploads have a size limit and they are also converted to .webp.

The docker-compose file is the one from the Ansible 0.18.2 setup, with the logging limits already in place (max-size: 50m, max-file: 4).
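
For reference, the limits can be confirmed on the running containers, and the size of Docker’s own log files can be checked directly (this assumes the default json-file log driver; the container name is just an example):

```sh
# Log-rotation settings Docker actually applies to a running container
# (repeat for each name listed by "docker ps"):
docker inspect --format '{{json .HostConfig.LogConfig}}' lemmy_lemmy_1

# Total space currently used by Docker's JSON log files:
sudo du -ch /var/lib/docker/containers/*/*-json.log | tail -n 1
```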

What could it be? Is there anything in particular that I can check?

Thanks!

28% - 40G (3 July)
29% - 42G (4 July)
30% - 43G (5 July)
31% - 44G (6 July)
36% - 51G (10 July)
37% - 52G (11 July)
37% - 53G (12 July)
39% - 55G (13 July)
39% - 56G (14 July)
  • hitagi@ani.social · 32 points · 1 year ago

    Might want to check out this issue. Honestly, I’m not sure what exactly the activity table consists of or what it does, but it’s been eating through everyone’s disk space pretty fast.
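
    If you want to check it on your own instance, something along these lines should work (container, user, and database names are examples from a typical docker-compose setup; adjust them to yours):

    ```sh
    # Size of the activity table (indexes included) and of the whole database.
    docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c \
      "SELECT pg_size_pretty(pg_total_relation_size('activity')) AS activity_table,
              pg_size_pretty(pg_database_size(current_database())) AS whole_db;"
    ```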

    • skariko@lemmy.ml (OP) · 15 points · 1 year ago

      Well, it could indeed be just that. I just checked and its size is 15GB 😱

    • Muddybulldog@mylemmy.win · 8 points · 1 year ago

      The activity table holds every message sent or received between your instance and the rest of the fediverse: posts, comments, votes, admin actions, etc.
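
      If you’re curious how quickly it accumulates, a rough per-day count looks something like this (the published column name is an assumption based on the Lemmy schema; container and database names are examples):

      ```sh
      # Rows added to the activity table per day over the last week.
      docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c \
        "SELECT published::date AS day, count(*)
           FROM activity
          WHERE published > now() - interval '7 days'
          GROUP BY day
          ORDER BY day DESC;"
      ```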

    • FuzzChef@feddit.de · 4 points · 1 year ago

      > Extra config options always result in more complexity, so I would strongly prefer to change the hardcoded pruning interval instead.

      Why would that be the case?

      • hitagi@ani.social · 3 points · 1 year ago

        I’m not sure either. Most people running their own Lemmy instance probably want more config options to fit their needs. I like how much I can configure pictrs straight from the docker-compose file, for example.

    • DrManhattan@lemmy.design · 11 points · 1 year ago

      I don’t think you should be enjoying the fact that there are some problems that could realistically cause a large portion of Lemmy instances to become unsustainable. We should be working towards a way that we can ensure the Lemmy ecosystem thrives.

      • gravitas_deficiency@sh.itjust.works · 20 points · 1 year ago

        I mean… it was kind of a joke. At the same time, if someone is hosting a database on their own hardware, it is important to understand when and how the database actually releases disk space.

        That said, I completely agree that this is an issue that does need to be addressed.
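
        Concretely: in Postgres, deleting rows does not by itself shrink the files on disk. Something like VACUUM FULL rewrites the table and hands the space back to the OS, at the cost of locking the table while it runs (a rough sketch; container and database names are examples):

        ```sh
        # VACUUM FULL rewrites the table and returns the freed space to the
        # filesystem. It takes an exclusive lock and needs enough free disk
        # to hold the rewritten copy, so run it during a quiet period.
        docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c "VACUUM FULL VERBOSE activity;"
        ```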

    • Salamander · 1 point · 1 year ago

      At this point it might actually be worth adding it to my CV 😂

  • chiisana@lemmy.chiisana.net · 8 points · 1 year ago

    Bear in mind that posts and comments from communities your users subscribe to flow into your instance not as references but as copies. That’s why all those “seeding” scripts are a terrible idea: they pull in content you don’t care about and fill up space for the heck of it. If you’re hosting a private instance, you can unsubscribe from things that don’t interest you and thereby slow the accumulation of irrelevant content that’s just wasting space.
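
    If you want a rough idea of which remote communities account for most of the copies, a query along these lines can help (the table and column names are my assumptions about the Lemmy schema, so treat it as a sketch):

    ```sh
    # Post counts per remote (federated) community, largest first.
    docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c \
      "SELECT c.name, count(*) AS posts
         FROM post p
         JOIN community c ON c.id = p.community_id
        WHERE NOT c.local
        GROUP BY c.name
        ORDER BY posts DESC
        LIMIT 20;"
    ```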

    • skariko@lemmy.ml (OP) · 3 points · 1 year ago

      Yes, I had considered that, but since ours is not a giant instance, just a moderate one (about a thousand subscribers), it seemed excessive to end up with 1GB of extra space used every day.

      • PriorProject@lemmy.world · 8 points · 1 year ago

        • The growth is not about user count… not directly, anyway. Rather, it’s about the number and activity of subscribed communities. When your users subscribe to big, highly active meme communities on lemmy.world, the post activity there determines your storage requirements. I don’t really know, but I could imagine that a 1k-user instance has 80% of the federation storage that a 5k-user instance has: 1k users is enough to subscribe to most big communities, whereas the next 4k users will “mostly” subscribe to the same big communities plus a few low-traffic niche ones, so they add much less federated storage load than the first 1k did.
        • But for comparison: a month ago, the largest Lemmy instance in the world had just over a thousand active users. I’m not sure 1k is as small as you think it is.
  • Wander@yiffit.net · 8 points · 1 year ago

    You can delete old entries from the table. The space will not be released to the filesystem automatically, but you won’t have to worry about it until enough days pass for the table to fill up the same amount that was freed.
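
    Roughly like this, assuming the activity table has a published timestamp column (a plain VACUUM afterwards marks the freed space as reusable inside Postgres, even though the files themselves don’t shrink):

    ```sh
    # Delete activity rows older than three months, then let Postgres mark
    # the space as reusable. New rows will fill the freed space before the
    # table grows again.
    docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c \
      "DELETE FROM activity WHERE published < now() - interval '3 months';"
    docker exec -it lemmy_postgres_1 psql -U lemmy -d lemmy -c "VACUUM activity;"
    ```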

  • fubo@lemmy.world · 6 points · 1 year ago

    Back in the early 2000s, Usenet servers could store posts in a fixed-size database with the oldest posts expiring as new posts came in. This approach would mean that you can’t index everything forever, but it does allow you to provide your users with current posts without having infinite storage space.
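
    Something similar can be approximated today with a scheduled prune, e.g. a small script run from cron that keeps a rolling window of activity rows (the names and the 30-day window are just examples):

    ```sh
    #!/bin/sh
    # prune-activity.sh -- keep roughly 30 days of activity rows.
    # Run it from cron, e.g.: 0 3 * * * /usr/local/bin/prune-activity.sh
    docker exec lemmy_postgres_1 psql -U lemmy -d lemmy -c \
      "DELETE FROM activity WHERE published < now() - interval '30 days';"
    ```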

  • RoundSparrow@lemmy.ml · 6 points · 1 year ago

    > What could it be? Is there anything in particular that I can check?

    lemmy_server creates a ton of system logs under /var/log/; look at the usage on that path.

    > The docker-compose file is the one from the Ansible 0.18.2 setup, with the logging limits already in place (max-size: 50m, max-file: 4).

    I still suggest verifying how much total space the /var/log tree is using.
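
    A quick way to check (journald is often the biggest consumer):

    ```sh
    # Total size of /var/log, its largest subdirectories, and journald's share.
    sudo du -sh /var/log
    sudo du -h --max-depth=1 /var/log | sort -h | tail -n 10
    journalctl --disk-usage
    ```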