With forewarning about a huge influx of users, you know Lemmy.ml will go down. Even if people go to https://join-lemmy.org/instances and disperse among the great instances there, those servers will go down too.

Ruqqus had this issue too. Every time there was a mass exodus from Reddit, Ruqqus would go down, and hardly reap the rewards.

Even if it’s not sustainable, just for one month I’d like to see Lemmy.ml drastically boost its server power. If we can raise money as a community, what kind of server could we get for $100? $500? $1,000?

  • Lobstronomosity@lemmy.ml · 30 points · 1 year ago

    I’m sure you know this, but getting progressively larger servers is not the only way; why not scale horizontally?

    I say this as someone with next to no idea how Lemmy works.

      • Lobstronomosity@lemmy.ml · 15 points · 1 year ago

        Is it possible to make Lemmy (the system as a whole) compatible with horizontally scaled instances? I don’t see why an instance has to be confined to a single server, and removing that limit would allow for very large instances that scale to meet demand.

        Edit: just saw your other comment https://lemmy.ml/comment/453391

        • nutomic@lemmy.ml (mod) · 28 points · 1 year ago

          It should be easy once websockets are removed: sharded Postgres and multiple instances of the frontend/backend. Though I don’t have any experience with this myself.
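
          To make that concrete, here is a rough sketch of the sharding half of the idea, assuming a hypothetical deployment with several Postgres shards (the DSNs and the routing function are made up for illustration, not real Lemmy config): each community’s data is pinned to one shard by hashing its id, while any number of stateless frontend/backend replicas sit behind a load balancer in front of them.

          ```python
          # Illustrative only: Lemmy does not ship sharding today; this just shows
          # the idea of routing a community's queries to one of several Postgres shards.
          import hashlib

          # Hypothetical shard DSNs; a real deployment would load these from config.
          SHARDS = [
              "postgres://lemmy@db-shard-0/lemmy",
              "postgres://lemmy@db-shard-1/lemmy",
              "postgres://lemmy@db-shard-2/lemmy",
          ]

          def shard_for_community(community_id: int) -> str:
              """Pick a shard deterministically so a community's data stays together."""
              digest = hashlib.sha256(str(community_id).encode()).hexdigest()
              return SHARDS[int(digest, 16) % len(SHARDS)]

          if __name__ == "__main__":
              for cid in (1, 42, 1337):
                  print(cid, "->", shard_for_community(cid))
          ```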

          • wiki_me@lemmy.ml · 13 points · 1 year ago

            I think that is unavoidable. Look at the most popular subreddits: they can get something like 80 million upvotes and 66K comments per day. Do you think a single server can handle that?

            Splitting communities just to make things technically easier is not good UX.
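
            As a back-of-the-envelope check on those numbers (averaged over a day, ignoring peaks, which will be several times higher):

            ```python
            # Rough per-second write rates implied by ~80M votes and ~66K comments per day.
            SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

            votes_per_day = 80_000_000
            comments_per_day = 66_000

            print(f"votes/sec (daily average):    {votes_per_day / SECONDS_PER_DAY:.0f}")    # ~926
            print(f"comments/sec (daily average): {comments_per_day / SECONDS_PER_DAY:.2f}")  # ~0.76
            ```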

          • Bob/Paul@fosstodon.org · 12 points · 1 year ago

            @nutomic @Lobstronomosity In one of the comments I thought I saw that the biggest CPU load was due to image resizing.

            I think it might be easier to split the image resizer off into its own worker that can run independently on one (or more) external instances. Example: the client uses the API to get a temporary access token for upload, uploads to one of many image resizers instead of the main API, and the resizer sends the output back to the main API.

            Then your main instance never sees the original image.
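
            A minimal sketch of that flow from the client’s side, assuming made-up endpoints, hosts, and field names (none of this is the real Lemmy or pict-rs API):

            ```python
            # Hypothetical upload flow with resizing offloaded to separate workers.
            import requests

            MAIN_API = "https://lemmy.example/api/v3"  # main instance (assumed URL)
            RESIZERS = ["https://resize-1.example", "https://resize-2.example"]

            # 1. Ask the main instance for a short-lived upload token.
            token = requests.post(
                f"{MAIN_API}/image/upload_token",
                headers={"Authorization": "Bearer <user-jwt>"},
            ).json()["token"]

            # 2. Upload the original image to one of the resizer workers, not the main API.
            with open("photo.jpg", "rb") as f:
                resp = requests.post(
                    f"{RESIZERS[0]}/upload",
                    headers={"Authorization": f"Bearer {token}"},
                    files={"image": f},
                )

            # 3. The resizer validates the token, resizes, and pushes only the processed
            #    output back to the main instance; the client just receives the final URL.
            print(resp.json()["url"])
            ```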

          • ccunix@lemmy.ml · 8 points · 1 year ago

            There is already a Docker image, so that should not be too hard. I’d be happy to set something up, but (as others have said) the DB will become a bottleneck relatively quickly.

            I like the idea of splitting off the image processing.