It's now running on a dedicated server with 6 cores/12 threads and 32 GB of RAM. I hope this will be enough for the near future. Nevertheless, new users should still prefer to sign up on other instances.

This server is financed from donations to the Lemmy project. If you want to support it, please consider donating.

  • Dreyns@lemmy.ml · 7 points · 1 year ago

    Just started supporting this instance on Liberapay; if others follow, you’ll hopefully be able to upgrade the potato soon!

  • ahimsabjorn@lemmy.ml · 6 points · 1 year ago

    Please know that your work facilitating the migration from Reddit to Lemmy is genuinely appreciated. Your efforts will hopefully ensure a bright future for communities on this platform. Kudos @nutomic@lemmy.ml !

  • BajakLaut@lemmy.ml · 6 points · 1 year ago

    The server has definitely become more responsive. I thought my internet routing was so shitty that that was why the site took so long to load. Nice!

  • tinselsnips@lemmy.ml · 5 points · 1 year ago

    Are there still issues with cross-instance content? Previously, if I signed in to lemmy.ca and subscribed to a channel on .ml, I didn’t see all content. Likewise, if I left a comment from my .ca account, it wouldn’t necessarily show up for users on .ml.

    If this is still a problem, it’s a HUGE roadblock to simply telling people to join other instances if we don’t want to fracture existing communities.

    Edit: That may have simply been a result of the excess load?

  • mst@lemmy.ml · 4 points · 1 year ago

    This new machine is speedy! Getting pretty much instant loading times. Thank you to the donors; I will be joining you soon!

    • Catsrules@lemmy.ml · 0 points · 1 year ago

      I was getting a 502 Bad Gateway. When I pinged lemmy.ml I got an IPv6 address. I disabled IPv6 on my local computer, and now when I ping I get an IPv4 address and the site works.

      I am wondering if DNS is screwed up on the IPv6 side for lemmy.ml.

      ~~Note: this could totally be something on my end; I really haven’t done much with IPv6, but it did solve the 502 error, so it might do the same for you.~~

      Edit: Multiple people are reporting the same thing I am seeing. It is definitely something about IPv6 on the lemmy.ml server end.
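
      In case it helps anyone compare without toggling their adapter, here is a quick Python sketch (just my own check, nothing official; the hostname and port are simply the ones from this thread) that prints what lemmy.ml resolves to per address family:

      ```python
      # Show what lemmy.ml resolves to over IPv4 vs. IPv6, without having to
      # disable IPv6 on the whole machine. Hostname/port are assumptions from
      # this thread, not anything official.
      import socket

      HOST, PORT = "lemmy.ml", 443

      for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
          try:
              infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
              addrs = sorted({info[4][0] for info in infos})
              print(f"{label}: {', '.join(addrs)}")
          except socket.gaierror as exc:
              print(f"{label}: no record ({exc})")
      ```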

      • Techviator@lemmy.ml · 2 points · 1 year ago

        Thanks for mentioning IPv6; I’ve been banging my head all day trying to figure out why I kept getting the 502 while no one was complaining anywhere and isitdown showed the server as up.

        I forced my DNS to resolve only IPv4 for lemmy.ml and now I can use it.

        My suspicion is that nginx is misconfigured and not listening via IPv6. Or maybe the AAAA record is pointing to the wrong IPv6 address.
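
        If someone wants to tell those two cases apart, a rough Python sketch like the one below (my own guesswork, only assuming the usual hostname/port) should help: a connection error over IPv6 would point at nothing listening on that address, while an actual HTTP status line such as a 502 would mean nginx did answer there:

        ```python
        # Distinguish "nothing listening on IPv6" from "nginx answers but returns
        # 502": resolve per address family, do a TLS handshake with SNI for
        # lemmy.ml, and read the HTTP status line. Host/port are assumptions.
        import socket
        import ssl

        HOST, PORT = "lemmy.ml", 443

        def probe(family: int, label: str) -> None:
            try:
                addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4][0]
            except socket.gaierror:
                print(f"{label}: no DNS record")
                return
            ctx = ssl.create_default_context()
            try:
                with socket.create_connection((addr, PORT), timeout=5) as raw:
                    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                        tls.sendall(
                            b"GET / HTTP/1.1\r\nHost: " + HOST.encode()
                            + b"\r\nConnection: close\r\n\r\n"
                        )
                        status = tls.recv(256).split(b"\r\n", 1)[0].decode(errors="replace")
                        print(f"{label}: {addr} -> {status}")  # e.g. HTTP/1.1 502 Bad Gateway
            except OSError as exc:
                # Refused/timeout here suggests nothing is listening on that address.
                print(f"{label}: {addr} -> {exc}")

        probe(socket.AF_INET, "IPv4")
        probe(socket.AF_INET6, "IPv6")
        ```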

        @nutomic@lemmy.ml Thanks for upgrading the server!

      • nutomic@lemmy.ml (OP, mod) · 2 points · 1 year ago

        You are right, I forgot to configure IPv6. Will be fixed shortly.

        Edit: Should be fixed now.

  • anugeshtu@lemmy.ml · 4 points · 1 year ago

    Thank you for your hard work! Although I kinda foresee a problem for the future: if Lemmy really does become the new “Reddit”, with servers like this and millions of users, wouldn’t that also raise the server costs and ultimately make the hosts dependent on asking for money, maybe through a paywall or ads? I think that for this community to really be “free”, without any single host responsible for spending a huge amount of money on servers, the best solution would be to make the actual “servers” a p2p cluster. Unfortunately, I’m not quite sure how to realize that without losing a huge fraction of the model when a lot of nodes (i.e., the actual users) are offline. Sorry, I’m just brainstorming.

    • orthizaR@lemmy.ml · 1 point · 1 year ago

      I would love a social network powered by the users that are using it. Maybe also running something like serverless functions on the client devices.

      • narF@lemmy.ml · 1 point · 1 year ago

        You might want to check out the Earthstar project: https://earthstar-project.org/ They are working on exactly that. Right now it’s super early; they are building the foundations. Peer-to-peer is unfortunately much more difficult to code than client-server, because fewer people have built the required building blocks, and because mobile phones actively make it hard to run peer-to-peer apps.

  • Emi@beehaw.org · 3 points · 1 year ago

    Have we explored the possibility of “porting” the larger communities to other instances? It seems that many of us simply wish to subscribe to the largest (insert type of community here) and can do so from various home instances. Might lower demand on this specific instance at the very least.

  • v_krishna@lemmy.ml · 3 points · 1 year ago

    Something isn’t working with IPv6: on my phone’s network I get an nginx 502, but on Wi-Fi it works.

  • Blaskowitz@lemmy.ml · 3 points · 1 year ago

    Is it possible to horizontally scale these instances instead of just upping the machine hardware? What are the main performance bottlenecks typically?

    • mwlczk@lemmy.world · 3 points · 1 year ago

      Hey, what do you mean by “scale horizontally”? There are multiple approaches to tackle this:

      • Have multiple nodes/pods for the same instance and run them on a cloud-like service provider
      • Have read-only instances to handle the read load (rough sketch after this list)
      • Share/merge bigger communities/subs across multiple instances
      • …

      All of these would most likely require a major rewrite/change of the Lemmy server software, I guess. In my opinion, the first option would fit best.
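
      To make the second option a bit more concrete, here is a toy Python sketch of read/write splitting; all names are invented, and this is not how Lemmy (Rust + Postgres) currently works:

      ```python
      # Toy sketch of the read-replica idea above: the app sends writes to one
      # primary and spreads reads across replicas; keeping the replicas in sync
      # is the database's job. All connection strings are invented for the
      # example, and Lemmy itself does not currently work like this.
      import itertools

      PRIMARY_DSN = "postgres://primary.internal/lemmy"
      REPLICA_DSNS = itertools.cycle([
          "postgres://replica1.internal/lemmy",
          "postgres://replica2.internal/lemmy",
      ])

      def pick_dsn(readonly: bool) -> str:
          """Round-robin reads across replicas; send every write to the primary."""
          return next(REPLICA_DSNS) if readonly else PRIMARY_DSN

      # Listing a community's posts is a read, submitting a post is a write:
      print(pick_dsn(readonly=True))   # -> one of the replicas
      print(pick_dsn(readonly=False))  # -> the primary
      ```
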
      • Blaskowitz@lemmy.ml · 2 points · 1 year ago

        I made my comment without knowing Lemmy’s topology at all, but my initial thought was that vertical scaling can hit diminishing returns past a certain threshold. Since the servers seem to be struggling, I’m wondering whether that point has been passed and whether it would be more cost-effective and reliable to scale out instead. But if the application isn’t written that way, or the underlying data store isn’t equipped for multiple instances, then fair enough; I’d be interested as to why, especially if Lemmy grows. I’ll take a look at the open issues and educate myself a bit more, though.