I’m excited to announce the first alpha preview of this project that I’ve been working on for the past 4 months. I’m initially posting about this in a few small communities, and hoping to get some input from early adopters and beta testers.

What is a DHT crawler?

The DHT crawler is Bitmagnet’s killer feature that (currently) makes it unique. Well, almost unique, read on…

So what is it? You might be aware that you can enable DHT in your BitTorrent client, and that this allows you to find peers who are announcing a torrent’s hash to a Distributed Hash Table (DHT), rather than to a centralized tracker. A lesser-known feature of the DHT is that it allows you to crawl the info hashes it knows about. This is how Bitmagnet’s DHT crawler works: it crawls the DHT network, requesting metadata about each info hash it discovers. It then further enriches this metadata by attempting to classify it and associate it with known pieces of content, such as movies and TV shows. Finally, it lets you search everything it has indexed.
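
For the curious, here’s a minimal sketch in Go of that crawl/fetch/classify/index loop. This is not Bitmagnet’s actual code; the function and type names are hypothetical stand-ins for the real DHT protocol work and the classifier.

```go
// A minimal, hypothetical sketch of the crawl -> fetch -> classify -> index
// pipeline described above. None of these names come from Bitmagnet itself.
package main

import "fmt"

// TorrentMeta is a simplified, made-up view of what the crawler learns
// about each info hash it discovers.
type TorrentMeta struct {
	InfoHash string
	Name     string
	Category string // e.g. "movie", "tv_show", "unknown"
}

// fetchInfoHashes stands in for walking the DHT and collecting info hashes
// announced by peers (the real work involves the BitTorrent DHT protocol).
func fetchInfoHashes() []string {
	return []string{"c9e15763f722f23e98a29decdfae341b98d53056"}
}

// fetchMetadata stands in for requesting torrent metadata from peers that
// hold the given info hash.
func fetchMetadata(infoHash string) TorrentMeta {
	return TorrentMeta{InfoHash: infoHash, Name: "example torrent", Category: "unknown"}
}

// classify stands in for the content classifier, which would enrich the
// metadata with content type, resolution, TMDB matches and so on.
func classify(m TorrentMeta) TorrentMeta {
	m.Category = "movie" // placeholder decision
	return m
}

func main() {
	// The crawl loop: discover info hashes, fetch metadata, classify, index.
	for _, ih := range fetchInfoHashes() {
		meta := classify(fetchMetadata(ih))
		fmt.Printf("indexed %q as %s (%s)\n", meta.Name, meta.Category, meta.InfoHash)
	}
}
```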

This means that Bitmagnet is not reliant on any external trackers or torrent indexers. It’s a self-contained, self-hosted torrent indexer, connected via the DHT to a global network of peers and constantly discovering new content.

The DHT crawler is not quite unique to Bitmagnet; another open-source project, magnetico, was (as far as I know) the first to implement a usable DHT crawler, and was a crucial reference point for implementing this feature. However, that project is no longer maintained, and it lacks features such as content classification and integration with other software in the ecosystem, which greatly improve usability.

Currently implemented features of Bitmagnet:

  • A DHT crawler
  • A generic BitTorrent indexer: Bitmagnet can index torrents from any source, not only the DHT network - currently this is only possible via the /import endpoint; more user-friendly methods are in the pipeline
  • A content classifier that can currently identify movie and television content, along with key related attributes such as language, resolution and source (BluRay, WebRip, etc.), and enriches this with data from The Movie Database
  • An import facility for ingesting torrents from any source, for example the RARBG backup
  • A torrent search engine
  • A GraphQL API: currently this provides a single search query; there is also an embedded GraphQL playground at /graphql (see the example request after this list)
  • A web user interface implemented in Angular: currently this is a simple single-page application providing a user interface for search queries via the GraphQL API
  • A Torznab-compatible endpoint for integration with the Servarr stack
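
If you want to poke at the API before the web UI matures, here’s a rough sketch of querying the GraphQL endpoint from Go. Only the /graphql path comes from the feature list above; the port (3333) and the query document with its field names (torrentContent, search, items, title) are my own assumptions, so check the embedded playground for the real schema.

```go
// A rough example of calling the GraphQL endpoint from Go.
// Assumptions: the service listens on localhost:3333, and the query/field
// names used here are placeholders; consult the playground at /graphql
// for the real schema.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical search query; field names may differ from the real schema.
	payload, err := json.Marshal(map[string]string{
		"query": `{ torrentContent { search(query: "ubuntu") { items { title } } } }`,
	})
	if err != nil {
		fmt.Println("marshal failed:", err)
		return
	}

	resp, err := http.Post("http://localhost:3333/graphql", "application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(result)
}
```

If the request succeeds you should get back a JSON object under a data key; the playground is the easier place to explore the schema interactively.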

Interested?

If this project interests you then I’d really appreciate your input:

  • How did you get along with following the documentation and installation instructions? Were there any pain points?
  • There’s a roadmap of high-priority features on the website - what do you see as the highest priority for near-term development?
  • If you’re a developer, are you interested in contributing to the project?

Thanks for your attention. If you’re interested in this project and would like to help it gain momentum then please give it a star on GitHub, and expect further updates soon!

  • Shdwdrgn

    Not sure the OS configuration is really a burden :-) I have several servers I have to keep up to date anyway. And backups aren’t really an issue, I just run rdiff-backup on everything to provide a year’s worth of incremental backups, which doesn’t really take much extra space. Maybe one of these days when I catch up on other projects I’ll look into it though.

    • cyberpunk007@lemmy.world

      On TrueNAS SCALE though it’s just tiles in a web browser, it’s super easy. And since it runs on ZFS, backups are easier too. Just click your way through periodic volume snapshot tasks.

      Definitely a bit of a learning curve but it’s a sleek setup once you understand.

      • Shdwdrgn

        I’m not quite sure what “truenas” is? All of my stuff is individually installed, I decided a long time ago to split it up onto VMs that each perform a specific task. I have a main file server that runs ZFS, then two servers to run the redundant VMs. There’s not really anything difficult about backups, I just add a cron job to run a script once a day and never touch it again, so I have backups of each VM, but then the backups of the main servers include the VM image files, so each VM gets backed up twice. There’s a lot of info there, but the backups of all the critical stuff only use about 6TB (I could actually cut that in half if I got rid of the backups from older machines).

        So let’s say I put in the time to learn how Docker works, and then put in a lot more time converting all of my existing systems over to Docker images… What exactly would I get out of all that effort? The thing that nobody’s been able to sell me on so far is that I don’t see how Docker is going to make anything any easier; it just seems like it’s a “different” way to do things, but nothing more.

        • cyberpunk007@lemmy.world

          Your data footprint would be smaller. Maintenance is a breeze. If you update your image and it breaks, just roll it back. Less consumption of resources, and no need to divide your storage and RAM between VMs. There are millions of Docker images, so you can start something new in seconds. And the learning curve isn’t too bad if you’re on TrueNAS SCALE. TrueNAS CORE is a NAS operating system built on FreeBSD (Unix), and TrueNAS SCALE is built on Linux. Both use ZFS for the underlying storage.

          • Shdwdrgn

            OK, so my current strategy is that when I want to do a major update I simply make a copy of the VM image file, then I can drop it back in place if something goes wrong. I run KVM, which means it just gives out CPU and memory as needed, even though I can set maximums. The resources I’m using are laughably small anyway; half the systems run fine on a single CPU core, although it was nice to recently bump web and mail services up higher (I just upgraded over the summer from PowerEdge 860 servers to some R620s – crazy difference in available resources!). Same with memory, I have some systems running on as little as 512M, but I just bumped my web servers up to 8G to give them plenty of room. Considering I have 64G in each server with tons of space for growth, I’m not worried about any of that. And storage space… well, it seems Linux is suffering from bloat since the introduction of systemd, as I’ve had to increase my image files from 4G to 8G for updates, but it’s still a drop in the bucket for storage. And all the services use shared storage space for things like email and websites, and I have around 105TB of shared storage, so again not really a concern.

            Now it sounds like I kind of need TrueNAS to easily use Docker, which means another system that I would need to learn from scratch? TrueNAS SCALE says it’s built on Debian, and yet there are no Debian packages available to install it, so I can only assume that I would have to completely replace all of my existing servers with brand new systems that I have no knowledge of troubleshooting, just so I can replace all of my existing VMs with Docker images which I also have no knowledge of how to troubleshoot.

            Sorry but none of this is selling me on the idea - it just sounds like I’m supposed to replace systems that work perfectly well with new systems that I can’t fix when they break? I’m really not understanding where the advantage is.

            • cyberpunk007@lemmy.world

              Not trying to sell you on it, you do what works best for you. TrueNAS SCALE is an operating system built on Debian; there will be no packages for it. It’s hard to explain until you start using it. I came from VMs on TrueNAS CORE for many years, and it was annoying to migrate to Docker, but after I used it for a while I liked it a lot more. So if you’re not into playing around and what you have works great, then great. I’ve been working with jails, VMs, and containers for well over 15 years since I work in IT, so I’ve played with big and small systems. There are definitely some annoyances when it comes to the VM approach.

              • Shdwdrgn

                No worries, and thanks for all the info. Eventually I’ll have some time to look into Docker, but for now it’s just an aggravation to me when it used to be a simple matter of running a couple of commands to compile something.