A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinite, randomly generated series of pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped, or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It’s less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn’t appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
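The mechanism described above is simple enough to sketch. Below is a minimal illustration in Python of a link-maze page generator in the spirit of what's described; this is not the actual Nepenthes code, and the `/maze/` URL prefix and slug format are assumptions for the example.

```python
import random
import string

def random_slug(n=8):
    # Random lowercase token used both as a path segment and as link text.
    return "".join(random.choices(string.ascii_lowercase, k=n))

def tarpit_page(n_links=10):
    """Return an HTML page whose every link points back into the maze.

    A naive crawler that follows all links will keep requesting fresh
    /maze/... URLs forever, since each page mints new random ones.
    """
    links = "\n".join(
        f'<a href="/maze/{random_slug()}">{random_slug()}</a>'
        for _ in range(n_links)
    )
    return f"<html><body>{links}</body></html>"
```

Serving `tarpit_page()` for any request under the maze prefix is all it takes: every response hands the crawler another batch of never-before-seen links to itself.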

  • daniskarma@lemmy.dbzer0.com · 11 hours ago

    Yeah, that has close to zero chance of working. At most it would annoy web-search bots, and those at least respect a proper robots.txt.

    But any agent trying to process data for AI is not going to go to random websites. It’s going to use a curated list of sites with valuable content.

    At this point, text-generation datasets can be assembled from open data and from data sold by companies like Reddit or Microsoft; they don’t need to “pirate” your blog posts.

    • LovableSidekick@lemmy.world · 2 hours ago

      True to a limited extent. Anyone can post a link to somebody’s blog on a site like Reddit without the blogger’s permission, where a web crawler scanning through posts and comments would find it. But I agree with you that a thing like Nepenthes probably wouldn’t work. Infinite loop detection is an important part of many types of software, and there are well-known techniques for it that, as a developer, I would assume any well-written AI web crawler uses (although I’ve never personally built one).
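For what it's worth, the standard protections are a visited-set (never fetch the same URL twice) plus a per-domain page budget, which also bounds a tarpit that mints unique URLs so the visited-set alone never fires. A rough sketch in Python, with `fetch` standing in for whatever download-and-extract-links function a real crawler would have:

```python
from collections import deque
from urllib.parse import urlparse

def crawl(start_url, fetch, max_pages_per_domain=100):
    """Breadth-first crawl with two standard loop protections:
    a visited-set and a per-domain page budget. `fetch(url)` is
    assumed to return the list of outgoing links on that page.
    """
    visited = set()
    per_domain = {}
    queue = deque([start_url])
    fetched = []
    while queue:
        url = queue.popleft()
        if url in visited:
            continue  # exact-URL loop protection
        domain = urlparse(url).netloc
        if per_domain.get(domain, 0) >= max_pages_per_domain:
            continue  # budget exhausted: a link maze stops here
        visited.add(url)
        per_domain[domain] = per_domain.get(domain, 0) + 1
        fetched.append(url)
        queue.extend(fetch(url))
    return fetched
```

Against a maze of always-fresh random links, the visited-set never triggers, but the domain budget caps the damage at a fixed number of pages.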

      • LovableSidekick@lemmy.world · 2 hours ago

        LOL wow, this is probably the most elegant way to say what I just said to somebody else. Well-written web crawlers aren’t like sci-fi robots that rock back and forth smoking when they hear something illogical.

      • FaceDeer@fedia.io · 3 hours ago

        A bot that’s ignoring robots.txt is likely going to be pretending to be human. If your site has valuable content that you want to show to humans, how do you distinguish them from the bots?

    • nucleative@lemmy.world · 9 hours ago

      I think sites that feel they have valuable content can deploy this and hope to trap, and perhaps detect, those bots based on how they interact with the tarpit.
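Detection along those lines could be as simple as counting how deep each client wanders into the maze, since no human browses dozens of pages of random links. A hypothetical sketch in Python; the `/maze/` prefix and the threshold are assumptions to illustrate the idea:

```python
from collections import Counter

class TarpitDetector:
    """Flag clients as likely bots once they have fetched more
    tarpit pages than any human plausibly would."""

    def __init__(self, threshold=20):
        self.threshold = threshold  # tune for your site; 20 is arbitrary
        self.hits = Counter()

    def record(self, client_ip, path):
        # Count only requests that land inside the maze.
        if path.startswith("/maze/"):
            self.hits[client_ip] += 1

    def is_bot(self, client_ip):
        return self.hits[client_ip] >= self.threshold
```

A site could feed its access log through something like this and block or rate-limit any client the detector flags.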