A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

Registration bypass: https://archive.is/3tEl0
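
To make the mechanism concrete, here is a minimal sketch of the general tarpit idea in Python, using only the standard library. This is not Nepenthes’ actual code; the vocabulary size, chunk count and delay are arbitrary illustrative values. Every URL returns a slowly streamed page of babble that links to more randomly named URLs, so a naive crawler never runs out of links to follow.

```python
# Minimal illustration of an AI-crawler tarpit (NOT the Nepenthes source,
# just a sketch of the idea): every path returns a slowly streamed page of
# random babble that links to more randomly generated paths.
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# A small fake vocabulary to babble with.
WORDS = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
         for _ in range(500)]

def babble(n_words):
    return " ".join(random.choice(WORDS) for _ in range(n_words))

def random_path():
    return "/" + "/".join(random.choice(WORDS) for _ in range(3))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>")
        # Drip the page out slowly to hold the crawler's connection open.
        for _ in range(20):
            chunk = f"<p>{babble(40)}</p>\n<a href='{random_path()}'>{babble(3)}</a>\n"
            self.wfile.write(chunk.encode())
            self.wfile.flush()
            time.sleep(2)          # deliberate slowness is the point
        self.wfile.write(b"</body></html>")

    def log_message(self, *args):  # keep the console quiet
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()
```

A typical defensive deployment would link to such an endpoint only from a path disallowed in robots.txt, so well-behaved crawlers never see it and only rule-ignoring bots fall in.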

  • ToxicWaste@lemm.ee · 2 days ago

    Sure, it is easy to detect, and they will. However, at the moment they don’t seem to be doing it. The author said this after deploying a PoC:

    Aaron B told 404 Media: “If that’s true, I’ve several million lines of access log that says even Google Almighty didn’t graduate” to avoiding the trap.

    So no, it is not a silver bullet, but it is a defense strategy that seems to work at the moment.

    • FaceDeer@fedia.io · 2 days ago

      No, a few million hits from bots is routine for anything that is facing the public at all. Others have posted in this thread (and in others like it; this article has been making the rounds a lot in the past few days) that even the most basic of sites can get that sort of bot traffic, and that a simple recursion depth limit is enough to avoid the “infinite maze” aspect (see the sketch after this comment).

      As for AI training, the access log says nothing about that. As I said, AI training sets are no longer made by just dumping giant piles of randomly scraped text on AIs. If a trainer scraped one of those “infinite maze” sites, the quality of the resulting data would be checked, and anything generated by a process cheap enough for the site to be running would almost certainly be discarded as junk.
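
The depth-limit defense referred to in the comment above can be pictured with a short sketch. This is a hypothetical breadth-first crawler loop, not any real scraper’s code; `fetch_page` and `extract_links` are stand-ins for the HTTP and HTML-parsing layers, and both caps are invented values.

```python
# Sketch of the depth-limit defense: a breadth-first crawler that refuses to
# follow links more than MAX_DEPTH hops from a seed URL, so an "infinite maze"
# can only ever contribute a bounded number of pages.
from collections import deque

MAX_DEPTH = 5              # illustrative threshold, not any real crawler's setting
MAX_PAGES_PER_HOST = 1000  # a second common cap: total pages taken from one site

def crawl(seed_url, fetch_page, extract_links):
    """fetch_page and extract_links are stand-ins for a real crawler's
    HTTP and HTML-parsing layers."""
    seen = {seed_url}
    pages_fetched = 0
    queue = deque([(seed_url, 0)])       # (url, depth measured from the seed)
    while queue and pages_fetched < MAX_PAGES_PER_HOST:
        url, depth = queue.popleft()
        html = fetch_page(url)
        pages_fetched += 1
        yield url, html
        if depth >= MAX_DEPTH:
            continue                     # stop descending; this bounds the maze
        for link in extract_links(html):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
```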

      • ToxicWaste@lemm.ee · 2 days ago

        The main angle is not to ‘poison’ the training set. It is to waste time, energy and resources. The site loads deliberately slowly and produces garbage, which has to be filtered out.

        As I said: not a silver bullet. But at least some threads were tied up collecting garbage, painfully slowly. And since the data is useless, whatever their cleanup process is has more work to do, or it might even be tricked into discarding the whole website because the signal-to-noise ratio is so bad.

        So I would still say the author achieved his goal.

        • FaceDeer@fedia.io · 1 day ago

          The site producing the nonsense has to produce lots of it every time a bot comes along; the trainers only have to filter it once. As others have pointed out, it is likely easy for an automated filter to spot. I don’t see it as being a clear win.
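
To illustrate what “easy for an automated filter to spot” could mean in practice, here is a hedged sketch of one cheap heuristic a cleanup pass might apply. The vocabulary file and the 0.3 threshold are illustrative assumptions, not any particular company’s pipeline: pages whose words barely overlap a reference vocabulary get dropped.

```python
# Hedged sketch of a cheap training-data cleanup heuristic: if almost none of
# a page's words appear in a reference vocabulary, treat the page as babble.
# The vocabulary source and the 0.3 threshold are illustrative assumptions.
import re

def load_vocabulary(path="/usr/share/dict/words"):
    """Any large list of real words will do; this path exists on many Unix
    systems but is just a convenient stand-in."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip().lower() for line in f if line.strip()}

def looks_like_babble(text, vocab, min_hit_ratio=0.3):
    words = re.findall(r"[a-zA-Z]+", text.lower())
    if not words:
        return True
    hits = sum(1 for w in words if w in vocab)
    return hits / len(words) < min_hit_ratio

# Example: the random letter-soup produced by the tarpit sketch above would
# score near zero and be dropped, while ordinary English passes easily.
if __name__ == "__main__":
    vocab = load_vocabulary()
    print(looks_like_babble("xqzjf blorg wazzik trenk voosh", vocab))             # True
    print(looks_like_babble("the quick brown fox jumps over the lazy dog", vocab))  # False
```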