This could be a tool that works across the entire internet, but in this case I’m mostly thinking about platforms like Lemmy, Reddit, Twitter, Instagram, etc. I’m not necessarily advocating for such a thing; I’m mostly just thinking out loud.

What I’m imagining is a truly competent AI assistant that filters out content based on your preferences. Filtering by keywords and blocking users/communities is quite a blunt instrument; this would be the surgical alternative that lets you be extremely specific about what you want filtered out.

Some examples of the kinds of filters you could set (a rough code sketch of how these might be wired up follows the list):

  • No political threads. Applies only to threads, not comments. Filters out political memes as well, based on the content of the media.
  • No political content whatsoever. Hides also political comments from non-political threads.
  • No right/left wing politics. Self-explanatory.
  • No right/left wing politics with the exception of good-faith arguments. Filters out trolls and provocateurs but still exposes you to good-faith arguments from the other side.
  • No mean, hateful or snide comments. Self-explanatory.
  • No karma fishing comments. Filters out comments with no real content.
  • No content from users who have said/done (something) in the past. Analyzes their post history and acts accordingly; for example, hides posts from people who have said mean things before.
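
To make the idea a bit more concrete, here’s a minimal sketch of what such per-user rules might look like to a client. This is purely illustrative: `FilterRule`, `llm_classify` and the toy keyword matching are all stand-ins I made up for the hypothetical “competent AI”; a real implementation would call an actual model.

```python
# Hypothetical sketch only: llm_classify stands in for whatever model or
# API the "competent AI" would actually be. All names here are invented.
from dataclasses import dataclass

@dataclass
class FilterRule:
    instruction: str      # natural-language description of what to hide
    applies_to: set[str]  # item kinds the rule covers: "thread", "comment"

# Per-user rules, written the same way as the examples in the list above.
RULES = [
    FilterRule("political threads, including political memes", {"thread"}),
    FilterRule("mean, hateful or snide remarks", {"thread", "comment"}),
    FilterRule("karma-fishing comments with no real content", {"comment"}),
]

def llm_classify(text: str, instruction: str) -> bool:
    """Stand-in for the AI. A real client would send `text` plus the
    `instruction` to a model and parse a yes/no answer; this toy version
    just keyword-matches so the sketch actually runs."""
    toy_keywords = {
        "political threads, including political memes": ["election", "senate"],
        "mean, hateful or snide remarks": ["idiot", "moron"],
        "karma-fishing comments with no real content": ["upvote if"],
    }
    return any(k in text.lower() for k in toy_keywords.get(instruction, []))

def should_hide(text: str, kind: str) -> bool:
    """True if any rule that applies to this kind of item matches it."""
    return any(
        kind in rule.applies_to and llm_classify(text, rule.instruction)
        for rule in RULES
    )

# The client would run this over every incoming item before rendering:
print(should_hide("Senate passes new bill", "thread"))   # True
print(should_hide("Upvote if you agree!", "comment"))    # True
print(should_hide("Nice photo of your cat", "comment"))  # False
```

The point is that each rule stays in plain natural language, and the AI, rather than a keyword list, decides whether a given thread or comment matches it.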

Now, obviously, with a tool like this you could build yourself the perfect echo chamber where you’re never exposed to new ideas, which probably isn’t optimal, but it’s also not obvious to me why that would be a bad thing if it’s what you want. There’s far too much content to pay attention to all of it anyway, so why not optimize your feed to contain only the stuff you’re interested in? With a tool like this you could take a platform that’s an absolute dumpster fire, like Twitter or Reddit, clean it up, and all of a sudden it’s usable again. It could also discourage certain types of behaviour online, because trolls, for example, could no longer reach the people they want to troll.

  • z00s@lemmy.world · 9 months ago

    The AI would definitely develop an implicit bias, as it has in many implementations already.

    Plus, while I understand the motivation, it’s good to be exposed to dissenting opinions now and then.

    We should be working to decrease echo chambers, not facilitate them

    • Lvxferre · 9 months ago

      OP is talking in hypothetical terms about a “competent AI”. As such, let’s say that “competence” includes the ability to avoid and/or offset biases.

      • z00s@lemmy.world · 9 months ago

        Assuming that was possible, I would probably still train mine to remove only extremist political views on both sides, but leave in dissenting but reasonable material.

        But if I’m training it, how is it any different than me just skipping articles I don’t want to read?

        • Lvxferre · 9 months ago

          Even if said hypothetical AI required training instead of simply being told what you want removed, it would still be useful, because you could train it once and use it forever. (I’d still not use it.)

    • xor@infosec.pub · 9 months ago

      weird how if i click your username, it takes me right to your comments…
      almost as if you’re a dumb troll…