TikTok and other social media companies use AI tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of how many views it has had. But AI tools cannot identify everything.

Andrew Kaung says that during the time he worked at TikTok, all videos that were not removed or flagged to human moderators by AI - or reported to moderators by other users - would only be reviewed again, manually, if they reached a certain view threshold.

He says at one point this was set to 10,000 views or more. He feared this meant some younger users were being exposed to harmful videos. Most major social media companies allow people aged 13 or above to sign up.

TikTok says 99% of content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views. It also says it undertakes proactive investigations on videos with fewer than this number of views.
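The workflow the article describes - AI removal or flagging first, user reports next, and a manual-review queue only once a video crosses a view threshold - can be sketched roughly as follows. This is purely illustrative: the function, field names, and decision order are assumptions based on Kaung's description, not TikTok's actual system; only the 10,000 figure comes from the article.

```python
REVIEW_THRESHOLD = 10_000  # the threshold Kaung says was in place at one point

def triage(video):
    """Return the action taken for a single video (hypothetical sketch)."""
    if video["ai_verdict"] == "remove":
        return "removed"
    if video["ai_verdict"] == "flag" or video["user_reports"] > 0:
        return "queued_for_moderator"
    if video["views"] >= REVIEW_THRESHOLD:
        return "queued_for_manual_review"
    # Kaung's concern: harmful videos that slip past AI and gather
    # fewer than 10,000 views are never looked at again.
    return "no_review"
```

The last branch is the gap he describes: a video the AI misjudges can circulate freely until it crosses the threshold.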

When he worked at Meta between 2019 and December 2020, Andrew Kaung says there was a different problem. […] While the majority of videos were removed or flagged to moderators by AI tools, the site relied on users to report other videos once they had already seen them.

He says he raised concerns while at both companies but was met mainly with inaction because, he says, of fears about the amount of work involved or the cost. He says some improvements were made subsequently at TikTok and Meta, but younger users such as Cai were left at risk in the meantime.

  • Lvxferre · 23 points · 4 months ago (edited)
    It gets worse when you remember that there’s no clean dividing line between harmful and healthy content. Some content is always harmful, some is healthy by default, but there’s a huge gradient of content that needs to be consumed in small amounts - consuming none of it leads to alienation, and consuming too much leads to a cruel worldview.

    This is doubly true when dealing with kids and adolescents. They need to know about the world, and that includes the nasty bits; but their worldviews are so malleable that, if all you show them is the nasty bits, they normalise them inside their heads.

    It’s all about temperance. And yet temperance is exactly the opposite of what those self-reinforcing algorithms do. If you engage too much with content showing nasty shit, the algo won’t show you cats being derps to “balance things out”. No, it’ll show you even more nasty shit.
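    The self-reinforcing loop described above can be shown with a toy model: an engagement-maximising recommender that only ever boosts whatever the user already engages with. The categories, weights, and update rule are all made up for illustration - no real platform's algorithm is this simple - but the one-way drift is the point.

    ```python
    def update_weights(weights, watched_category, engaged, rate=0.5):
        """Shift recommendation weights toward whatever got engagement."""
        new = dict(weights)
        if engaged:
            new[watched_category] += rate  # more of the same; nothing balances it out
        total = sum(new.values())
        return {k: v / total for k, v in new.items()}

    weights = {"nasty": 0.5, "cats": 0.5}
    for _ in range(5):  # the user keeps engaging with "nasty" content
        weights = update_weights(weights, "nasty", engaged=True)
    # the "nasty" share grows every step; the algorithm never pushes back
    ```

    There is no term in the update that rewards balance, so the feed converges on whatever the user already clicks - temperance is structurally absent.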

    It gets worse due to profiling, mentioned in the text. Splitting people into groups to dictate what they’re supposed to see leads to the creation of extremism.


    In the light of the above, I think that both Kaung and Cai are missing the point.

    Kaung believes that children and teens would be better off if they stopped using smartphones; sorry, but that’s stupid - it’s throwing the baby out with the bathwater.

    Cai, on the other hand, is proposing nothing but a band-aid. We don’t need companies to listen to teens to decide what we should be seeing; we need them to stop deciding altogether what teens and everyone else should be seeing.

    Ah, and about porn, mentioned in the text: porn is at best a small example of a bigger issue, if not a red herring distracting people from the issue altogether.

    • ericjmorey@beehaw.org · 5 points · 4 months ago

      It’s nice to see that others get it. Unfortunately, neither of us has any immediate influence on the largest social media platforms.

      • Lvxferre · 6 points · 4 months ago

        To make it worse, decision-makers - regardless of country - are typically old and clueless about “this computer stuff”. As such, they literally don’t see the problem.