• @azimir@lemmy.ml
    41 points · 7 months ago

    We had an ongoing project studying network communication structures within social media groups. The primary goal was to identify patterns of misinformation dissemination. We lost our ability to poll the API and pull messages to build up the data sets we worked with. Hitting the API used to be free for researchers at a limited rate, but the new doofus in charge demanded a massive fee for even a reasonable quantity of data, so we had to fold up shop. We just routed the students to other projects, but it’s one more way to isolate and control the network so the dictator can stay in charge however they like.
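    A minimal sketch of the kind of collection loop described above, assuming a Twitter/X API v2 bearer token (now available only on paid tiers) and a placeholder search query; the endpoint and parameter names follow the public v2 recent-search documentation, everything else is illustrative:

        import requests

        BEARER_TOKEN = "YOUR_TOKEN_HERE"  # placeholder credential
        SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

        def collect_tweets(query, max_pages=5):
            """Page through recent-search results and return the raw tweet dicts."""
            headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
            params = {
                "query": query,                 # e.g. a hashtag or keyword of interest
                "max_results": 100,             # API maximum per page
                "tweet.fields": "author_id,created_at,referenced_tweets",
            }
            tweets, next_token = [], None
            for _ in range(max_pages):
                if next_token:
                    params["next_token"] = next_token
                resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
                resp.raise_for_status()
                body = resp.json()
                tweets.extend(body.get("data", []))
                next_token = body.get("meta", {}).get("next_token")
                if not next_token:
                    break
            return tweets  # reply/retweet references in these dicts supply the network edges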

    • @Powerpoint@lemmy.ca
      15 points · 7 months ago

      They’re losing users and advertisers every day. It’s mostly just his bots and fascists who are still on it now.

    • The Doctor
      4 points · 7 months ago

      It’s useful for keeping an eye on the chuds in some places. Since the API got neutered it’s harder to do it with automated tools, though.

  • @fubarx@lemmy.ml
    12 points · 7 months ago

    There’s been a steady exodus of news and legal people onto Threads. Techie people seem to be moving more to Mastodon.

    Once the automated posting tools catch up with the Threads and Mastodon APIs, there will be less reason to check anything relevant on Twitter.
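    On the Mastodon side, automated posting is already a small script. A minimal sketch, assuming a placeholder instance URL and access token; the POST /api/v1/statuses endpoint is part of Mastodon's public REST API, while Threads' API is separate and not shown here:

        import requests

        INSTANCE = "https://mastodon.example"  # placeholder instance
        ACCESS_TOKEN = "YOUR_TOKEN_HERE"       # placeholder token from the instance's settings

        def post_status(text, visibility="public"):
            """Publish a status via the Mastodon REST API and return the parsed response."""
            resp = requests.post(
                f"{INSTANCE}/api/v1/statuses",
                headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                data={"status": text, "visibility": visibility},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            post_status("Hello from an automated cross-poster.")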

  • AutoTL;DR (bot)
    7 points · 7 months ago

    This is the best summary I could come up with:


    Meanwhile, X’s content moderation efforts have continued to be heavily scrutinized as X struggles to prove that it’s containing the spread of misinformation and hate speech under Musk’s new policies.

    Most recently, X CEO Linda Yaccarino had to step in—amid outcry from X advertisers and staff—to remove a pro-Hitler post that went viral on the platform, The Information reported.

    X later claimed that the post was removed because it broke platform rules, not because of the backlash, but X’s efforts to proactively monitor antisemitic speech seemingly failed there.

    And nobody’s sure why X’s global escalation team delayed action, although it’s possible that they feared that removing the post might be considered censorship and incite the ire of Musk, the “free speech absolutist.”

    In February, the CITR published a letter, warning that Musk charging high fees for access to Twitter data that was previously free “will disrupt critical projects from thousands of journalists, academics, and civil society actors worldwide who study some of the most important issues impacting our societies today.”

    Reuters reported that CITR’s survey, “for the first time,” importantly quantifies the number of studies canceled since these fees were imposed.


    The original article contains 494 words, the summary contains 191 words. Saved 61%. I’m a bot and I’m open source!

  • @pancake@lemmygrad.ml
    2 points · 7 months ago

    The article gives me bad vibes… On the one hand, it (and the linked articles) seems to rest on the implicit assumption that Israel = Zionism = Judaism, which is very clearly false but could easily be used to “prove” other statements, like this: “Israel = Judaism -> criticism of Israel = criticism of Judaism = antisemitism”. The same logic can be used for “anti-Zionism = antisemitism”.

    Additionally, the article does not mention any criticism of Israel that would not be considered disinformation, leaving that question open. This, of course, is dangerous: it leaves open the possibility that people who “only care about truth” (but do not unconditionally support Israel) end up supporting the restrictive measures on X suggested by the article, while those measures are effectively meant to silence criticism of Israel.

    Finally, one linked article seems to support the idea that all footage from the war zone should be fact-checked before being published. While this would curb some (minority) false footage, it would dramatically reduce the exposure the conflict can get, as well as potentially expose its spread to censorship from many sources.

    So, overall, I think this article is using reasonable-sounding rhetoric to push for centralized control of social media narratives. It’s not a problem that some information on the platform is false; the problem would be if the overall narrative were biased, and X has already implemented Community Notes (which use a genuinely innovative de-biasing algorithm) to fight that. I can only conclude that we should resist the call to introduce potential sources of systematic bias in order to counter ultimately “inoffensive” random bias, which would be a step towards true authoritarianism.
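    For reference, the de-biasing mentioned here is, per the public Community Notes documentation, a matrix factorization: each rating is modelled as a global mean plus user and note intercepts plus a product of latent viewpoint factors, and only notes whose intercept stays high after the factor term absorbs viewpoint-aligned agreement are surfaced as helpful. A simplified sketch; the learning rate, regularization, and threshold below are illustrative placeholders, not the production values:

        import numpy as np

        rng = np.random.default_rng(0)

        def score_notes(ratings, n_users, n_notes, dim=1, epochs=200, lr=0.05, reg=0.03):
            """ratings: list of (user_id, note_id, value) with value 1.0 = helpful, 0.0 = not."""
            mu = 0.0
            user_int = np.zeros(n_users)                   # per-user leniency intercepts
            note_int = np.zeros(n_notes)                   # per-note intercepts: the helpfulness signal
            user_fac = rng.normal(0, 0.1, (n_users, dim))  # latent user viewpoint factors
            note_fac = rng.normal(0, 0.1, (n_notes, dim))  # latent note viewpoint factors
            for _ in range(epochs):
                for u, n, r in ratings:
                    pred = mu + user_int[u] + note_int[n] + user_fac[u] @ note_fac[n]
                    err = r - pred
                    mu += lr * err
                    user_int[u] += lr * (err - reg * user_int[u])
                    note_int[n] += lr * (err - reg * note_int[n])
                    uf, nf = user_fac[u].copy(), note_fac[n].copy()
                    user_fac[u] += lr * (err * nf - reg * uf)
                    note_fac[n] += lr * (err * uf - reg * nf)
            return note_int  # high intercept = rated helpful across viewpoints, not just by one side

        # Example: scores = score_notes([(0, 0, 1.0), (1, 0, 1.0), (2, 1, 0.0)], n_users=3, n_notes=2)
        # and a note would be published only if its score clears a threshold, e.g. scores > 0.4.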

  • @Whirlybird@aussie.zone
    1 point · edited · 7 months ago

    The one he threatened with legal action was so egregiously stupid and biased that they deserved it. They were clearly spreading lies to try and harm Twitter.

    For those that aren’t aware of it, it was a study claiming that hate speech had exploded, but their methodology counted every reply to any tweet they deemed “hateful” as hate speech itself, even if 1000 of the 1001 replies were telling the original tweeter that what they said was hateful and they should stop.

    Yes, you read that right: one person calling someone the N word, followed by 100 people calling them out and saying they’re racist, was counted as 101 instances of hate speech 😂