• Awoo [she/her]@hexbear.net · 9 months ago

    I honestly think AI image generation is fundamentally useless if it’s too heavily controlled to prevent negative press. It’s shit anyway, but it’s so god damn annoying having to dance around the ten bazillion different things they’re censoring for liberal sensibilities.

    It makes it fundamentally impossible to create anything provocative, and being provocative is literally what any actually good art has always been about. Same goes for memes and shit. The good ones are always riding some sort of edge.

    • Chapo_Trap_Horse [none/use name]@hexbear.net · 9 months ago

      Funny side story: ChatGPT 3.5 refuses to recognize that Palestinians are living under apartheid. If you ask it what the conditions of apartheid are, it will list them clearly and use South Africa & Jim Crow U.S. as examples. If you ask whether Palestine fits the same criteria it just laid out, it will agree, but it will refuse to say so definitively, only responding with a million different variants of “it’s a complex issue.”

      AI bros should have to do prison time with hard labor.

    • farting_weedman [none/use name]@hexbear.net · 9 months ago

      100% agree.

      What great harm is being prevented by keeping me from making spacesuit Hitler on a toilet blasting off in a cloud of diarrhea?

      It’s really clear that the point of the guardrails is to cover up the fact that all the models are trained on insanely racist datasets to begin with. And even without the explicit racism in the datasets, it would draw back the curtain on exactly how fascist modern liberalism has become, just by dint of its MO: figure out a visual representation of what you’re looking for and show it to you.

      Surprise, the aryan mommy milkers machine can’t be trusted to present history!

      • Awoo [she/her]@hexbear.net · 9 months ago

        it would draw back the curtain on exactly how fascist modern liberalism has become

        For who, though? Surely not the vast majority of people within the imperial core, who would see nothing wrong with it. Is it just the fair trade coffee problem? Liberals not wanting to feel guilty?

        • farting_weedman [none/use name]@hexbear.net · 9 months ago

          Yes!

          When someone asks for “woman holding child outside beautiful perfect photorealistic trending on artstation” and gets back the aforementioned aryan breast elemental holding a blond-haired, blue-eyed baby with the Rhineland in the background, they might just chalk it up to chance. Twice is a coincidence. Twenty times, with only one brunette whose skin tone just happens to be the same color as sun-bleached bone, might make a person wonder why the ai image generator only makes white people and seems to think the continuum of hair and skin runs from Pamela Anderson to Elvira with a few stops along the way.

          The problem is clear to anyone who has done this: at best, the datasets used for ai training have served to distill and codify the implicit bigotry found across the internet.

          How do you fix implicit bias? No, you can’t address the history of oppression that gave rise to it! You can only use the extant toolset of inclusion to fix the racist ai.

          That’s what the problem is, btw, the racist ai. It’s not the racist society’s ai.

        • farting_weedman [none/use name]@hexbear.net · 9 months ago

          That’s what sucks.

          You can run the old crappy Stable Diffusion 1.5 or SDXL locally (SDXL kinda has some guardrails still, idk, I haven’t messed with it; rough sketch below), but the newest stuff is all locked down like the google ai in the OP. And OpenAI, the company that was supposed to be open source, is keeping things under wraps to “prevent danger to the public.”
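
          A minimal sketch of what “run the old 1.5 yourself” looks like, using the Hugging Face diffusers library; the model id, prompt, and CUDA assumption are illustrative, not anything specific this thread endorses:

          ```python
          # Rough sketch: Stable Diffusion 1.5 locally via diffusers.
          # Assumes a CUDA GPU and that the v1-5 weights are still hosted
          # under "runwayml/stable-diffusion-v1-5".
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              torch_dtype=torch.float16,
              safety_checker=None,  # skip the bundled NSFW filter; local machine, local rules
          )
          pipe = pipe.to("cuda")

          # Prompt is just an example; nothing here phones home or rewrites your request.
          image = pipe("oil painting of a cosmonaut riding a horse").images[0]
          image.save("out.png")
          ```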

          The only dangerous thing about ai models is they’ll tell you how fucked your society is!

          Of course everyone’s up in arms about the image and video generation models that look insane and dogshit, but the really cool thing is speech generation. That’s how you use ai to social engineer!

    • Flyberius [comrade/them]@hexbear.net · 9 months ago

      It’s only good for generating those meaningless header images for the unending tech blog articles I get in my phone’s news feed.

      The number of Postgres articles I’ve seen with elephants racing cars or playing ping pong is astonishing.

    • Raebxeh@hexbear.net · 9 months ago

      imo it’s a great example of the whole “liberals are conservatives and conservatives are liberals” thing. They go back and forth in a race to the bottom that makes this tech useless, because both are constantly petitioning to make it behave as inoffensively as possible.