• Doug7070@lemmy.world
    1 year ago

    This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web — a huge sea of extremist social media posts, SEO-optimized scams and malware, and general data toxic waste — and train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

    Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

    • underisk@lemmy.ml
      1 year ago

      It means they can’t make porn images of celebs or anime waifus, usually.

    • 👁️👄👁️@lemm.ee
      1 year ago

      That’s not at all how an uncensored LLM behaves. That sounds like an untrained model. Have you actually tried an uncensored model? It’s the same thing as a regular one, but it doesn’t refuse to say stupid stuff, like “I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures”. It’s literally just removing the safeguards.

    • RobotToaster
      1 year ago

      It’s a machine, it should do what the human tells it to. A machine has no business telling me what I can and cannot do.

    • Spzi@lemm.ee
      1 year ago

      I’m from your camp, but I’ve noticed I’ve used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March they were my go-to tools for many tasks. I reverted back to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, to ignore the verbose disclaimers, and to convince the model my request is a legit use case. Also, the error rate went up by a lot. It may be a tame lapdog, but it also lacks bite now.

      • Doug7070@lemmy.world
        1 year ago

        I’ve found a very simple expedient that avoids any such issues: just don’t use things like ChatGPT in the first place. While they’re an interesting gadget, I have been extremely critical of the massively over-hyped pitches of how useful LLMs actually are in practice, and have regarded them with the same scrutiny and distrust as the people trying to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn’t add NFTs to my financial assets then, I suspect that not integrating ChatGPT or its competitors into my workflow now will end up being a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield, surrounded by countless dubious ethics questions and doubts about what these tools are even ultimately good for.