image description (contains clarifications on background elements)

Lots of different seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg “big brother is watching” poster, two images of Fluttershy (a pony from My Little Pony), one of them reading “u only kno my swag, not my lore”, a picture of Parkzer from the streamer “DougDoug”, and a slider gameplay element from the rhythm game “osu!”. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems result
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3

  • Smorty [she/her]@lemmy.blahaj.zone (OP) · 16 hours ago

    of course using ai stuffs for medical usage is going to have to be monitored by a human with some knowledge. we can’t just let it make all the decisions… quite yet.

    in many cases, ai models are already better than expert humans in the field. recognizing cancer is the obvious example, where the pattern recognition works perfectly. or with protein folding, where humans are at about 60% accuracy, while Google's AlphaFold is at 94% or so.

    clearly humans need to oversee an AI's output, but we are getting to a point where maybe the human makes the wrong decision and denies an AI's correct generation. so: no additional lives are lost, but many more could be saved

    • Lvxferre [he/him] (edited) · 15 hours ago

      I mostly agree with you, I think that we’re disagreeing on details. And you’re being far, far more level-headed than most people who discuss this topic, who pretend that AI is either e-God or Satanic bytes. (So no, you aren’t an evil AI tech sis. Nor a Luddite.)

      That said:

      For clinical usage, just monitoring it isn’t enough - because when people know that there’s some automated system to catch their mistakes, or that they’re just catching the mistakes of that system, they get sloppier. You need really, really good accuracy.

      Like, 95% accuracy might look like a lot, right? But if it involves life or death, it means one death for every 20 cases, which is rather high. Meanwhile, if AlphaFold got it wrong 60% of the time instead of just 6%, it wouldn't be a big deal.
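      To make the error-rate arithmetic above concrete, here's a tiny sketch (my own illustration, not from the thread; the numbers are just the ones quoted above):

```python
def expected_errors(accuracy: float, cases: int) -> float:
    """Expected number of wrong outputs for a model at a given accuracy."""
    return (1.0 - accuracy) * cases

# 95% accuracy over 20 life-or-death cases: about 1 expected death
print(round(expected_errors(0.95, 20), 2))
# 94% accuracy (the AlphaFold figure above) over 1000 proteins:
# about 60 wrong folds, but no lives on the line
print(round(expected_errors(0.94, 1000), 2))
```

      Same error rate, wildly different stakes, which is the whole point about clinical accuracy requirements.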

      Also, note that we’re both talking about “AI” as if it were a single thing. Under the hood it’s a bunch of completely different things: pattern recognition AI, predictive AI, and generative AI work so differently from each other that we’d need huge walls of text to decide how good or bad each of them is.