• ravhall@discuss.online · 3 months ago

    I cringe every time I hear it. It’s not AI. It’s code. Or, if you want to be fancy, call it an LLM.

    • 1stTime4MeInMCU · 3 months ago

      The real problem is that “AI” is ill-defined, and the goalposts keep moving to “wherever we are now, plus a little more,” so it’s always not quite there yet. Writing a simple script that takes user input on the CLI and performs some action on their behalf is arguably “AI,” in that it automates a task a human would otherwise have to do themselves (see the sketch below).
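
      To be concrete, here’s a made-up illustration of that loosest possible definition, not anyone’s actual tool; the file-backup task is an arbitrary stand-in:

      ```python
      #!/usr/bin/env python3
      # A trivial CLI "agent": it takes input from the user and performs an
      # action on their behalf -- which, by the loosest definition, "automates
      # a task a human would otherwise do themselves".
      import shutil
      import sys

      def main():
          target = input("Which file should I back up? ").strip()
          if not target:
              sys.exit("No file given.")
          backup = target + ".bak"          # hypothetical choice of action
          shutil.copyfile(target, backup)   # the "action on the user's behalf"
          print(f"Copied {target} -> {backup}")

      if __name__ == "__main__":
          main()
      ```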

      I think it’s probably a lot more useful to talk about a system’s capabilities rather than its labels. Can this _ actually drive a car without human intervention? Can this _ actually write software without a coder reviewing it and fixing the mistakes? For most domains we aren’t there yet, where the thing is a human-level (or better) autonomous agent.

      But I guess it’s no surprise that an industry that runs primarily on hype clings to its stockholder-edging labels and marketing terms.

    • enkers@sh.itjust.works · edited · 3 months ago

      I mean, yes and no. Colloquially, there’s some confusion between AI and Strong AI. Strong AI is AI that mimics human intelligence (and doesn’t exist yet), whereas AI is an umbrella term for a lot of loosely related topics in computer science. The way the term is often used lately, where AI = LLMs/GANs exclusively, is incorrect, but both do fall under the field of AI.

      It’s certainly a problem that some people think that AI = Strong AI = LLMs.