Latest generation of products not becoming part of people’s “routine internet use”, researchers say.

  • kakes@sh.itjust.works · 6 months ago

    To address your last point, there have been continual improvements in making LLMs more efficient as well. While I definitely couldn’t run GPT-4 on my local machine, I can get decently close to something like 3.5-turbo, which would have been unheard of only a year or two ago.
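
    To give a sense of what “running locally” looks like these days, here’s a rough sketch using llama-cpp-python with a quantized GGUF model (the model file name is just a placeholder, not a specific recommendation):

    ```python
    # Rough sketch: chatting with a quantized open-weights model locally via
    # llama-cpp-python. The model file is a placeholder -- point it at whatever
    # GGUF model you actually have downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,        # context window size
        n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    )

    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Why have smaller LLMs gotten so much better?"}],
        max_tokens=256,
    )

    print(response["choices"][0]["message"]["content"])
    ```

    Quantization (4-bit weights here) is a big part of why something in the 3.5-turbo ballpark now fits on a consumer GPU or even a laptop.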

    And from the sounds of it, GPT-4o is another big step in efficiency (though it’s hard to say for certain).