• Jesus_666@lemmy.world
    6 hours ago

    With the caveat that the majority of its “solutions” are wrong. It generates output that looks plausible enough to be accepted as an answer but isn’t actually correct. That’s pretty much par for the course with LLMs.

    That lack of precision may be acceptable for a chatbot or a summarizer, but coding demands precision, and that’s something LLMs don’t offer.