A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125

  • Womble@lemmy.world · 7 months ago

    No, the argument is that current techniques give logarithmic returns in data size, which is bad. But the paper says nothing about other potential techniques, nor does it suggest that this is a general result.
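    To make the "logarithmic returns" point concrete, here is a small sketch (not taken from the paper; the constants `a` and `b` are invented for illustration) of what a log-linear scaling law implies: if accuracy grows with the logarithm of the dataset size, then each fixed gain in accuracy requires roughly 10x more data.

    ```python
    import math

    # Hypothetical fit parameters, chosen only for illustration.
    a, b = 0.10, 0.08

    def accuracy(n_examples: int) -> float:
        """Hypothetical log-linear scaling: accuracy ≈ a + b * log10(N)."""
        return a + b * math.log10(n_examples)

    # Each 10x increase in data adds the same fixed increment (b) to accuracy,
    # so linear improvement demands exponentially more data.
    for n in (10**6, 10**7, 10**8, 10**9):
        print(f"{n:>13,d} examples -> accuracy {accuracy(n):.2f}")
    ```

    Under these made-up constants, going from 10^6 to 10^9 examples (1000x the data) only moves accuracy from 0.58 to 0.82, which is the shape of the diminishing-returns argument.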

    • Murvel@lemm.ee · 7 months ago

      Well, obviously they cannot rule out techniques no one has thought of, but likewise they presumably accounted for whatever they deemed to be within the realm of possibility.