• The Octonaut · +7/-7 · 2 days ago

    All that is true of Meta’s products too. It doesn’t make them open source.

    Do you disagree with the OSI?

    • Grapho@lemmy.ml · +7/-3 · 1 day ago

      What makes it open source is that the source code is open.

      My grandma is as old as my great aunts; that doesn’t transitively make her my great aunt.

      • The Octonaut · +3/-4 · 1 day ago

        A model isn’t an application. It doesn’t have source code, any more than an image or a movie does. That’s why the OSI’s definition of an “open source” model is controversial in itself.

        • Grapho@lemmy.ml · +3/-1 · 1 day ago

          It’s clear you’re being disingenuous. A model is its dataset and its weights too, but the weights are also open. And if the source code were as irrelevant as you say, DeepSeek wouldn’t be this much more performant, and “Open” AI would have published theirs instead of keeping the whole release closed.

      • The Octonaut · +11/-3 · edited · 2 days ago

        The data part, i.e. the very first part of the OSI’s definition.

        It’s not available in their papers: https://arxiv.org/html/2501.12948v1 https://arxiv.org/html/2401.02954v1

        Nor on their github https://github.com/deepseek-ai/DeepSeek-LLM

        Note that the OSI only asks for transparency about what the dataset was - a name and the fee paid will do - not that full access to it be free and Free.

        It’s worth mentioning too that they’ve used the MIT license for the “code” included with the model (a few YAML files to feed it to software), but they have created their own unrecognised non-free license for the model itself. Why they keep this misleading label on their GitHub page would only be speculation.

        Without the dataset being available, nobody can accurately recreate, modify, or learn from the model they’ve released. This is the only sane definition of open source for an LLM, since a model is not in itself code with a “source”.

          • The Octonaut · +10/-2 · 2 days ago

            That’s the “prover” dataset, i.e. the evaluation dataset mentioned in the papers I linked you to. It’s for checking the output; it is not the training data.

            It’s also 20 MB, which is minuscule not just for a training dataset but even for what you seem to think is a “huge data file” in general.

            You really need to stop digging and admit this is one more thing you have surface-level understanding of.
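            To put the size gap in perspective, a back-of-the-envelope comparison (the multi-terabyte corpus figure below is a hypothetical ballpark for a modern LLM, assumed for illustration, not DeepSeek’s actual number):

```python
# Back-of-the-envelope size comparison. EVAL_FILE_MB is the 20 MB
# evaluation file mentioned above; the training-corpus size is a
# hypothetical ballpark, not DeepSeek's published figure.
EVAL_FILE_MB = 20
ASSUMED_TRAINING_CORPUS_TB = 10  # assumption: several TB of raw text

corpus_mb = ASSUMED_TRAINING_CORPUS_TB * 1024 * 1024  # TB -> MB
ratio = corpus_mb / EVAL_FILE_MB

print(f"Assumed corpus is {ratio:,.0f}x larger than the 20 MB eval file")
```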

              • The Octonaut · +13/-1 · edited · 2 days ago

                Since you’re definitely asking this in good faith and not just downvoting and making nonsense sealion requests in an attempt to make me shut up, sure! Here are three.

                https://commoncrawl.org/

                https://github.com/togethercomputer/RedPajama-Data

                https://huggingface.co/datasets/legacy-datasets/wikipedia/tree/main/

                Oh, and it’s not me demanding. It’s the OSI defining what an open source AI model is. I’m sure once you’ve asked all your questions you’ll circle back around to whether you disagree with their definition or not.
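                Open text datasets like the ones above are commonly distributed as (often compressed) JSONL, one JSON record per line with a text field. A minimal sketch of reading records in that shape, using inlined sample data rather than the actual dataset files (the field names here are illustrative assumptions):

```python
import json

# Hypothetical sample in the JSONL shape used by many open text
# datasets: one JSON object per line, with a "text" field.
sample_jsonl = "\n".join([
    json.dumps({"text": "First document.", "url": "https://example.org/a"}),
    json.dumps({"text": "Second document.", "url": "https://example.org/b"}),
])

# Parse each line back into a dict and pull out the text payload.
records = [json.loads(line) for line in sample_jsonl.splitlines()]
for rec in records:
    print(rec["text"])
```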

                • HappyTimeHarry@lemm.ee · +1 · 10 hours ago

                  Thank you for posting those links. While I’m not sure the person you replied to was asking in good faith, I myself wanted to see an example after reading the discussion.

                  Seems like even if it’s not fully open source, it’s a step in the right direction in a world where terms like “open” and “non-profit” have been co-opted by corporations and lost their original meaning.

                    • The Octonaut · +1 · 7 hours ago

                    It’s certainly better than "Open"AI being completely closed and secretive with their models. But as people have discovered in the last 24 hours, DeepSeek is pretty strongly trained to be protective of the Chinese government policy on, uh, truth. If this was a truly Open Source model, someone could “fork” it and remake it without those limitations. That’s the spirit of “Open Source” even if the actual term “source” is a bit misapplied here.

                    As it is, without the original training data, an attempt to remake the model would hit the issues DeepSeek themselves had with their “zero” release, where it would frequently respond in a gibberish mix of English, Mandarin and programming code. They had to supply specific data to stop it doing this, data which we don’t have access to.

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · +2/-10 · edited · 2 days ago

                  So you found a legacy dataset that was released nearly a year ago as your best example. Thanks for proving my point. And since you obviously know what you’re talking about, do explain to the class what stops people from using these datasets to train a DeepSeek model?