• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
    2 days ago

    What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go very quickly, and that part isn’t all that valuable. If people are serious about wanting a fully open model, they can build one, and you can use stuff like Petals to distribute the work of training (rough sketch below).
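
    To give a rough idea, this is more or less what using Petals looks like, adapted from their docs. The model name is just a placeholder, the exact API may have changed, and fine-tuning works along the same lines with the trainable parameters kept on your own machine:

    ```python
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    # Any model served on the public Petals swarm; this name is just an example.
    model_name = "petals-team/StableBeluga2"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Only the embeddings live locally; the transformer blocks run on volunteer machines.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    # Forward passes are routed through the swarm; gradients can flow back the same way,
    # which is what makes distributed fine-tuning possible.
    outputs = model.generate(inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))
    ```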

    • trevor@lemmy.blahaj.zone
      2 days ago

      That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.

      It’s like a normal software project that’s really just a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance that provides the functionality isn’t.

      It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
        2 days ago

        Again, if people feel strongly about this then there’s a very clear way to address this problem instead of whinging about it.

        • trevor@lemmy.blahaj.zone
          2 days ago

          Yes. That solution would be to not lie about it by calling something that isn’t open source “open source”.

            • trevor@lemmy.blahaj.zone
              1 day ago

              I mean, god bless 'em for stealing already-stolen data from scumfuck tech oligarchs and causing a multi-billion dollar devaluation in the AI bubble. If people could just stop laundering the term “open source”, that’d be great.

              • KeenFlame@feddit.nu
                1 day ago

                I don’t really think they are stealing, because I don’t believe publicly available information can be property. The algorithm is open source, so that’s a correct labelling.

                • trevor@lemmy.blahaj.zone
                  24 hours ago

                  My use of the word “stealing” is not a condemnation, so substitute it with “borrowing” or “using” if you want. It was already stolen by other tech oligarchs.

                  You can call the algo open source if the code is available under an OSS license. But the larger project still uses proprietary training data, and therefore the whole model, which requires that proprietary training data to function, is not open source.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
            2 days ago

            There was plenty of debate on what qualifies as an open source model last I checked, but I wasn’t expecting honesty from you there anyways.