• The Octonaut
    2 days ago

    I ignored the bit you edited in after I replied? And you’re complaining about ignoring questions in general? Do you disagree with the OSI definition Yogsy? You feel ready for that question yet?

    What on earth do you even mean “take a model and train it on thos open crawl to get a fully open model”? This sentence doesn’t even make sense. Never mind that that’s not how training a model works - let’s pretend it is. You understand that adding open source data to closed source data wouldn’t make the closed source data less closed source, right?.. Right?

    Thank fuck you’re not paid real money for this Yiggly because they’d be looking for their dollars back

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      2 days ago

      Why would you lie about something with timestamps? I edited 18 min ago, and you replied 17 min ago. 🤡

      Do you disagree with the OSI definition Yogsy? You feel ready for that question yet?

      I already answered this question earlier in the thread, but clearly your reading comprehension needs some work.

      What on earth do you even mean “take a model and train it on thos open crawl to get a fully open model”?

      I’m talking about taking the code that DeepSeek released publicly, and training it on the open source data that’s available. That’s what model training is. The fact that this needs to be spelled out for you is amazing.

      You understand that adding open source data to closed source data wouldn’t make the closed source data less closed source, right?.. Right?

      What closed source data are you talking about? Nobody is suggesting this.

      Thank fuck you’re not paid real money for this Yiggly because they’d be looking for their dollars back

      You sound upset there little buddy. I guess misspelling my handle was the peak insult you could muster. Really showing your intellectual prowess there champ.

      • The Octonaut
        2 days ago

        I take more than a minute on my replies, Autocorrect Disaster. You asked for information and I treated your request as genuine, because it just leads to more hilarity, like you describing a model as “code”.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
          1 day ago

          The only hilarity here is you exposing yourself as utterly clueless on the subject you’re attempting to debate. A model is a deep neural network that’s generated by code through reinforcement training on the data. Evidently you don’t understand this, which leads you to make absurd statements. I asked you for information because I knew you were a troll, and now you’ve confirmed it.
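          To make the “code plus data produces a model” point concrete, here’s a minimal sketch - pure Python, toy data, no real framework - where the loop is the training code, the pairs are the data, and the learned weight is the “model” that would get released:

```python
# Minimal illustration of "code + data -> model": the loop below is the
# training procedure, the (x, y) pairs are the data, and the learned
# weight w is the resulting "model".
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data: y = 2x

w = 0.0    # model parameter, initialized arbitrarily
lr = 0.01  # learning rate

for _ in range(1000):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # d/dw of squared error
        w -= lr * grad

print(round(w, 3))  # converges to ~2.0: the trained "model"
```

          Releasing `w` alone is releasing the model; releasing this loop is releasing the training code; the `(x, y)` pairs are the data component the argument is about.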

          • The Octonaut
            1 day ago

            I understand it completely, insomuch as it’s irrelevant - the model is what you’re calling open source, and the model is not open source, because the data set is not published or recreatable. They can open source any training code they want - I genuinely haven’t even checked - but the model is not open source. Which has been my point from about 20 comments ago. Unless you disagree with the OSI’s definition, which is a valid and interesting opinion; if that’s the case you could have just said so. OSI are just a bunch of dudes. They have plenty of critics in the Free/Open communities. Hey, they’re probably American too, if you want to throw in some downfall-of-The-West classic hits!

            If a troll is “not letting you pretend you have a clue what you’re talking about because you managed to get ollama to run a model locally and think it’s neat”, cool, I’ll own that. You could also just try owning that you think it’s neat. It is. It’s not an open source model though. You can run Meta’s model with the same level of privacy (offline) and with the same level of ability to adapt or recreate it (you can’t - you don’t have the full data set or steps to recreate it).

            • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
              1 day ago

              I never disagreed that you can run Meta’s model with the same level of privacy, so I don’t know why you keep bringing that up as some sort of gotcha. The point about DeepSeek is its efficiency. The OSI definition for open source is good, and it does look like you’re right that the full data set is not available. However, the real question is why you’d be so hung up on that.

              Given that the code for training a new model is released and can be applied to open data sets, it’s perfectly possible to make a version trained on open data, which would check off the final requirement you keep bringing up. Also, adapting the model does not require the original training set, since adaptation is done by tuning the weights in the network itself. Go read up on how LoRA works, for example.
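              As a sketch of the LoRA point - adapting a frozen model through a small low-rank update, with no access whatsoever to the original training data - here’s what the arithmetic looks like in numpy (toy dimensions, not any real library’s API):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # model width 8, LoRA rank 2 (toy sizes)
W = rng.standard_normal((d, d))  # "pretrained" weight: frozen, its data unknown

# LoRA learns only a low-rank correction B @ A on new data.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))             # zero init: adapted model starts identical

W_adapted = W + B @ A            # effective weight used at inference

# Only A and B (2 * d * r = 32 numbers) would be trained, while the
# d * d = 64 numbers in W stay frozen; the data that produced W is
# never needed at any point.
print(np.allclose(W_adapted, W))  # True before B and A are tuned
```

              The frozen `W` stands in for the released weights; everything adaptation touches is the small `A`/`B` pair, which is the sense in which extending the model needs no original dataset.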

              • The Octonaut
                1 day ago

                I know how LoRA works thanks. You still need the original model to use a LoRA. As mentioned, adding open stuff to closed stuff doesn’t make it open - that’s a principle applicable to pretty much anything software related.

                You could use their training method on another dataset, but you’d be creating your own model at that point. You also wouldn’t get the same results - you can read in their article that their “zero” version would have made this possible, but they found that it would often produce a gibberish mix of English, Mandarin and code. For R1 they adapted their pure “we’ll only give it feedback” efficiency training method to start with a base dataset before feeding it more - a compromise to their plan, but a necessary one, and with the right dataset, great! It eliminated the gibberish.

                Without that specific dataset - and this is what makes them a company, not a research paper - you cannot recreate DeepSeek yourself (which would be open source), and you can’t guarantee that you’d get anything near the same results (in which case, why even relate it to this model anymore). That’s why both of those matter to the OSI, who define Open Source in all regards as the principle of having all the information you need to recreate the software or asset locally from scratch. If it were truly Open Source, by the way, that wouldn’t be the disaster you think it would be, as then OpenAI could just literally use it themselves. Or not - that’s the difference between Open and Free I alluded to. It’s perfectly possible for something to be Open Source and still require a license and a fee.

                Anyway, it does sound like an exciting new model and I can’t wait to make it write smut.

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
                  1 day ago

                  I didn’t say that using LoRA makes it more open, I was pointing out that you don’t need the original data to extend the model.

                  Basically, what you’re talking about is being able to replicate the original model from scratch given the code and the data. And since the data component is missing, you can’t replicate the original model. I personally don’t find this to be that much of a problem, because people could create a comparable model from scratch if they really wanted to, using an open data set.

                  The actual innovation with DeepSeek lies in its mixture-of-experts approach, which gets far better performance. While it has 671 billion parameters overall, it only uses 37 billion at a time, making it very efficient. For comparison, Meta’s Llama 3.1 uses all 405 billion of its parameters at once. That’s the really interesting part of the whole thing, and the part where openness really matters.
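                  For a feel of the mechanism, here’s a toy top-k mixture-of-experts router in Python - made-up sizes and a stand-in gating function, not DeepSeek’s actual architecture. (Their 37B-of-671B works out to roughly 5.5% of parameters active per token.)

```python
# Toy mixture-of-experts routing: many experts exist, but each input is
# sent to only the top-k of them, so most parameters sit idle per token.
import math

num_experts, top_k = 8, 2
expert_params = 1_000  # parameters per expert (toy number)

def router_scores(x):
    # stand-in for a learned gating network
    return [math.sin(x * (i + 1)) for i in range(num_experts)]

def active_experts(x):
    scores = router_scores(x)
    ranked = sorted(range(num_experts), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

total = num_experts * expert_params
active = top_k * expert_params
print(active_experts(0.5))                              # → [2, 3]
print(f"{active}/{total} params active ({active/total:.0%})")  # → 25%
```

                  A different input would light up a different pair of experts, which is how total capacity can be huge while per-token compute stays small.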

                  And I fully expect that OpenAI will incorporate this idea into their models. The disaster for OpenAI is that their whole business model of selling subscriptions is now dead in the water. When models were really expensive to run, only a handful of megacorps could do it. Now it turns out that you can get the same results at a fraction of the cost.