Hello, tone-policing genocide-defender and/or carnist 👋

Instead of being mad about words, maybe you should think about why the words bother you more than the injustice they describe.

Have a day!

  • 0 Posts
  • 68 Comments
Joined 2 years ago
Cake day: June 10th, 2023



  • trevor@lemmy.blahaj.zonetoMemes@lemmy.mlCouldn't have happened to a nicer guy
    1 month ago

The relevant parts of the comment thread were about the claim that the model is open source. Below, you will find the subject of the comments bolded, for a better understanding of the conversation at hand:

DeepSeek is a Chinese AI company that released DeepSeek R1, a direct competitor to ChatGPT.

    You forgot to mention that it’s open source.

    Is it actually open source, or are we using the fake definition of “open source AI” that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

    many more inane comments…

    And your most recent inane comment…

    That’s something they included, just like open source games include content. I would not say that the model itself (DeepSeek-V3) is open source, but the tech is. It is such an obvious point that I should not have to state it.

Well, cool. No one ever claimed that “the tech” was not included or that parts of their process were not open sourced. You answered a question that no one asked. The question was whether the model itself is actually open source. No one has been able to substantiate the claim that the model is open source, which has made talking to you a giant waste of time.




  • They did not release the final model without the data

    They literally did exactly that. Show me the training data. If it has been provided under an open source license, then I’ll revise my statement.

You literally cannot create a useful LLM without the training data. That is part of the framework used to create the model, and they kept it proprietary. It is part of the source. This is such an obvious point that I should not have to state it.


  • You’re conflating game engines being open source with the games themselves being open source. Proprietary products can use (some) open source things, but that doesn’t make the end product open source.

    Given that LLMs literally need the training data to be worth anything, releasing the final model without training data is not open source.


  • I think this touches on the concept of labor aristocracy pretty well. But by the point you’re a billionaire, even a labor aristocrat would have needed to engage in some level of exploitation. At that point, they’re just doing the same thing the owning class does.

    For instance, once you start doing shit like licensing IP (private property is theft; including “intellectual property”), creating fashion brands, perfume, and other forms of “passive income” (A.K.A. stealing from someone else) like that, you’re not really profiting off of your own labor anymore. You’re exploiting others.

    I don’t think anyone from labor aristocracy can ever get to the point that they’re approaching billionaire status with clean hands (relative to how “clean” one can be under capitalism). But artists like Chappell Roan aren’t anywhere close to that, as someone else pointed out.





  • My use of the word “stealing” is not a condemnation, so substitute it with “borrowing” or “using” if you want. It was already stolen by other tech oligarchs.

You can call the algo open source if the code is available under an OSS license. But the larger project still uses proprietary training data, and therefore the whole model, which requires that proprietary training data to function, is not open source.





  • That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.

It’s like a normal software product that provides a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance that provides the functionality isn’t.

    It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.
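The wrapper analogy above can be sketched in a few lines. This is a hypothetical illustration, not anyone’s real codebase: `proprietary_core` stands in for the closed component (the trained weights, or a binary blob you can’t inspect or rebuild), while `open_wrapper` is the part actually published under an OSS license.

```python
def proprietary_core(prompt: str) -> str:
    # Placeholder for the closed component. In the real case, this is
    # the part that stays secret -- you can call it, but you cannot
    # inspect or reproduce it. (Here it just reverses the string.)
    return prompt[::-1]


def open_wrapper(prompt: str) -> str:
    # This thin layer is what gets released under an open license.
    # On its own it does nothing useful without the closed core above.
    return proprietary_core(prompt.strip())


print(open_wrapper("  hello  "))
```

The point of the sketch: releasing only `open_wrapper` lets you *use* the system, but without the core you cannot study, modify, or rebuild the thing that actually does the work.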


  • trevor@lemmy.blahaj.zonetoMemes@lemmy.mlCouldn't have happened to a nicer guy
    1 month ago

    The training data is the important piece, and if that’s not open, then it’s not open source.

I don’t want the data just to avoid using the official model. I want the data so that I can reproduce the model. Without the training data, you can’t reproduce the model, and if you can’t do that, it’s not open source.

The idea that a normal person can scrape the same amount and quality of data as any company or government, and tune the weights enough to recreate the model, is absurd.