• dan@upvote.au · 1 month ago

    I don’t know much about AI models, but that’s still more than other vendors are giving away, right? Especially "Open"AI. A lot of people just care if they can use the model for free.

    How useful would the training data be? Training of the largest Llama model was done on a cluster of over 100,000 Nvidia H100s so I’m not sure how many people would want to repeat that.
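The scale of a run like that can be sanity-checked with the standard ~6·N·D FLOPs approximation for dense transformer training. A rough sketch, where the parameter count, token count, utilization, and cluster size are all illustrative assumptions rather than official figures:

```python
# Back-of-envelope training-compute estimate using the common
# ~6 * N * D FLOPs approximation for dense transformer training.
# Every number below is an assumption for illustration only.

params = 405e9        # assumed model size (parameters)
tokens = 15e12        # assumed training tokens
flops_total = 6 * params * tokens   # total training FLOPs

gpu_peak = 989e12     # approx. H100 BF16 dense peak FLOP/s
mfu = 0.40            # assumed model-FLOPs utilization
gpus = 16_000         # assumed cluster size

seconds = flops_total / (gpu_peak * mfu * gpus)
days = seconds / 86_400
print(f"total FLOPs ~ {flops_total:.2e}, wall-clock ~ {days:.0f} days")
```

Under these assumptions the run lands in the tens of days, which is why repeating it is out of reach for almost everyone even if the data were published.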

    • baguettefish@discuss.tchncs.de · 1 month ago

      Scientific institutions and governments could rent enough GPUs to train their own models, potentially with public funding and public accountability. It would also be nice to know whether the data Llama was trained on was literally just Facebook user data. I'm not really in the camp of "if user content is on my site, then the content belongs to me".

    • Martineski@lemmy.dbzer0.com · 1 month ago

      Without the same training data you wouldn't be able to recreate the results even with the computing power, so it's not fully open source. Training data is part of the source needed to produce the result, the LLM. It's like an open-source program where you have to add your own lines of code to make it work because the company doesn't provide them.

    • brucethemoose@lemmy.world · 1 month ago (edited)

      "How useful would the training data be?"

      Open datasets are getting much better (Tulu, as an instruct dataset/recipe, is a great example), but it's clear the giants still have "secret sauce" that gives them at least a small edge over open datasets.

      There actually seems to be some vindication of massively multilingual datasets as well, as the hybrid Chinese/English models are turning out very good.

    • brucethemoose@lemmy.world · 1 month ago (edited)

      It turns out these clusters are being used very inefficiently, seeing how Qwen 2.5 was trained with a fraction of the GPUs and is clobbering models from much larger clusters.

      One could say Facebook, OpenAI, X and such are "hoarding" H100s, but they are not pressured to use them efficiently since they are so GPU-unconstrained.

      Google is an interesting case, as Gemini is getting better quickly, but they presumably use much more efficient/cheap TPUs to train.