• 0 Posts
  • 65 Comments
Joined 1 year ago
Cake day: December 14th, 2023


  • BakedCatboy@lemmy.ml to hmmm@lemmy.world · hmmm
    7 points · 21 days ago

    This is why it’s important to get one with adjustable emissivity, so you can set it to match whatever material you’re measuring. Or you can stick some electrical tape on what you want to measure: 3M Super 88 has an emissivity of 0.96, so I just set my Fluke to 0.96 and stick that shit everywhere I want to measure.
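    To sketch why the setting matters: assuming the thermometer infers temperature from radiance via the Stefan–Boltzmann relation (radiance ∝ εT⁴) and ignoring reflected ambient radiation (a real simplification), a reading taken at the wrong emissivity setting can be rescaled after the fact. The function name and structure here are illustrative, not any instrument's API:

```python
def correct_emissivity(t_displayed_k: float,
                       emissivity_set: float,
                       emissivity_actual: float) -> float:
    """First-order correction of an IR thermometer reading taken with the
    wrong emissivity setting. Uses radiance ~ eps * T^4, so the same
    measured radiance implies T_true = T_disp * (eps_set / eps_true)^(1/4).
    Temperatures are in kelvin; reflected ambient radiation is ignored.
    """
    return t_displayed_k * (emissivity_set / emissivity_actual) ** 0.25


# A shiny surface (true emissivity ~0.5) read with the meter set to 0.96
# displays a temperature well below the surface's actual temperature.
reading_k = 300.0
actual_k = correct_emissivity(reading_k, emissivity_set=0.96,
                              emissivity_actual=0.5)
```

    This is only a back-of-the-envelope model; in practice adjusting the instrument itself (or taping the target, as above) is the reliable fix.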




  • That seems kind of like pointing to reverse engineering communities and saying that binaries are the preferred format because of how much those communities can do with them. Sure, you can modify finished models a lot, but what you can do with just pre-trained weights versus being able to replicate the final training or change training parameters is an entirely different beast.

    There’s a reason why the OSI stipulates that the code and parameters used to train are considered part of the “source” that should be released in order to count as an open source model.

    You’re free to disagree with me and the OSI though; it’s not like there’s one true authority on what open source means. If a game that is highly modifiable and moddable despite its source code not being available counts as open source to you because there are entire communities successfully modding it, then all the more power to you.


  • It’s worth noting that OpenR1 have themselves said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate it without re-discovering what they did.

    The OSI’s open source AI definition specifically carves out training data: a model can be considered “open source” under that definition without it. So when it comes to AI, open source is really about providing the code that kicks off training, any checkpoints that were used, and enough detail about training data curation that a comparable dataset could be compiled to replicate the results.


  • It really comes down to this part of the “Open Source” definition:

    The source code [released] must be the preferred form in which a programmer would modify the program

    A compiled binary is not the form in which a programmer would prefer to modify the program - any programmer would much rather have the text source file, which can be edited in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count.

    Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred form” that the model’s own developers use to make changes. They typically change the code that does the training and the training dataset itself. So for the purpose of calling an AI “open source”, the training code and the data used to produce the weights are the “preferred form”, and that is what needs to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll the model back and redo some of the later training steps without restarting from the beginning - if checkpoints are used, they’re also considered part of the preferred form.
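    The checkpoint-and-rollback workflow described above can be sketched in plain Python. This is a toy illustration, not any real training framework’s API - the “model” is just a dict and the “update” a constant increment:

```python
import copy


def train_with_checkpoints(model_state: dict, steps: int, ckpt_every: int):
    """Toy training loop that snapshots state every `ckpt_every` steps so
    later steps can be redone without restarting from scratch."""
    checkpoints = {}
    for step in range(1, steps + 1):
        model_state["weight"] += 0.1  # stand-in for a real gradient update
        if step % ckpt_every == 0:
            checkpoints[step] = copy.deepcopy(model_state)
    return model_state, checkpoints


def rollback(checkpoints: dict, step: int) -> dict:
    """Resume from a saved snapshot instead of retraining from step 0."""
    return copy.deepcopy(checkpoints[step])


final_state, ckpts = train_with_checkpoints({"weight": 0.0}, steps=10,
                                            ckpt_every=5)
# Roll back to step 5 and experiment with different later-stage training,
# without paying for steps 1-5 again.
restored = rollback(ckpts, 5)
```

    Released weights correspond to only `final_state` here: without the loop, the update rule, and the intermediate snapshots, you can’t redo or vary the training that produced them.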

    OpenR1, which is attempting to recreate R1, notes: “No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.”

    I would call “open weights” models actually just “self hostable” models instead of open source.



  • BakedCatboy@lemmy.ml to Programmer Humor@programming.dev · LDAC
    9 points · 2 months ago

    I’m pretty sure that if you rip a CD directly to FLAC, it’s a perfect copy, assuming you’re using good software. PCM isn’t lossy or lossless because it’s not a compressed format at all - it’s an uncompressed bitstream; think of it as the original data. If the audio had been burned to the CD from MP3 data and you then ripped that to FLAC, then yes, you’d be going from lossy to lossless, which would hide the fact that quality was lost when it went to MP3 in the first place.

    Just as an example, you can rip a CD directly to FLAC (you should also find and use the correct sample offset for your CD drive), rip the cue sheet for track alignment, then burn the FLAC back to a new CD using the cuesheet (and the correct write offset configuration), and you’ll get a CD with the exact bit for bit pattern of “pits” burned into the data layer.

    You can then rip both CDs to a raw uncompressed wav file (wav is basically just a container for PCM data) and then you’ll be able to MD5sum both wav files and see that they are identical.

    This is how I test my FLAC rips to make sure I’m preserving everything. This is also how CD checksum databases (like CDDB) work - people across the globe can rip to wav or flac and because it’s the same master of the CD, they’ll get identical checksums, and even after converting the PCM/wav into a flac you are still able to checksum and verify it’s identical bit for bit.
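    The final verification step is a few lines of Python: hash both rips and compare. The file paths below are placeholders; the streaming read just keeps large WAV rips from needing to fit in memory:

```python
import hashlib


def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MiB chunks and return the hex digest,
    so multi-hundred-MB WAV rips don't have to be loaded into RAM at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical paths: a rip of the original disc and a rip of the
# FLAC-burned copy. Equal digests mean the round trip was bit-for-bit.
# md5_of_file("rip_original.wav") == md5_of_file("rip_reburned.wav")
```

    One caveat: comparing whole .wav files also compares their headers, so both rips should come from the same ripping setup (same offsets, same container settings) for the digests to match.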


  • I once had someone open an issue in my side project repo who asked about a major release bump and whether it meant there were any breaking changes or major changes and I was just like idk I just thought I added enough and felt like bumping the major version ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯


  • Unironically why I switched my parents to Linux - they don’t touch any important settings, so usually the only problems are when they get a new popup or prompt they’ve never seen. That of course happens a lot more on Windows, especially when Microsoft pushes some new thing, tries to convince people to enable something, or changes a setting it wants people to use.

    I also love that if they call me I can just ssh in over tailscale and do whatever needs doing.



  • BakedCatboy@lemmy.ml to Comic Strips@lemmy.world · New TV
    8 points · edited · 3 months ago

    I still have a smart TV, so I don’t need a non-smart TV. But I refuse to use the smart features for several reasons:

    • The built-in software is often laggy, ugly, and hard to navigate (mine is from around 2016, so all three of these are huge issues for my specific TV, but my parents just bought a 2024 OLED and I find their gyro / touchpad / pointer remote excruciating to use)
    • I hate the idea of getting used to the Samsung apps / os and then feeling like I need to stick with Samsung
    • They never seem to support the software for very long - my TV pre-dates Samsung’s current TV OS and no longer receives updates, so the Plex app available for it doesn’t even connect - I couldn’t use it even if I wanted to

    I mostly watch stuff downloaded to my Plex server, so a PC running Plex HTPC / desktop, or any Android box (the Nvidia Shield is pretty good) with the Plex or Jellyfin app, is all I need. I also like that I can easily watch YouTube through a browser with ad block and SponsorBlock (I think SmartTube does that for Android boxes like the Shield).

    I also game on the PC so I guess you could consider it a game console for the purposes of categorizing the use case.


  • BakedCatboy@lemmy.ml to Comic Strips@lemmy.world · New TV
    11 points · 3 months ago

    The nice thing about Samsungs is that basically all of their remotes work with all of their TVs, so I just found one without the smart button, and now I can’t even tell that mine is smart - I obviously never connected it to the internet. I think that’s also a lot cheaper than trying to get a commercial dumb TV.