• 8 Posts
  • 289 Comments
Joined 2 years ago
Cake day: July 1st, 2023







  • SmokeyDope@lemmy.world to memes@lemmy.world · Math · 4 up · edited 11 days ago

    My favorite reference for what you've just described is 3Blue1Brown's "Binary, Hanoi, and Sierpinski," which is both fascinating and super accessible for the average non-nerd.

    The pressing point is that this recursive method of counting isn't just a good way of doing it; it's basically the most efficient way it can be done. There is no simpler or more efficient way to count.

    That means the same 'steps' show up in other unexpected areas that ask about the most efficient process for doing a thing. You can map the same binary counting pattern onto the infinite path of fractal geometry in Sierpinski's arrowhead curve, and onto logic puzzles like Towers of Hanoi stacking (see the sketch below). It's wild to think that on some abstract level these are all more or less the same process.
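
    A minimal sketch of that correspondence, assuming Python and the standard optimal Hanoi solution: the disk you move on step k is given by the position of the lowest set bit of k, which is exactly the bit that flips when you count up to k in binary (the "ruler sequence").

    ```python
    def lowest_set_bit(k: int) -> int:
        """Index of the lowest 1-bit of k (0 = the smallest disk)."""
        return (k & -k).bit_length() - 1

    n_disks = 3
    for step in range(1, 2**n_disks):  # the optimal solution takes 2^n - 1 moves
        print(f"step {step:>2} ({step:0{n_disks}b} in binary): move disk {lowest_set_bit(step)}")
    ```

    Running it prints the disk order 0, 1, 0, 2, 0, 1, 0, the same pattern of "which bit flips" you get counting from 1 to 7 in binary.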


  • That would require being able to grow a relatively light textile plant such as linen, cotton, or jute. If Canadian growing seasons are anything like I imagine they are, that idea is more or less a nonstarter because all of those need a warmer climate. So for clothing-making you're left with the dense, heavy textile that comes from shearing farm animals for wool. In modern times you can theoretically grow textile indica hemp with cold resistance and short growing cycles, then process it into softer, somewhat lighter clothing by spinning it into yarn, but that may not be part of native Indigenous Canadian culture.


  • I'm like 60% sure that during the height of Subway in the mid-2000s to early 2010s, the commercials actually did advertise the footlong as a 12-inch sub for $5 (goddamn, those $5 footlong ads were catchy). However, unlike real measurements that have defined standards, Subway wasn't forced to change the name as it enshittified and slowly shrank the sandwiches. Of course, this is long enough ago that an entire generation doesn't know who Jared was, so it's easy to assume it was never a real foot long.


  • The real answer is that the techie nerds willing to learn git and contribute to open source projects are likely to be hobbyist programmers cutting their teeth on bug fixes and minor feature enhancements, not professional programmer-designers with an eye for UI and the ability to build one (or talk with those who can). Also, in open source projects contributors are expected to pull their own weight in getting things done, so you need to both write your own code and learn how to work with the project's specific UI formspecs. Delegating to other people is frowned upon because it's all free volunteer work, so whatever you delegate ends up eating someone else's free time and energy fixing up your PR.



  • SmokeyDope@lemmy.world to memes@lemmy.world · If it works, it works. · 32 up / 1 down · edited 20 days ago

    Thank you for the explanation! I never really watched the Olympics enough to see them firing guns. I would think all that high-tech equipment counts as performance-enhancing stuff, which goes against the spirit of peak human skill, but maybe the sports people who actually watch and run the Olympics think differently about external augmentations in some cases.

    It's really funny with the context of some dude just chilling and vibing while casually firing off world-record-level shots.





  • For all the verbal fellatio Office Space receives, I was expecting it to be some god-like peak of human culture, but in reality it was a mid movie, humor- and plot-wise. It's not bad, but it caters heavily to a specific audience I wasn't part of. I can see it being one of the first and few relatable films for white-collar cubicle boglins at the turn of the century, which feels like pretty much the sole reason I still see it referenced 25 years later.


  • There are some pretty close physical analogs that are fun to think about. You can't move a black hole by exerting physical force on it in the normal way, so a practically infinite gravity well is like an immovable "object". Though if you're sufficiently nerdy, you can cook up some fun ways to harness its rotation as a kind of engine, or throw another black hole at it to create a big explosion and some gravitational waves, which are like a kind of unstoppable force moving at the speed of light.



  • Ken Cheng is a great satirist and probably knows that's not how it works anymore. Most model makers stopped feeding random internet users' garbage into training data years ago and instead started using collections of synthetic training data plus hiring freelance 'trainers' for training data and RLHF.

    Oh, don't worry, your comments are still getting scraped by the usual data collection groups for the usual ad-selling and big-brother BS. But these shitty AI-poisoning ideas I see floating around on Lemmy achieve little more than feel-good circle-jerking by people who don't really understand the science of machine learning models or the realities of their training data and usage in 2025. The only thing these poor people are poisoning is their own neural networks, from hyper-focusing defiance and rage on a new technology they can't stop or change in any meaningful way.

    Not that I really blame them; tech bros and business runners are insufferable, greedy pricks with no respect for the humanities, who think a computer generating an image is the same as human-made art. It's also BS that big companies like Meta/OpenAI got away with violating copyright protections to train their models without even a slap on the wrist. Thank goodness there's now global competition and models made from completely public-domain data.


  • Which ones are not actively spending an amount of money that scales directly with the number of users?

    Most of these companies offer direct web/API access to their own cloud datacenters, and all cloud services have operating costs that scale with usage: the more users connect and run inference, the more hardware, processing power, and bandwidth are needed to serve them (a rough sketch of that scaling is below). The smaller fine-tuners like Nous Research, which take a pre-cooked, openly licensed model, tweak it with their own dataset, and then sell cloud access at a profit with minimal operating cost, will probably do best with the scaling. They are also way, way cheaper than big-model access, probably for similar reasons. Mistral and DeepSeek do things to optimize their models for better compute efficiency, so they can afford to be cheaper on access.
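
    A back-of-envelope sketch of why serving cost scales roughly linearly with users; every number here is a made-up assumption for illustration, not a figure from any provider:

    ```python
    # Hypothetical numbers only, to show how serving cost grows with the user count.
    users = 1_000_000                 # monthly active users (assumed)
    requests_per_user = 30            # requests each user makes per month (assumed)
    tokens_per_request = 1_000        # average tokens generated per request (assumed)
    cost_per_million_tokens = 2.00    # assumed GPU/serving cost in dollars

    monthly_tokens = users * requests_per_user * tokens_per_request
    monthly_cost = monthly_tokens / 1_000_000 * cost_per_million_tokens

    print(f"{monthly_tokens:,} tokens -> ${monthly_cost:,.0f}/month")  # doubles if users double
    ```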

    OpenAI, Claude, and Google are very expensive compared to the competition and probably still operate at a loss, considering the compute cost to train the model plus the cost of maintaining web/API hosting datacenters. It's important to note that immediate profit is only one factor here. Many big, well-financed companies will happily eat the L on operating cost and electricity as long as they feel they can solidify their presence early in a growing market and become a potential monopoly in the coming decades. Control, (social) power, lasting influence, data collection: these are some of the other valuable currencies corporations and governments recognize and will exchange monetary currency for.

    but its treated as the equivalent of electricity and its not

    I assume you mean in a tech-progression kind of way. A better comparison might be that it's being treated like the invention of transistors and computers. Before, we could only do information processing with the cold, hard certainty of logical bit calculations. We got by quite a while just cooking up fancy logic programs to process inputs and outputs. Data communication, vector graphics and digital audio, cryptography, the internet: just about everything today is thanks to the humble transistor and logic gate, and the clever brains that assemble them into functioning tools.

    Machine learning models are loosely based on the brain's neuron structures and the way biological activation patterns encode information in layers. We have found both a way to train trillions of transistors to simulate the basic information-pattern-organizing systems living beings use, and a point in time at which it's technically possible to have the compute needed to do so. The ideas behind the perceptron go back to the 1940s; it took the better part of a century for computers and ML to catch up to the point of putting theory into practice. We couldn't create artificial computer brain structures and integrate them into consumer hardware 10 years ago; the only player then was Google with its billion-dollar datacenters and AlphaGo/DeepMind.
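
    To make that concrete, here's a minimal sketch of a single perceptron learning the logical AND function; the learning rate, epoch count, and the choice of Python are my own illustrative assumptions, not from any particular paper or framework:

    ```python
    import random

    # Truth table for AND: the perceptron should learn to fire only on (1, 1).
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
    b = random.uniform(-1, 1)                           # bias
    lr = 0.1                                            # learning rate (assumed)

    def fire(x1, x2):
        """Step activation: output 1 if the weighted sum crosses the threshold."""
        return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

    for _ in range(100):                    # enough passes for AND to converge
        for (x1, x2), target in samples:
            error = target - fire(x1, x2)
            # Perceptron learning rule: nudge weights toward the target output.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b    += lr * error

    print([fire(x1, x2) for (x1, x2), _ in samples])  # expect [0, 0, 0, 1]
    ```

    Modern networks stack millions of units like this with smoother activations and gradient descent, but the "adjust weights until the pattern comes out right" idea is the same.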

    It's an exciting new toy that people think can either improve their daily life or make them money, so they get carried away, over-promise with hype, and cram it into everything, especially the stuff it makes no sense being in. That's human nature for you. Only the future will tell whether this new way of processing information will live up to the expectations of tech bros and academics.