I’m the administrator of kbin.life, a general-purpose/tech-oriented kbin instance.

  • 0 Posts
  • 255 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • Yeah, there’s an important difference between having covid now and having covid back when there was no herd immunity.

    I caught covid during the pandemic, around a month after receiving the vaccine. So I actually never even got a fever. I had about a day of cold symptoms and never even showed positive on a lateral flow test.

    But since my girlfriend did test positive, as per the rules here at the time, I did a proper lab test. That confirmed that, yes, I had covid at the time of the test or close to it.

    So at the very height of the immunity provided by the vaccine, I still technically caught the virus. Which makes sense.

    So, with the general increase in immunity, there can be a lot of covid detected in wastewater, but many or most of the people producing that waste may only be having cold-level symptoms.




  • r00ty@kbin.life to Lemmy Shitpost@lemmy.world · DnD · 26 days ago

    You know, I think he says a lot of the stupid shit he does just so his name is constantly in the news cycle. As in the old 1980s game “Rock Star Ate My Hamster”, the phrase “any publicity is good publicity” must be his mantra.

    Anyway, I agree. I’m sick of hearing about his latest antics.



  • OK, look back at the original picture this thread is based on.

    We have two situations.

    The first is a dedicated system providing navigation and other subsystems for a very specific purpose, with very specific and very limited hardware: an 8-bit CPU with a clearly known RISC-esque instruction set, 4 KB of RAM and a bus to connect devices.

    The second is a modern computer system with unknown hardware: one of many CPUs offering the same instruction set but with differing extensions, with a lot of memory attached.

    You are going to write software very differently for these two systems. You cannot realistically abstract on the first system; in reality, you can’t even use libraries directly. At best, maybe you can borrow code from a library. On the second system you MUST abstract, because you don’t know whether the target system will run an Intel or AMD CPU, what the GPU might be, what other hardware is in place, etc.

    And this is why my original comment said you just cannot compare these systems. One MUST use abstraction; the other must not. And abstractions DO produce overhead (which is an inefficiency). But we NEED that, and it’s not a bad thing. A small sketch of that overhead follows below.
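    To make the overhead concrete, here is a minimal Rust sketch (all names are illustrative, not taken from either system): a direct call on a known concrete type is statically dispatched and can be inlined, while a call through a trait object is resolved via a vtable at runtime.

    ```rust
    // Minimal illustration of abstraction overhead: the direct call is
    // statically dispatched (and inlinable), while the trait-object call
    // goes through a vtable at runtime.
    trait Device {
        fn read(&self) -> u8;
    }

    struct KnownSensor;

    impl Device for KnownSensor {
        fn read(&self) -> u8 {
            42 // stand-in for touching real, fixed hardware
        }
    }

    fn main() {
        let sensor = KnownSensor;

        // Fixed hardware: the concrete type is known at compile time.
        let direct = sensor.read();

        // Unknown hardware: all we know is "some Device", so the call
        // is dynamically dispatched.
        let abstracted: &dyn Device = &sensor;
        let via_vtable = abstracted.read();

        assert_eq!(direct, via_vtable);
    }
    ```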


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Exactly my point, though. My original point was that you cannot compare these. And the main reason you cannot compare them is the abstraction required for modern development (which happens both at the development level and in the operating system you run the software on).

    The Apollo software was machine code running on known bare metal, interfacing with known hardware, with no requirement to deal with abstraction, libraries, unknown hardware, etc.

    This is why my original comment made it clear: you just cannot compare the two.

    Oh, one quick edit to say that I do not in any way mean to take away from the amazing achievement of the Apollo developers. That was amazing software. I just think it’s not fair to compare apples with oranges.


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    It does. It definitely does.

    If I write software for fixed hardware, with my own operating system designed for that fixed hardware, and you write software for a generic operating system that can work with many hardware configurations, mine runs faster every time. Every single time. That doesn’t make either better.

    This is my whole point. You cannot compare the Apollo software with a program written for a modern system. You just cannot.


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Wait a second. When did I say abstraction was bad? It’s needed now. But when you compare 8-bit machine code written for specific hardware against modern programming, where you MUST handle multiple x86/x86_64 CPUs and multiple hardware combinations (either in the executable or in the libraries that handle the abstraction), of course there is an overhead. If you want to tell me there’s no overhead, then I’m going to tell you where to go right now.

    It’s a necessary evil we must have in the modern world. I feel like the people hating on what I say are misunderstanding the point I’m making. The point is WHY we cannot compare these two things!


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Except it’s not nonsense. I’ve worked in development through both eras. You need to develop in an abstracted way because there are so many variations of hardware to deal with.

    There is bloat, for sure. A lot of it is because it’s usually much better to use an existing library than to reinvent the wheel, and a library needs to cover many more use cases than your own. I encountered this myself when I used a web library to work with releases on Forgejo: I had it working generally, then saw there was a dedicated library for it. The boilerplate to make that library work was more than what I had written to just make the web requests myself (a sketch of the direct-request approach follows at the end of this comment).

    But that’s mostly size. The bloat in terms of speed is mostly in the operating system, I think, and in hardware abstraction. Not in libraries, by and large.

    I’m also going to say that legacy systems being papered over doesn’t always make things slower. Where I work, I’ve worked on our legacy system for decades, but on the current product for probably the past 5-10 years. We still sell both. The legacy system is not the slower one.
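    For illustration, a minimal sketch of the direct-request approach. The instance URL and repository path are placeholders, and it assumes the reqwest crate with its blocking feature enabled; /api/v1/repos/{owner}/{repo}/releases is the standard Gitea/Forgejo releases route.

    ```rust
    // Fetch a repository's releases with a plain HTTP GET instead of a
    // client library. Assumes reqwest with features = ["blocking"].
    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        // Placeholder host and repository; any Forgejo/Gitea instance works.
        let url = "https://forgejo.example.org/api/v1/repos/owner/repo/releases";
        let body = reqwest::blocking::get(url)?.text()?;
        println!("{body}");
        Ok(())
    }
    ```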


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    It’s a different world now, though. I could go into detail about the differences, but suffice it to say you cannot compare them.

    Having said that, Windows lately seems to just be slow on very modern systems for no reason I can ascertain.

    I swapped back to Linux as my primary OS a few weeks ago and it’s just so snappy in terms of UI responsiveness. It’s not better in every way. But I certainly never sit waiting for Windows to decide to show me the context menu for an item in Explorer.

    Anyway, in short, the main reason for the difference between old and new computer systems is the necessary abstraction.




  • The privacy stuff? I’ve seen it happen in 11, for sure. I always check after an update now, out of habit. But I’ve not seen it in a while.

    Resetting dual-boot stuff? Before EFI/UEFI, it would happen on most Windows updates: the update would just overwrite the boot record in a totally arrogant “fuck you” to whatever was already there. But since EFI/UEFI, it generally plays nicely with other operating systems.


  • “I can’t use my plugins for Elite Dangerous or extra software, like EDMC.”

    Why not? The GitHub page even says it will work with Wine. I’ve not played ED for a long time, but I’m sure I had EDDiscovery at least working with it on Linux a few years ago. For other games, like WoW, I have external tools that interface with the game working fine, some within the same Wine environment, some even external. You just need to make sure the drive is mapped where the app expects it (you can always go via the Z: drive too).

    From my experience, I have Steam working and pretty much every game I want to play has worked. I don’t play games with kernel anti-cheat even on Windows, so I’m not missing anything there. Battle.net runs fine, even with ray-traced shadows in WoW. Pretty much everything else I need works. The only things I miss are the games that are part of the Xbox/Windows Store, but that’s hardly Linux’s fault. Maybe Visual Studio too, but I do have the open-source “Code” to cover most of what I did in VS, so…

    I have dual boot, and I’ve not used it to go back to Windows in weeks. Almost everything just works.


  • I’ve been lucky then; the only problems I’m having (Wayland + NVIDIA) are:

    • Steam menu corruption, mostly on the friends window (can be solved by maximising the window).
    • Maximising the browser on my second screen results in not all of the screen being used, while buttons react as if the whole screen were in use (so you’re not clicking where you think you are). The fix is to resize the window to maximum manually. A minor annoyance.

    Oh, and I disabled stand-by entirely. It was 50/50 whether the machine would return from it. I think most of the problems are because I have mismatched resolutions (1080p and 1440p).


  • “Sort of when it clicked for me was when I realized that your code needs to be a tree of function calls. I mean, that’s what all code is anyway, with a main function at the top calling other functions which call other functions. But OOP adds a layer to that, i.e. objects, and encourages you to do all function calls between objects. You don’t want to do that in Rust. You kind of have to write simpler code for it to fall into place.”

    Yes, this ties in with what I’m saying though. You need a paradigm shift in your design philosophy, which is hard when you come from a Cx background.

    I also think that in OO there shouldn’t be much cross-contamination. It happens (and it happens a lot in my personal projects, to be fair), but when things are well designed it shouldn’t be needed. In C#, for example, rather than a function owning a resource, a class should own it. So when sharing an object between classes, you take it as a reference from a method in one class and pass it into a method of another class, rather than calling the first class and making it a dependency of the second class too. That way you have a one-way dependency rather than a two-way one (a Rust sketch of this idea follows below).

    This kind of thinking has carried over into creating objects in Rust. Also, within the same class, the idea of a (non-static) function accepting an object that belongs to the class and was returned by another function in the same class feels very wrong from a Cx point of view. If we knew we were going to do that, we’d just make it a class-level variable and use it in both functions.

    Like I say, it’s just another way of thinking, and I’m not there yet.
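    A minimal Rust sketch of that one-way-dependency idea (all types here are hypothetical): the owning struct holds the resource, and the consumer only borrows it for the duration of a call, so neither type stores a reference to the other.

    ```rust
    // Owner holds the resource; Consumer borrows it per call, so the
    // dependency only ever flows one way.
    struct Resource {
        value: u32,
    }

    struct Owner {
        resource: Resource,
    }

    impl Owner {
        fn resource(&self) -> &Resource {
            &self.resource
        }
    }

    struct Consumer;

    impl Consumer {
        // Borrow the resource for this call only; Consumer never keeps it.
        fn use_resource(&self, r: &Resource) -> u32 {
            r.value * 2
        }
    }

    fn main() {
        let owner = Owner { resource: Resource { value: 21 } };
        let consumer = Consumer;
        println!("{}", consumer.use_resource(owner.resource()));
    }
    ```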


  • “The bingo one actually uses crossbeam channels instead of mutexes, so that’s nice. I haven’t looked too closely at it though.”

    The C# original uses the equivalent of read/write locks, but I found it problematic to work the same way in Rust, and then discovered the message-passing option was far easier to implement and actually avoids holding up threads. So I went with that. Much easier, and much faster in execution, I think.

    “I don’t think you can do too much about the Spectrum one if you want to keep the two threads, but here’s what I would change related to thread synchronization. Lemmy doesn’t seem to allow me to attach patch files for whatever reason, so have an archive instead: dblsaiko.net/pub/tmp/patches.tar.bz2 (I wrote a few notes in the commit messages.)”

    In reality, I’m never likely to remake the CPU project in Rust. Firstly, I’d need to re-engineer it entirely, because it makes extensive use of hierarchical classes, which just don’t work the same way in Rust, and I’m not sure traits would let me do things in even close to the same way. But if it were to work with a CPU emulator, they would need to share the memory, and the CPU would need its own thread.

    “So basically it’s channels indexed by channel number and name? That one is actually one of the easy cases. Store indices instead:”

    This was something I was thinking about the other evening. I needed the index to remove some other data anyway, and wondered if I’d be better off having a master vector and usize lookups into that data store. It’s one extra lookup, but an index lookup is tiny, and speed isn’t a real issue anyway. It’s replacing Perl scripts pulling data from MySQL; it couldn’t possibly run slower than that. :P

    Thanks for the commentary, though. I think I’m going to make the changes to use indices to look up data; I wanted to re-order the way things are done a bit anyway. The potential problem I see is that the lookups would need to be regenerated every time I delete something. But since everything is rebuilt from a file on load, maybe I can just remove deleted items from the lookups and leave them in the vector; on the next run they’d be gone anyway. A sketch of that approach follows below.
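    A minimal sketch of that master-vector-plus-index-lookups layout (type and field names are hypothetical): the vector owns the data, the maps store usize indices into it, and “deleting” only drops the lookup entries, leaving the vector slot in place until the next rebuild from file.

    ```rust
    use std::collections::HashMap;

    struct Channel {
        number: u32,
        name: String,
    }

    #[derive(Default)]
    struct Store {
        channels: Vec<Channel>,          // master vector: owns every Channel
        by_number: HashMap<u32, usize>,  // channel number -> index
        by_name: HashMap<String, usize>, // channel name   -> index
    }

    impl Store {
        fn insert(&mut self, ch: Channel) {
            let idx = self.channels.len();
            self.by_number.insert(ch.number, idx);
            self.by_name.insert(ch.name.clone(), idx);
            self.channels.push(ch);
        }

        // "Delete" = remove from the lookups only; the entry stays in the
        // vector, so no indices are invalidated. It disappears for real on
        // the next rebuild from file.
        fn remove_by_name(&mut self, name: &str) {
            if let Some(idx) = self.by_name.remove(name) {
                self.by_number.remove(&self.channels[idx].number);
            }
        }

        fn get_by_name(&self, name: &str) -> Option<&Channel> {
            self.by_name.get(name).map(|&i| &self.channels[i])
        }
    }

    fn main() {
        let mut store = Store::default();
        store.insert(Channel { number: 1, name: "general".into() });
        assert!(store.get_by_name("general").is_some());
        store.remove_by_name("general");
        assert!(store.get_by_name("general").is_none());
    }
    ```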


  • r00ty@kbin.life to linuxmemes@lemmy.world · Snap... · 1 month ago

    I remember those times too. The difference today is that there are so many more libraries, and projects use those libraries a lot more often.

    So using configure and make means the user also takes on the responsibility of keeping all those libraries up to date. And if we’re talking about avoiding binary installs entirely, each of those libraries needs its own regular configure/make cycle too. It’s not unusual for large packages to depend on 100+ libraries, at which point building and maintaining the builds for all of them yourself becomes untenable. (Gentoo, I think, exists to automate a lot of this while still building from source.)

    I understand why binaries with references to other binary packages as prerequisites are used. I also understand where the limits of this are, and why AppImage/Flatpak/snaps exist. I just don’t particularly like the latter as a concept, but I accept there are times you might need them.