Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)

  • 0 Posts
  • 99 Comments
Joined 2 years ago
Cake day: June 18th, 2023


  • Fair point, though there are ways to change the probabilities of the fusion paths, just never fully to 0.
    Reaction probabilities scale with reactant concentration and temperature in ways we can exploit.
    I tried to find some numbers on the relative probabilities and fusion chains, and ran into “The helium bubble: Prospects for 3He-fuelled nuclear fusion” (2021), which I hope is a credible source.

    This paper contains a figure that puts numbers to the fusion preferences you mentioned:
    Figure 1. Cross-section of different candidate fusion reactions as a function of the ion temperature.

    Paraphrasing the paper’s section “Technical feasibility of D-3He fusion”: up to 2 billion K, the discrepancy between ²H-³He and ²H-²H fusion grows, reaching about 10x. A ²H-²H reaction will either produce a neutron (¹n) and a ³He, or a ¹H and a ³H, with the ³H then (effectively) immediately undergoing the much more reactive ²H-³H reaction, producing a neutron too.
    In addition to picking an ideal temperature (2 GK), we can also, for the price of less than a factor-2 increase in pressure, use a 10:90 mixture of ²H:³He, or an even leaner one. Correcting for the reaction cross-sections, this proportionally makes the ²H-²H branch a factor 10/90 ≈ 11% as likely as ²H-³He.
    Past that, reactivity scales roughly with the square of the pressure and inversely with the ²H concentration, so another 10x in fusion plasma pressure would net another 100x decrease in neutron emission at equal energy output (see the sketch below).
    Given how quickly fusion reactivity rises with better fusion devices, we can probably expect to work with much leaner mixtures than 10:90 as the technology matures, but even 10:90 at 2 GK would have about 1/100ᵗʰ the neutrons per reaction, and less than 1/100ᵗʰ per unit of energy produced, compared to fully neutronic fusion like ²H-³H.

    The problem is solvable, but there is definitely a potential for taking shortcuts and performing ²H-³He with much higher neutron emissions.
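
    To make the mixture arithmetic concrete, here is a minimal sketch (the 10x cross-section ratio is a round number read off Figure 1, not a precise value):

    ```python
    # Rough estimate of how often the neutron-producing 2H-2H branch fires
    # relative to 2H-3He in a lean fuel mixture.
    def dd_to_dhe3_rate_ratio(d_fraction: float, he3_fraction: float,
                              sigma_ratio: float = 10.0) -> float:
        # Rate(D-D)   ~ sigma_DD   * n_D * n_D
        # Rate(D-3He) ~ sigma_DHe3 * n_D * n_He3
        # One n_D factor cancels, leaving (n_D / n_He3) / sigma_ratio.
        return (d_fraction / he3_fraction) / sigma_ratio

    print(dd_to_dhe3_rate_ratio(0.10, 0.90))  # ~0.011, i.e. roughly 1/100th
    ```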



  • Kind of; it’s more complicated than that.

    There are different fusion reactions; one example is the ²H-³He fusion used by Helion.
    ²H-³He is aneutronic, so it doesn’t produce chargeless particles (every clump of stuff is either an electron or contains a proton). It is also an easy-to-achieve fusion reaction with good energy yield, with the downside that we don’t have ³He. Helion therefore has to split their fusion into two steps: producing ³He via ²H-²H fusion in a breeder reactor, then fusing it in their energy reactor. The first step emits neutrons and doesn’t really produce energy; those neutrons could in turn be used to breed further fuel.
    Not having neutron emissions is quite useful because it allows you to make your fusion generator a lot smaller and safer around people, so neutron emission is certainly something you want to avoid for far more valuable reasons than just improving efficiency.

    If we get very good at fusion we could also use the much harder to achieve ¹H-¹¹B reaction, which produces some neutrons but at very low energy (0.1% of total energy output) and is effectively aneutronic as far as safety is concerned (the neutrons have low penetration power and don’t really activate material, so they can’t be used to breed, say, weapons-grade fission material). ¹H and ¹¹B are common, so no further steps are needed to produce them.

    There might still be direct-to-electricity pinch-fusion approaches that use neutronic fusion; I tried looking for one but didn’t find an example. We’ll see what ends up being done in practice, but close to 100% energy utilization is at least possible using pinch fusion.

    On the other hand, the losses in heat conversion are inevitably huge. The higher the temperature of the heated fluid compared to the environment, the higher the efficiency; but given that our environment sits at around 300 K, we can’t really escape losing a significant amount of energy even if we use liquid metal (like General Fusion) and manage to get up to 1000 K. The minimum loss from going through heat is <environment temperature>/<internal temperature> (the Carnot limit), which still amounts to a 30% energy loss if we manage to use 1000 K liquid metal or supercritical steam to capture the fusion energy and drive a turbine. In practice, supercritical steam turbines as used in nuclear plants hover around 50% efficiency at the high end.

    The magnetic field in pinch fusion interacts with the (charged) particles directly, which are emitted at (many, many) millions of K, so the theoretical efficiency is over 99.99% (see the sketch below). In effect, heat-based fusion loses a lot of that energy by mixing the extremely hot fusion products with the much colder working fluid.
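
    As a quick sanity check on those numbers, a minimal sketch of the Carnot bound (the 3 MK value below is just a stand-in for “millions of K”):

    ```python
    # Carnot limit: a heat engine between t_hot and t_env must reject at
    # least t_env / t_hot of the input energy as waste heat.
    def carnot_loss(t_env_k: float, t_hot_k: float) -> float:
        return t_env_k / t_hot_k

    print(carnot_loss(300.0, 1_000.0))      # 0.30 -> at least 30% lost at 1000 K
    print(carnot_loss(300.0, 3_000_000.0))  # 0.0001 -> >99.99% theoretical efficiency
    ```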



  • Redjard@lemmy.dbzer0.com to Microblog Memes@lemmy.world, re: “It's always steam”

    Solar is no doubt the coolest.
    Hydro and wind are also very neat, going directly from mechanical to electrical via a generator, without a steam turbine.

    There is also a very cool fusion category based on dynamic magnetic fields, which basically forms a magnetic piston: it expands directly due to the release of charged particles from fusion, and the energy is captured from that moving field by slowing it back down and initiating the next compression.
    A fully electric virtual piston engine in some sense, driven by fusion explosions and capturing the energy straight into electricity.
    It feels so much more modern than going highly advanced superconducting billion-K fusion reactor to heat to steam to turbine.



  • SMB should be fine. I used it for years on my primary systems (I moved to sshfs when I finally migrated to Linux), and it was never noticeably less performant than local disks.
    Compared to local NTFS partitions anyway; NTFS itself isn’t all that fast at file operations either.

    If you are looking at snapshots or media, that is all highly sequential with few file operations anyway. Something like gaming off of a NAS via SMB also works, but I think you’d notice the latency SMB adds. It might also be IOPS limitations there.

    Large file sizes combined with highly random, fast, low-latency reads are a very rare combination to see. I’d think swap files, game assets, browser cache (usually not that large, to be fair).

    For anything with fewer files and larger changes it always ran at over 100 MiB/s for me until I exhausted the disk caches, so essentially the theoretical max after accounting for protocol losses (see the sketch below).
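
    A minimal sketch of that theoretical max, assuming gigabit ethernet (the link speed isn’t stated above, so it and the overhead factor are assumptions):

    ```python
    # Usable SMB throughput on an assumed gigabit link.
    link_mbit_per_s = 1000.0
    payload_fraction = 0.94  # rough allowance for TCP/IP + SMB framing (assumed)

    usable_mib_per_s = link_mbit_per_s / 8 * payload_fraction * 1e6 / 2**20
    print(f"~{usable_mib_per_s:.0f} MiB/s usable")  # ~112 MiB/s
    ```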

    for music what I use is AIMP. I only hope it can work with wine because I don’t want to run a VM for it

    I use that on Android. I never knew there were desktop versions; odd that it supports Android but not other Linux.
    Wine is very reliable now; it will almost certainly work out of the box.
    Otherwise there are also projects to run Android apps on Linux, though no doubt with much more effort and a lower chance of success than Wine.


  • because I prefer a local player over jellyfin

    I used VLC, then mpv, for years before setting up Jellyfin. I could still use them if I wanted to.
    Over the internet, the largest files (~30 Mbit/s) ran up against my upload limit, but locally they still played snappily.
    Scrubbing through files was as snappy as playing off of my SSD.

    I do understand wanting music locally. I sync my music to my phone and portable devices too, so I’m not dependent on internet connectivity. None of those devices even support HDDs, however; for my PC I see no reason not to play off of my NAS using whatever software I prefer.

    I didn’t want to buy him an SSD unnecessarily big […] for the lower lifespan

    Larger SSDs almost always have higher maximum total writes. If you look at very old drives (128 or 256 GB, from around 2010-2015) or very expensive ones, you get into higher-quality NAND cells; but on a budget you can’t afford the larger of those, and the older ones may have 2-3 times the cycles per cell at about a tenth the capacity, so still only roughly 1/3rd the total writes.
    The current price optimum, to my knowledge, is 2 TB SSDs for ~85 USD with TLC rated up to 1.2 PBW, i.e. about 600 write cycles. If you plan on a lifetime of 10 years, that is 330 GB per day, or 4 GB/day/USD (see the sketch below). I can’t even find SLC on the market anymore (outside of 150 USD 128 GB standalone chips), and I have never seen it close to that price per bytes written. (If you try looking for SLC SSDs, you will find incorrectly tagged TLC SSDs, with TLC prices and lifetimes; that’s because “SLC cache” is a common SSD buzzword.)
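
    That endurance arithmetic as a minimal sketch (the 2 TB / 1.2 PBW / 85 USD figures are the ballpark numbers from above, not a specific product):

    ```python
    # Endurance arithmetic for the ballpark 2 TB TLC drive quoted above.
    capacity_tb = 2.0
    endurance_pbw = 1.2      # rated petabytes written
    price_usd = 85.0
    lifetime_years = 10.0

    cycles = endurance_pbw * 1000 / capacity_tb                # 600 full-drive writes
    gb_per_day = endurance_pbw * 1e6 / (lifetime_years * 365)  # ~330 GB/day
    gb_per_day_per_usd = gb_per_day / price_usd                # ~3.9 GB/day/USD

    print(f"{cycles:.0f} cycles, {gb_per_day:.0f} GB/day, "
          f"{gb_per_day_per_usd:.1f} GB/day/USD")
    ```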

    I didn’t want to buy him an SSD unnecessarily big […] for the cost

    Another fun thing about HDDs is that they have a minimum price, since they are large, bulky chunks of metal that are inherently hard to manufacture and worth their weight in materials.
    That lower cutoff seems to be around 50 USD, for which you can get anything from 500 GB to 2 TB at about the same price; 4 TB sells for about 90 USD.
    In terms of pure price, ignoring value and just going for the cheapest possible storage, there is never a reason to buy an HDD below the 2 TB model at ~60 USD. A 1 TB SSD costs the same as a 1 TB HDD, and below that SSDs are cheaper than HDDs.

    So unless your use case requires 2 TB+ (or 1 TB+ combined with immensely high rewrite rates), SSDs are the better choice.

    a few VMs, a couple of snapshots

    I have multiple complete disk images of various defunct installs archived on my NAS. That is a prime example of stuff to put into network storage: even if you use them, loading them up is comparable in speed to doing it off of an HDD.