It was definitely DNS
I’ll introduce you to the concept of WAF, Wife Acceptance Factor.
Basically, all smart IoT devices MUST default back to dumb behaviour in an expected manner. All MITM systems must either fail gracefully, fall back simply, or be robust enough to not fall over.
I’ve been trying my very best to get Plex to a high WAF, but it fucks up constantly.
I get this constantly:
The WAF on my household tech is pretty high. That includes Plex.
I have in-house dual/redundant DNS, and my Plex has nearly 100% uptime, 24/7/365, on old server hardware. Our living space is far enough away from the servers that the noise isn’t really a problem, and I can break most of what I have installed/set up and the internet keeps working thanks to the independent, redundant DNS. All of my homelab domains are just a stub zone in my main DNS, so everything keeps working if something dies or stops working.
I use Jellyfin instead of Plex, and it runs on my old PC, which sits next to my regular PC. I’d like to move it, but it’s a bit too big to fit anywhere conveniently.
The WAF is teetering on a knife’s edge. I have been spending so much time getting it set up and adding content that I haven’t cleaned up the content much. I need to go and reorganize things to put her workout videos in a separate spot because they’re very hard to find. If I can manage to get everything working well, she’ll probably let me finally cancel our Netflix and Disney+ subscriptions, provided I top up our content a bit more.
I have yet to mess with DNS. I’d really like to give our Jellyfin a DNS entry, but I’d also really like it to be routed internally when on our network so we don’t take a big perf hit. Doing that means I need to run a custom DNS on our network, so I’ve set up a second wifi network to play around with. But hopefully in the next month or so we’ll have a nice domain, like “media.mydomain.com” or something, which would get routed internally when on wifi and still have TLS working properly.
For full WAF compatibility you need a front end where she can add content herself, like Ombi or Overseerr.
So far, samba is working. But I’ll check those out too.
These kinds of split DNS routing issues are something I’ve struggled with for a while. From my experience, you have basically two options, and depending on your specific situation only one might be viable.
The first option, which may or may not be available to you, entirely depends on what your router can do. Bluntly, if you use the ISP-provided router, you’re probably SOL; if not, you have a chance. Higher-end (and/or enterprise-class) routers and firewalls generally have sufficient features, with a few exceptions. The feature you need is called hairpin NAT, though it will pretty much never be called that in your NAT settings, so you’ll need to Google your router plus the term “hairpin NAT” to figure out whether it can be done and how to do it. To describe what it is, let’s start with basic port forwarding and adapt from there. I think most people know how port forwarding works: a connection to the external (WAN) interface on a port is forwarded to an internal IP and port. Hairpin NAT is the same, but from inside (the LAN side): if a connection from the LAN is destined for the WAN interface IP address, it gets forwarded to an internal (LAN) IP and port. This works alongside regular port forwarding, not instead of it.
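On a Linux-based router, the underlying rules are roughly this shape. Just a sketch to show the idea; 203.0.113.1, 192.168.1.10, and Plex’s port 32400 are placeholder values, and your router’s UI will dress this up differently:

```sh
# DNAT: anything hitting the WAN IP on 32400 goes to the internal server,
# whether it arrived from the internet or from the LAN
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 32400 \
  -j DNAT --to-destination 192.168.1.10:32400

# The hairpin part: masquerade LAN-to-LAN traffic that got DNATed above,
# so the server's replies route back through the router instead of directly
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 \
  -p tcp --dport 32400 -j MASQUERADE
```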
If your router/firewall doesn’t support hairpin NAT, you’re going to be limited to plan B: DNS.
With bifurcated DNS, you’re going to have some frustration whenever anything changes, so as with all of your port forwards, you’ll want to lock down the IP of your target system. Port forwards are bothersome to update, but not unreasonably so. DNS is really not fun: on top of updating port forwards for external connections, you now need to update DNS too. Not great.
So how do you do this? It’s actually not super hard. As far as I know, you can use pihole (which does not require a Raspberry Pi, by the way) or any other DNS server that tickles your fancy. I use bind, but the actual DNS software isn’t super important; it just needs to support forwarders and custom entries in the config, which I believe both do. Pihole and similar options can also do DNS-based ad blocking. I’m not a fan of that, but do what you want.
So the next step is to set up DNS internally. Get your DNS software of choice, and either buy a Raspberry Pi to run it (bind runs on the Pi too), or run virtual machines, or stand up an old PC for it. Install whatever OS you feel comfortable running the software on; I always use Linux, but as long as your chosen software runs on the OS, it doesn’t matter much. Give the system a static IP and install everything.
Once it’s set up, if you own a domain, you can create an A record for your service (in your case Jellyfin), say “media.domain.com”, pointing at that service’s server internally. Then update your public DNS to point media.domain.com at your WAN IP.
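In bind, the internal side is just a normal authoritative zone. A minimal sketch (every name and address here is a placeholder for your own):

```
// named.conf: declare the internal zone
zone "domain.com" {
    type master;
    file "/etc/bind/db.domain.com";
};
```

```
; /etc/bind/db.domain.com
$TTL 3600
@     IN SOA ns1.domain.com. admin.domain.com. (
          2024010101 ; serial
          3600       ; refresh
          900        ; retry
          604800     ; expire
          3600 )     ; negative TTL
      IN NS  ns1.domain.com.
ns1   IN A   192.168.1.53
media IN A   192.168.1.10  ; the Jellyfin box, so internal clients skip the WAN
```

One caveat: once your internal bind is authoritative for domain.com, it answers for the whole zone, so any public records you still care about need to be duplicated in here too.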
For me, I use bind on a Raspberry Pi. To make management easier, I also installed Webmin, which lets you manage the bind configuration from a web interface.
For bonus points, do it all over again and build a second one.
And don’t forget to set up forwarders on your internal DNS servers so they can resolve internet addresses. Pro tip: use the DNS Benchmark tool from GRC.com to find the fastest DNS servers for you.
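In bind that’s a couple of lines in the options block (the addresses are whatever the benchmark tells you; these are just examples):

```
// named.conf options: anything we aren't authoritative for goes upstream
options {
    forwarders {
        1.1.1.1;
        9.9.9.9;
    };
};
```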
If you want to go crazy, like me, build a third DNS server for all your internal lab stuff on a different domain, like “homelab.local” (it can be anything), and create a stub zone for it on your primary DNS that points to the lab DNS. That way, any “homelab.local” names, like media.homelab.local or something, can be set up once on your dedicated homelab DNS server, and the other two will simply point to it via the stub zone.
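The stub zone on the primary is a short stanza in bind; a sketch with placeholder addresses:

```
// named.conf on the primary: delegate homelab.local to the lab DNS box
zone "homelab.local" {
    type stub;
    masters { 192.168.1.55; };  // the dedicated homelab DNS server
    file "stub.homelab.local";
};
```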
I always recommend finding fast DNS servers to use internally, and I always recommend that if you’re using internal DNS, you have at least two of them.
Last, but not least, after all of that effort, confirm that your fancy new DNS works (good luck with any troubleshooting you might need to do), and update DHCP to point clients at the internal systems for DNS resolution.
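On a consumer router that’s usually just two boxes on the LAN/DHCP page; if your DHCP happens to be ISC dhcpd, it’s one line (placeholder addresses again):

```
# dhcpd.conf: hand both internal DNS servers to every client
option domain-name-servers 192.168.1.53, 192.168.1.54;
```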
Easy, simple, barely an inconvenience, right?
hairpin NAT
I use a Mikrotik router, so it probably does. I’ll have to check it out. I assume it can do SNI-based routing just like haproxy, but if not, I’ll have to move haproxy to my LAN and just do a TCP tunnel from my VPS.
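For reference, my haproxy SNI passthrough is roughly this shape (names and addresses are placeholders; 8920 is Jellyfin’s default HTTPS port):

```
frontend tls_in
    mode tcp
    bind :443
    # wait for the TLS ClientHello so we can read the SNI
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend jellyfin_tls if { req_ssl_sni -i media.mydomain.com }

backend jellyfin_tls
    mode tcp
    server jf 192.168.1.10:8920  # placeholder internal address
```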
But yeah, doing this and internal DNS should make for a more robust system, thanks for the breakdown.
I work with this stuff professionally. I personally enjoy mikrotik. Not sure how to hairpin NAT on it off the top of my head, though I’m sure it can be done.
I usually use a business firewall as my gateway. Nothing wrong with mikrotik at all, it’s all personal preference. I think this is the first time I’ve heard of someone using a tik in the wild who isn’t running an ISP.
Yup, I was looking for an inexpensive, enterprise grade router, and 5 port Mikrotik was just the right size and price. I like playing with networking stuff.
The next project is getting a WiFi network with a VPN configured at the router level, as well as a WiFi network with no access to the rest of the network. I use a Ubiquiti AP, so it should be feasible.
I used to manage the network at my last job, a startup, but I’m not in IT, I’m a software engineer who gets into a lot of adjacent stuff.
My Plex server is also a literal pile of garbage, but I only host on the LAN so I don’t even have to worry about DNS fuckery.
I kinda feel like old server hardware is key here. I have pretty much my whole lab running on an old R730 I put a bunch of ECC RAM, disks, and a transcode GPU into, and it’s been essentially flawless for like 2 years. Plus it has IPMI, which I don’t think I could live without now. It replaced a setup that would always give me issues, consisting of a bunch of OptiPlexes and white boxes. I still hack on Pis cuz it’s fun, but all the core stuff is surplus enterprise.
I recently upgraded my lab, it used to be an R710, and a pair of nodes from a c6100. Because that stuff was so old, I managed to cram all the VMs I was running onto a single FC630 node on a shiny, new (to me) Dell FX2s.
I really want to get a transcoding GPU, but passing one through to a VM has historically been infeasible, and even now it’s complicated at the very least… at least for Nvidia GPUs. I’ve been looking at Intel’s discrete GPU lines for the task recently. I’d sure like to grab a Flex 140, but looking at the prices right now, ha, that’s not happening anytime soon. With the FX2s I can only install single-slot, half-height cards, so options are limited. Front runners right now are the Nvidia P4 and T4, and the Intel Arc A380 with a modded cooler so it’s single slot. My only other option is to find some way to use the existing PCIe interfaces to attach an external GPU, but eGPU enclosures are pretty expensive too, and most don’t even come with a GPU.
I’m trying to stay away from Thunderbolt, so if I go external, I’ll probably look at OCuLink or something similar. TB is just way too expensive IMO. I looked into it, and the whole setup (a TB PCIe card, a TB eGPU enclosure, and a GPU) is something like 40-50% more expensive than other solutions. I’d prefer everything just fit in the server chassis, but then I’m banging my head against Nvidia or modding Intel Arc cards. None of these options are very appealing.
So CPU transcoding for now. I store all my media in 720p AVC/AAC using MP4 as a container, so most streams are direct, and I did that very much on purpose.
Nice! That seems like a sweet little server. Direct play is for sure ideal, plus if 720p is good enough quality for you I’m sure it saves a bunch on disk space.
My setup is an A380 passed through to an Ubuntu 24.04 VM on a TrueNAS CORE host. It was really simple to set up PCIe passthrough; TrueNAS lets you do everything you need through the web GUI, and H.264 and HEVC transcoding worked right out of the box in Jellyfin with the Jellyfin-flavored ffmpeg, if I recall. It also supports AV1 encoding, but I haven’t tried that out. It handles like a dozen 4K transcodes at once; they’re capable little cards. I think ASRock makes a slot-powered, low-profile, single-slot version.
I’m familiar with the Sparkle Arc cards, not so much ASRock. I’ll check it out.
My main motivation for 720p is a combination of not caring about 1080p/4K, space, and bandwidth. I only really get 10 Mbps of upload where I am. It’s basically impossible to get anything faster, so if one person tries to stream 4K, not only are they going to have a bad time, but nobody else is going to be watching anything either.
If I had 4K/1080p content, the server would need to transcode it anyway for most people most of the time, which I’d have to pay for via my electricity bill, and I’d be footing the bill for more disk storage to keep it around. On top of that, live transcoding is generally not as good as a two-pass VBR encode through HandBrake or something.
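For the curious, a 720p two-pass encode along those lines looks something like this with HandBrakeCLI (the bitrates are just illustrative, and flag names can shift between HandBrake versions):

```sh
# two-pass VBR to 720p AVC/AAC in an MP4 container
HandBrakeCLI -i input.mkv -o output.mp4 \
  --format av_mp4 \
  --encoder x264 --vb 1800 --two-pass --turbo \
  --width 1280 --height 720 \
  --aencoder av_aac --ab 128
```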
There’s obviously more to it overall, but I’ll leave it at that.
Plex has support for hardware transcoding, but the CPU in the server where my VM lives doesn’t have a built-in GPU, so I have to add one. It’s part of the reason I moved from the c6100 to an FX2s. The “s” variant of the FX2 has PCIe card slots in the back that connect to the hosts. The c6100 had space for a PCIe card, but only one, and given the onboard 2x 1GbE, I’d sooner use that slot for additional networking. The FX2s has 2x 10GbE, so it’s less of a concern to use the PCIe slots for graphics… Also, there are two slots per half-width blade, which is what I have, so I could add two GPUs per host.
I also want to experiment with 3D-accelerated VDI, and cluster-hosted gaming (similar to Stadia), in house… For that I need a decent graphics card. The only ones with a good amount of RAM are the Intel Flex 140 and the Nvidia T4. The Arc A380 is decent, but 6G of memory is limiting. The Flex 140 has 12G IIRC, and the T4 has 16G. It seems like a lot until you split the GPU among a couple of VMs… On the T4 you get either 2x 8G VRAM systems or 3x 5.33G VRAM systems… I’d rather have 6G per VM as a minimum standard. This means that to have two GPU-enabled systems with the A380, you’d basically need one card per VM. Even though they’re pretty cheap cards, having 3 hosts (as is the plan) gets expensive pretty fast.
My WAF with Radarr+Sonarr+Kodi is sky high. Plus Home Assistant with smart switches and outlets in every room.
I bet your wife is really cool. You know, by the standards of some nerd on the Internet, but I’m guessing I’d think she was cool.
She’s the coolest. She is also lazy, like me, so home automation is right up her alley.
Women are temporary. Enshittification is eternal. Sail the high seas, matey. Arrrrr
If you do the whole home server self-host thing, you could probably fool most people by changing the skin to a red theme though. I use a custom-made PHP piece of shit for mine, but there’s this better one everybody uses, I just can’t remember what it’s called.
As Captain Jack Sparrow put it: “I’m deeply flattered, but my first and only love is the sea.”
F
Hahahahah this
You’re probably using containers
Plex is running as the native Synology app. Sonarr, Radarr, etc. are in containers. The Synology NAS intermittently stops being accessible, and I haven’t been able to figure out the problem. I find it impossible to troubleshoot network problems. I think it’s my router: restarting the router seems to fix it, but factory resetting the router didn’t solve the problem.
In summary: my wife is fed up and wants Netflix back.
If this ends up being the place someone is able to offer support, I’ll add some details:
Equipment:
Virgin Media Hub 3 (set to modem-only mode) -> TP-Link AX73 | AX5400 -> LAN connection to Synology with a static IP set in the Synology settings.
Synology has the Plex app (native Synology version from the Plex website). Alongside that I’m running the following Docker containers:
Gluetun project with Surfshark VPN (sketch below). This runs qBittorrent, Prowlarr, and FlareSolverr (I used [this guide](https://drfrankenstein.co.uk/qbittorrent-with-gluetun-vpn-in-container-manager-on-a-synology-nas/) and [this guide](https://drfrankenstein.co.uk/prowlarr-and-flaresolverr-via-gluetun-in-container-manager-on-a-synology-nas/)).
Media fetch project containing Sonarr, Radarr, and Bazarr.
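The Gluetun project is basically the guides’ stock layout; trimmed down, it looks like this (credentials swapped for placeholders):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=surfshark
      - OPENVPN_USER=changeme       # placeholder
      - OPENVPN_PASSWORD=changeme   # placeholder
    ports:
      - 8080:8080   # qBittorrent web UI, published via the VPN container
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:gluetun   # all torrent traffic goes through the VPN
```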
This all seemed to work fine before I added the Arrs. Even after I added them it worked fine initially, but for the past few weeks it has been causing constant problems that are only solved by restarting the router (once or twice a day).
Getting Plex and the *arrs off the NAS and onto a NUC really helped speed things up for me. That, and moving over to UniFi for my networking hardware.
Synology was terrible for me. Unraid is where it’s at. I don’t really ever mess with it anymore. Also Plex was horrendous for uptime too. I use jellyfin.
Which router is it?
TP-Link AX73 | AX5400
I’ve added details to the comment above in case anyone is able to make sense of my problem.
Yeah, this is not a U-shaped curve. As you learn more and start to implement concepts like fail-safes and redundancy, the chances of everything in your house being broken go way back down again.
The main thing you gotta learn though is stop fucking with it.
Or get a second homelab airgapped away from the first one.
Wife Acceptance Factor
You learn something new every day.
My NAS is currently sitting in pieces while I turn my wife’s old PC into our new media/game/whatever server. It’s been 3 weeks of different random shit not working or being forgotten (whoops, I tossed all my old SATA cables! Oops, forgot that the PSU is shit and needs replacement! Oops, the dog PISSED ON IT AND RUINED THE MOTHERBOARD)
Wife is clearly annoyed that the automatic piracy machine isn’t working and has threatened to resubscribe to streaming services if I don’t fix it soon lol
(Just gonna upgrade my gaming PC and use MY old parts to cover the busted mobo I guess)
Ok, clearly this one is on you. And I don’t blame your wife.
- You tossed out perfectly good cables. I’ve made this mistake too, so I feel your pain.
- You need to have at least two piles: one for working parts, and one for non-working parts. Any organization beyond that is icing on the cake.
- The cake is a lie.
- I have no words for how your dog was able to piss on your computer. I would suggest looking up clicker-based training and teach your dog to piss on the carpet and not the hardware.
I mean, c’mon. Those are all rookie mistakes!
You tossed out perfectly good cables. I’ve made this mistake too, so I feel your pain.
Ha ha, I haven’t!
<is consumed by giant pile of IDE, parallel, serial, VGA, telephone, USB A-B, RCA, and other assorted very obsolete cables>
And there will come a day where you will be asked for a cable, and eventually you’ll find it in the tangled mess of cables that you’ve put somewhere in your domicile, you’re sure of it, just give yourself a minute to check; hold on, you swear you have it, just give yourself a minute to find it…. No not that one, almost but not quite…. Ah ha! Found it… no, you’ll keep looking.
It’s definitely not the dog’s fault. This is their case:
What, you took the old one offline before the new one was ready? What the hell, man
I am not made of money and cannot afford new drives, so once the backup was done I pulled em out thinking it was going to be a quick weekend job
Then… Life happened.
The REAL issue is the dog pissing on the mobo one night when I left the parts on the ground. He doesn’t usually do that, so my guess is the spray paint (I also painted the case) was causing doggy nose problems and he doused the smell or something. Took 2 days (I only have so much patience after work) of troubleshooting to figure out which fucking parts were still functional after that.
This is not my first rodeo. No, I will never learn. If I still had my ADHD medication this would have been done in 1 weekend without issue but hey, I have broken brain!
Don’t work on electronics on the floor. Spread them across the dining room table so that it can’t be used for meals and your wife can complain about that, too!
When I had a dining room table I did that one time and earned anger.
Nowadays the only table big enough currently has a different project sprawled all over it because a part is on back-order, so I make do
Buy a table saw, hammer, and screwdriver and build yourself a new table. What could go wrong? 😉
Jeez. I’m tempted to send you my old Dell R710. It’ll at least work. The system is pretty bulletproof.
You can generally get something newer with lower power requirements for cheap… So I won’t, but still.
Yeah sure but for some of us it’s not because we have over-complicated our homes.
It’s because we do “fix the damn tech” at work all day and are too damn old to do it at home as well!
You can always tell who is the car mechanic on the block. He drives the shittiest barely functional car.
This!
And/or just cheap. So I’ve ended up replacing various parts in my laptop over the years, and soldering a JR connector onto the charging connector rather than just buying a new port
Oh yeah, I think a big part of it is not just what you can accomplish, but how efficiently/cheaply. Same with fixes, once they’re annoying enough to actually spend time on, lol.
Get yourself a partner(s) who know a thing or two about tech and can at least perform basic troubleshooting and report to you.
Huge, thick cock but tiny brain and reeeeeeeeee? Pass. Small cock but can tell me when my homelab goes down, what services are actually affected, and suggest a solution that is plausible and is for up-to-date versions of X? Call in pizza and ice cream and clear your schedule, it’s sexy time. And they knew a temporary solution for the outage so they aren’t impacted while I was busy/away? Marry me.
There are a lot of other factors, but that defo plays a part. Learn tech, get blowjobs. It’s that simple.
I fucking wish! Despite my profession and hobbies all being very technical I have never had a partner that knew anything beyond turning it off and on again 😭. I’d be eating them out like a bulldog with a jar of mayonnaise every night if they did! Though I guess I would do that if they didn’t too…🤔
I may need to rethink my approach…
I can’t even do that for my own homelab. If restarting everything in order from most to least likely culprit doesn’t make it work again I’m usually fucked and looking forward to a couple hours of work.
Example: My “Smart” TV must have something like this in its code:
```c
int main(void) {
    if (hasLocalIP && !hasInternetAccess) {
        randomlyQuitJellyfinEvery20MinutesOrSo = true;
    }
    startTV();
}
```
This took 2 weeks of restarting, app reinstalling, factory resetting, OS updating, OS downgrading, OS updating via a different method, etc. to figure out. Now I literally just unplug its Ethernet cable before starting it, it’s that simple. I’ve never allowed it to connect to the internet though - no ad revenue for you, Google!
Dang, I went small cock and reeeee. :)
But seriously, my partner is pretty nerdy, and while they don’t know exactly how everything is set up, they’re reasonably good at troubleshooting. I have a VPN set up, so if everything gets borked, I can probably fix it on my lunch break or something (or they can just turn it off and on again).
First of all, my parents have a Raspberry Pi V1.0 (the still holeless one) that has been piholing since day one. That’s like a decade.
I kept it there, caseless and dangling from the LAN cable, for sentimental reasons; I’ve grown fond of it. Second of all, there is a secondary DNS on Proxmox should the Pi need a rest.
Edit:
Forgot the third of all: that Raspberry doesn’t even have a heatsink, much less a fan.

I’m running the same setup, down to the dangling LAN cable. How do you deal with SD card deaths? Just a fact of life?
Log2RAM seems to help
Yes, what the other two said … however I have never ever (in any device) had a memory card go deaded.
Idk. I do keep in mind how fragile they are (the internet people have scared me enough) when setting them up, but nothing ultra special.

Or make it read-only
Does this thing still get updates??
Why would it need updates?
Updates are good, they automatically install you extra RAM, extra AI assistant features, promotional targeted ads, extra bloatware, more bugs … no, wait, that’s Windows, nvm.
Updates?
It’s running Pi Hole… the lists get updated. As for the base OS, I don’t even remember what I installed (I think I switched from regular Debian to DietPi at one point; I think a Debian upgrade borked something & I changed it up).

Thx, I should check it.
So… you have a not updated device, running some form of Debian, connected to your network, and all your devices rely on that for DNS???
It should be getting updates, and it’s LAN-only; the exploits would have to be fairly specific.
But I have indeed not checked on it for years. Proxmox has made me lazy af.
I could VPN there now via phone & check it (if I even enabled SSH), or just shut it down (they have AdGuard as a backup sinkhole), but that sounds like work.

Where does it get the DNS lists if it is only on the LAN?
No, no, not isolated
The kernel should still support the Pi 1. Might even install linux-next.
I feel attacked right now
It’s called a secondary DNS server. Like, that’s literally the reason it exists. I guess it still falls under knowing what TF you’re doing. Every DHCP server offers at least two DNS server options.
Came here to make a DHCP config backup DNS joke, but it turns out I’m on Lemmy and 5 other people got it covered
The trick is only having Lemmy, no other social media. Now I only get lost with which instance I’m using!
Setting a second DNS in your router will bypass the pihole though, making it useless, unless both entries point at pihole servers.
Kubernetes or Swarm, or a dozen other solutions to this.
Care to elaborate? Last time I tried to set the secondary DNS as a backup while keeping the pihole filtering, there was no real way to do it without having two piholes. Even the pihole developers said as much.
A secondary DNS server set in your DHCP options will do no such thing.
The secondary DNS server is only used if lookups to the primary fail, say like when your pihole crashes or something.
The only way it will work the way you think it’s going to, is if you set your DNS resolver to use round robin on a list of DNS servers.
It’s literally just a backup DNS server address, and it’s only used should the primary fail, and returning an NXDOMAIN is not a failure.
Please note, I use secondary to refer to the 2nd IP in your DHCP/DNS options, not to a secondary DNS server, which is something else.
Right, we are on the same page, but from experience, setting DNS 1 to the pihole and DNS 2 to, for example, 8.8.8.8 in your router’s DHCP settings will make the pihole useless. There are dozens of similar threads across different forums saying the same thing, but if you know how to do it, please let me know… Example: https://www.reddit.com/r/pihole/comments/864oli/secondary_dns_setting/
Well, after working as some form of systems engineer for the last 17 years, including quite a few where one of my primary responsibilities was looking after DNS servers, this is literally the first time I have come across this.
Also not quite sure what they’re doing because my Debian, 2x Windows, 3x Android and occasionally Apple clients never bypass my primary DNS setting. Neither do the server farms I run at work. So who knows.
Yeah, all I could find was vague statements about the DNS server lookup order being “OS specific” and Windows specifically being known for “not respecting DNS order”, see https://www.reddit.com/r/networking/comments/kb8hvt/dhcp_dns_server_order/
This is also supported by statements from pihole devs: https://www.reddit.com/r/pihole/comments/x2248t/is_it_worth_creating_a_second_pihole_dns_server/ And: https://discourse.pi-hole.net/t/primary-vs-secondary-dns/1536/4
And then there are the hundreds of similar questions which one could take as “evidence by quantity” or whatever.
In that case I’ll be very thankful that mine works as I expect and will try not to change anything.
Fun fact, there’s a Linux Pihole.
I kept having issues with my Raspberry Pi, so I put a Linux VM on my TrueNAS server, then Pihole on it.
PiHole doesn’t run on Linux?
What the hell do you think Raspberry Pi’s run?! Linux!
But if you’re referring to PiHole running on x86_64 Linux… it does that too! :D
Yeah, that was my question. Could have used NetBSD or something.
It does, I’m doing it right now
Try AdGuard Home, it’s very simple to install and works like a charm
Except when you also use Portmaster on your computer and it constantly nags about your DNS server not responding, only to be OK with it minutes later.
I plan on doing exactly this, but I was thinking I should have set up a raspberry pi for redundancy as well. Is that just silly?
This is a mistake you only make once, which is why I now have a dedicated dmz network for work equipment that doesn’t use the pihole for DNS resolution.
I just have my router fail over to 1.1.1.1 if the pihole disappears (I don’t use pi-hole for DHCP). It saves my ass from this.
Interesting idea; may I ask what router you’re using that supports this feature? One of the things I do like about having work-related DNS go through the pihole is that I can create custom responses to trick my laptop into thinking it’s on the office network, which disables our VPN requirement. 😁
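If anyone wants to replicate that: on Pi-hole v5 the custom answers live in /etc/pihole/custom.list in hosts-file format (you can also add them in the web UI under Local DNS). The hostname below is a made-up stand-in for whatever your VPN client probes:

```
# /etc/pihole/custom.list: one "IP hostname" pair per line
10.10.10.10  corp-beacon.example.com
```

Then a `pihole restartdns` to pick it up.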
As far as I have seen, they all can do this. Just set your first DNS to your Pi and your second DNS to Cloudflare or whichever.
If your pi is missing, it goes to the second listing
Or have 2 piholes on 2 separate pieces of hardware, giving you the opportunity to fix things should one go down.
The full arr stack makes life much easier. The only time I got that look was when it pulled a .rar that didn’t automatically extract. Wrote a script that Transmission runs on completion, and they extract when finished now.
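Mine is just a few lines of shell, wired up through `script-torrent-done-filename` in Transmission’s settings.json; a sketch (paths and flags may need adjusting for your setup):

```sh
#!/bin/sh
# Transmission exports TR_TORRENT_DIR and TR_TORRENT_NAME to its done-script
cd "${TR_TORRENT_DIR:?}/${TR_TORRENT_NAME:?}" || exit 0  # single-file torrents have no dir
# extract any rars in place, never overwriting files that already exist
find . -name '*.rar' -exec unrar x -o- {} \;
```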
I’ve had a bunch of issues that totally belong on the right side of this graph. Broke the nginx proxy trying to add a rule for a game server and couldn’t access the arrs anymore. Subscribed to a list that had too much crap on it, and it downloaded everything and filled up my drives. Buggered up permissions somehow, so Sonarr can download the files but I have to extract them manually over SSH.
How do you extract them? I have some nzbs that stop halfway through extraction and I have no idea why
It was permissions for me; I think I bodged it with a chmod 777.
transmission
Meh. Try other clients, like rTorrent. As I remember, Transmission is not the healthiest client for the swarm. Or maybe my memory is shit.
Two of my private trackers have client and version requirements and transmission 3.0 was the only one on both if I use a VPN. It’s the Debian of torrent clients.
I think Transmission is a very competent client. However, when I last used it a few years back, it really struggled once you reach 1800+ seeded torrents. I switched to rTorrent but experienced similar issues at around 2500+ torrents. After hours of tinkering (and rTorrent/ruTorrent has a lot to tinker with!) and not getting anywhere, I made the switch to qBittorrent, which is still happily chugging along seeding 4500+ torrents.
I’m in this picture and I don’t like it
Can confirm. Everything is broken. I wish I could say I was typing this on the laptop I built by duct taping a battery, a screen and a pi into a laptop but that doesn’t work either because I have to mod up a laptop keyboard fpga hackfuck first 🤷
I’m not quite at Pi-Hole, I use OpenWRT on my router though.
I sent this to my wife and said “good thing our pi-hole is never down”. Long story short, I think I’m sleeping on the couch tonight
It’s a rookie mistake to implement a highly desirable but low-WAF (wife acceptance factor) solution to some shared resource.
The person in the linked picture should have set up a separate SSID that doesn’t route through the Pihole, so if the Raspberry Pi dies, the wife knows to simply change the SSID she connects to.