I self-host literally everything (email, calendar/contacts, VOIP, XMPP, you name it) from my basement with used 1U servers from eBay and a cable internet connection.
Getting it all set up was probably more hassle than most people would want to bother with, but with everything up and running, there's very little maintenance. I still spend a few hours a month tinkering, just because I enjoy it.
I use a stack of Proxmox VMs, FreeIPA for authn/authz, and Rocky Linux for all servers and workstations. My phone runs GrapheneOS with a Wireguard VPN back to the house. I don't expose anything to the public internet unless absolutely necessary.
I recently anonymized and Ansibilized my entire setup so that others might get some use out of it:
I had fun doing this until I had kids.
I have a rack with 10GbE, a UPS, Kubernetes, a ZFS storage server, multiple VLANs, 4 UniFi APs with a locally hosted controller, and all sorts of self-hosted stuff.
My heart breaks slightly as I watch things slowly degrade and break down due to bit-rot and version creep. I now wish I had a synology, flat network and cloud everything possible.
There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. Sometimes the rack starts beeping while I'm working and I realise the UPS batteries are due for replacement because it's been 4 years. I silence the alarm and get back to the production issue at work, knowing it'll beep at me again in 30 days. I'll still be too busy to fix it. It doesn't help that in Australia the ambient temperature can get to 45 degrees C, pushing disks and CPUs to their limits.
Just sharing a different perspective...
Sounds like a bit of overkill too if you ask me. You can self-host most things that make sense to keep private without going all in on the fun stuff.
As in, k8s is cool to play with and understand and all but why would I bring that complexity to a simple home setup that can run on a single machine in a corner somewhere?
You don't have to go to a synology box and give up everything, but there are simpler options without going "cloud everything". Of course you will be giving up some features as well, the more you strip things down, but that can be beneficial in and of itself if you ask me.
Personally I went from being the "Linux from scratch" guy to running Ubuntu LTS. Natural progression and the kids can watch any of their movies at any time they want. Keep the hard drives rotated, do an LTS to LTS upgrade every few years and that's about it. Heck I've been running the exact same Postfix, fetchmail and IMAP setup for probably 20 years now and I don't even remember what all the options I set do any longer. I also don't need to though. It's just rock solid. All the other fun stuff has passed me by and I don't care. Don't get me wrong, it's still fun to play with stuff and we do use k8s at work and it's great. But it's just complete overkill for home.
> I had fun doing this until I had kids.
As I keep telling people, self-hosting is fun as long as your user count is 1. When it grows beyond that, you suddenly have an SLA.
I self-hosted almost everything (self-hosting e-mail is pointless from a privacy standpoint), and when we had kids I moved to a dual-Synology setup with a single Proxmox server for running services. Fast forward some years and electricity suddenly costs an arm and a leg, so I had to do "something".
I completely stopped self hosting anything "publicly" available. Everything moved to the cloud including most file storage, using Cryptomator for privacy where applicable.
The server got reduced to a small ARM device with the prime task of synchronizing our cloud content locally, and making backups of it, both remote and local. As a side bonus it also runs a Plex server off of a large USB hard drive. All redundancy has been removed, and my 10G network has been switched off, leaving only a single 16 port POE switch for Access Points and cameras.
The Synology boxes now only come online a couple of times a week to take a snapshot of all shares and pull a copy from the ARM device, after which they power down again.
In the process I reduced my network rack's power consumption from just below 300W to 67W, and with electricity prices for the past year averaging around €0.6/kWh, that means I save around 2050 kWh/year, which adds up to €1225/year, or just over €100/month.
Subtract from those savings the €25/month I pay for cloud services and I still come out ahead. On top of that, I literally have zero maintenance now. My home network is only accessible from the outside through a VPN. The only critical part is backups, but I use healthchecks.io to alert me if those fail.
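For anyone checking the arithmetic, the savings above work out like this (a quick sketch using the wattages and the €0.6/kWh average quoted above):

```python
# Sanity check of the savings math: a 300W rack reduced to 67W,
# priced at the quoted average of 0.6 EUR/kWh.
saved_watts = 300 - 67
kwh_per_year = saved_watts * 24 * 365 / 1000   # watt-hours -> kWh
eur_per_year = kwh_per_year * 0.6

print(f"{kwh_per_year:.0f} kWh/year saved")    # ~2041 kWh/year
print(f"EUR {eur_per_year:.0f}/year")          # ~EUR 1225/year
print(f"EUR {eur_per_year / 12:.0f}/month")    # ~EUR 102/month
```

That lands right on the "around 2050 kWh/year, €1225/year, just over €100/month" figures in the comment.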
I still kept the network segregation, so everything "IoT" is on its separate VLAN, as are the kids. The only major change was that the "adults" VLAN is now the management VLAN. I have no wired computers, so maintaining a management VLAN over WiFi was more trouble than I could be bothered with :)
Why are the kids on their own VLAN/WiFi? Because kids want to play games with their friends, something the normal guest network does not support. Kids also brings all sorts of devices with new and exiting exploits/vira, and I didn't feel like doing the maintenance on that. So instead my kids have their very own VLAN with access to just printers, AirPlay devices and the Plex server.
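For flavor, a kids-VLAN policy like that might look roughly like this in nftables. Everything here is illustrative (interface names, addresses, and ports are made up; AirPlay in particular spans several ports, only one is shown):

```
# nftables sketch: kids VLAN may reach the internet plus a few internal services
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iifname "vlan30" oifname "wan" accept                       # internet access
    iifname "vlan30" ip daddr 10.0.10.5 tcp dport 32400 accept  # Plex
    iifname "vlan30" ip daddr 10.0.10.6 tcp dport 631 accept    # IPP printing
    iifname "vlan30" ip daddr 10.0.10.7 tcp dport 7000 accept   # AirPlay (RAOP, one of several ports)
  }
}
```

The default-drop forward policy is what keeps the kids' devices away from the rest of the household; the accept rules are the whitelist the comment describes.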
> As I keep telling people, self-hosting is fun as long as your user count is 1. When it grows beyond that, you suddenly have an SLA.
This is the principle I.T. departments fail to grasp.
> Kids also brings all sorts of devices with new and exiting exploits/vira...
Curiosity: while vira is arguably less wrong, hackers of a certain age would have expected viri or virii, which are more wrong:
https://en.wikipedia.org/wiki/Plural_form_of_words_ending_in...
From Tom Christiansen, of Perl fame:
http://www.ofb.net/~jlm/virus.html
// Meanwhile, in "Kids also brings" – I fully support what you did there!
> Meanwhile, in "Kids also brings" – I fully support what you did there!
It of course also helps that in 2023, literally all school work, for better or for worse, is done through the cloud. I wrote printers above, and yes, they do have access to the printers, but apart from our 3D Printers, the laser/inkjet printers have seen very little use.
Here the schools use Microsoft, which means assignments are done in Word/Excel, and handed in online either through a school portal, or shared from OneDrive.
I won't get into the privacy details, but we do have some fairly strict laws concerning kids and identity protection (a thing that recently got Google kicked out from the educational sector), so while not ideal it is probably not as bad as it sounds.
Apart from school work, their needs are mostly only local peer-to-peer networking for games and/or internet access, and all of that can be accomplished by simply sticking them on a "less restricted" guest network, while at the same time making reasonably sure they're not wiping out the rest of the household's computers :)
The firewall also runs a very small subset of IDS/IPS rules, mostly concerning malware/bot rules, and we use a NextDNS profile per subnet to filter out the worst.
> Curiosity: while vira is arguably less wrong, hackers of a certain age would have expected viri or virii
My bad, I used the Latin plural form of virus, which is vira. In any case, my network setup should keep most vira, viruses or virii out :)
I hosted email until my email to a college student was rejected with no way of contacting either him or the admins of his school. That was the straw that broke the camel's back.
I still self host apps today but my hardware is old enough that it costs more in power and cooling than what I get out of it, and the roi on new hardware doesn’t justify the means
> and the roi on new hardware doesn’t justify the means
That was my takeaway as well, considering that a 4-bay Synology uses more in electricity than purchasing the same storage in the cloud costs (up to a certain point; datahoarders need not apply).
On top of that I then need to purchase new hardware every 3-6 years if I want reasonable assurance that my data is still there, and doing the math on a 5-year TCO, I would end up paying around double what I pay now, and still have worse data integrity.
I haven't done the math on where the break-even point is, but I have around 10TB of cloud storage (including backups), as well as DNS services, static web hosting, mail, and a few other curiosities, and I average €25/month on cloud services.
Comparing that to a 4 bay synology with 4x6TB WD Red drives, you end up with €1276 in hardware costs (current prices here). Over a 5 year period that's €21.2/month for the hardware alone. Assuming the Synology draws 10W, and each WD Red draws an average of 5W, that's 30W of power, totalling around 22 kWh/month, which at €0.6/kWh adds up to an additional €13/month.
So in total around €35/month to self-host what I can host in the cloud (including backups!) for €25/month.
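Spelled out, the 5-year TCO comparison above goes like this (all figures are the commenter's: local hardware price, wattages, and electricity price, not universal constants):

```python
# 5-year TCO sketch: self-hosted 4-bay NAS vs EUR 25/month of cloud services.
hardware_eur = 1276                      # 4-bay Synology + 4x 6TB WD Red
hw_per_month = hardware_eur / (5 * 12)   # amortized over 5 years

watts = 10 + 4 * 5                       # NAS chassis (10W) + four drives (5W each)
kwh_per_month = watts * 24 * 30 / 1000
power_per_month = kwh_per_month * 0.6    # at EUR 0.6/kWh

self_host = hw_per_month + power_per_month
print(f"hardware: EUR {hw_per_month:.2f}/month")     # ~21.27
print(f"power:    EUR {power_per_month:.2f}/month")  # ~12.96
print(f"total:    EUR {self_host:.2f}/month vs EUR 25/month cloud")
```

At a lower electricity price the comparison flips quickly, which is why the break-even point is so context-dependent.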
This is of course very context-dependent and no critique whatsoever. I also have kids, and my self-hosting has become ever more important since. No YouTube commercials or auto-continuation for kid videos thanks to Invidious; reduced costs due to a lot of cancelled software plans (because everything runs on my own hardware); I can care better for my parents, e.g. helping with technology, monitoring, burglars (my homelab sits at my parents' house, remotely connected via IPsec); data backup is solid and under my control (ZFS RAID-Z2, plus offsite backup with borgmatic & rsync); and most important, I have reduced my life's dependencies and lock-in to worldwide companies.
Maintenance is 1-2 hours a month: Proxmox, various Docker containers nested in unprivileged LXCs, everything automated (cronjobs, Watchtower, backups etc.). I also built a pretty big PV plant to save on energy costs (30 kWp). My main strategy was a "minimal" approach: going slowly, thinking carefully about _what I really need_, and preferring robustness over new features or software. I usually take 1-2 months of review before deciding to install any new software, most often longer. I am against the "all-in-one" mentality (e.g. I prefer custom bash scripts over third-party automation, or selectively install needed parts instead of the all-in-one alternatives, e.g. Nextcloud All-in-One).
Your perspective resonates with me! I have 3 kids under 6 years old, and I can definitely see this easily creeping up in my future.
My family situation is partly why I just went with plain old VMs and a Linux distro with a 10-year support cycle. It's easy to keep all the moving parts in my head, and I figure I can mostly coast for 10 years and then reevaluate.
Thanks for reminding me, I also need to replace my UPS battery...
To work around procrastination you have to set yourself up for success.
For instance, what if the alarm sent you the product page for the model of battery you need? You order them, silence the alarm, and when they show up you’re reminded you need to change them. Or if that’s a bad time, when the alarm goes off again.
I think we’ve only begun to work out how alarms are the wrong solution to the problem and what we need are prompts.
Do you have kids? It doesn't work that way. It will never feel urgent enough to spend time even setting up the alarm or prompt. People vastly overestimate the free time you have when you have kids. They somehow manage to eat up every single minute.
As a dad with more kids than the average around here I feel you.
For me it has improved slightly lately:
I have recently started giving my kids bonus allowance if they let me work the hours I need.
And lately I have also played more card games and board games with them in the evenings.
That said, I am up at around 0400 to start the day and I have already spent 15 minutes on HN so I need to leave now :-)
Follow up: it helps that they all sleep through the night now and that the pandemic is over so they are at school or kindergarten during core hours at work.
A couple of them. They're 5 and 8.
When they were younger, they slept (sometimes) and I didn't. I've never slept much, so I didn't feel like I was missing out on too much.
Last spring I noticed I could finally do things in the daytime again, too. Which is great, I really missed guitar. Suddenly they're interested in what I'm doing too.
Haven't talked either into updating my VM fleet for me, but maybe some day.
I treat my "alerts" as more of a suggested to-do list. The things I'm self-hosting are important to us (we all use them), but not critical. Life will go on until I get to it.
I've also learned that "boring tech" is the way to go.
Kids and a partner with health issues. My days are all chopped to hell. If there's a 5 hour window, everyone wants to put an event smack in the middle of it so I have an hour here and an hour there and any time I have 3 hours it's probably going to yard work. If it weren't for reminders or having tasks queued up things would be much, much worse.
It does get better in high school, sometimes middle school. Once the idea of autonomy occurs to them, they don't need or want you every fifteen minutes. Plus, as another responder said, sometimes they want to see you doing things, and once in a while they want to help. Though it's cool when they do and then sad when they change their minds. There was a two-week period where mixing compost was the most fun in the world, and then they were no longer interested.
Also in Aus - I've got a not-quite-as-complex setup, but I do have it all in a purpose-built room in the shed which is fitted out with an old box air-conditioner[0] with a thermostat power controller to keep the room below a certain temperature, which should help to extend the working life of "all the shit in there". Damn it's nice visiting the "cool room" in summer, there isn't enough floor space to sleep in there though.
Also have kids, and they can be demanding when stuff ain't working.
Also second guess my life choices, but then again I also still love playing around with this stuff, knowing that I can maintain the full stack.
[0]: Replacing that old air-con with a (far) more modern small split system could possibly have paid for itself by now in power savings. I think I should look into that.
> I watch things slowly degrade and break down due to bit-rot and version creep [..]
> There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. [..]
> I now wish I had a synology, flat network and cloud everything possible
No snark intended, but this sounds as though you chose to include a lot of unnecessary complexity into your self-hosting, then discovered that there's almost always a cost to unnecessary complexity(?)
You're not alone :) The only thing I have left at this point is a rather complex network, mostly because it's a pain to undo at this point. Plex went away last year and I just "license" all the kids stuff through Google play now...
Incredible. The usual response to "should I host my own email" is "don't do it; you'll get hacked."
Three questions:
1. Have you heard of this complaint?
2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.
3. You say "I don't expose anything to the public internet unless absolutely necessary." Is your ip address via your domain name one of those "necessary" items?
1. Yes, most people will tell you not to host your own email, because it's too complicated/difficult to get your mail delivered reliably.
A lot of this is FUD. Yes, email is a bit more difficult to get right than say, hosting a web app behind Nginx. It's an old protocol, with many "features" bolted on years later to combat spam.
I'm not sure how email is easier to "hack," unless there is a zero day in Postfix or something. Back in the day, lots of script kiddies would find poorly configured mail servers that were happy to act as an open relay...maybe the stigma persists?
To deliver mail reliably, you need 4 things (in my experience):
- A static, public IP address with a good reputation (ie, not on any spam blacklists)
- A reverse DNS record that resolves back to your mail server's IP
- A domain SPF record that says that your mail server is allowed to deliver mail
- DKIM records and proper signing of outgoing messages (DMARC records help too)
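Put concretely, those four items map onto DNS records along these lines. The domain, IP address, and DKIM selector are placeholders, and the PTR record is usually set through your ISP or hosting provider rather than your own zone:

```
; illustrative records for a mail server at 203.0.113.10
mail.example.com.            IN A     203.0.113.10
10.113.0.203.in-addr.arpa.   IN PTR   mail.example.com.   ; reverse DNS via your ISP
example.com.                 IN MX 10 mail.example.com.
example.com.                 IN TXT   "v=spf1 mx -all"
sel1._domainkey.example.com. IN TXT   "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.          IN TXT   "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The A/PTR pair must match in both directions (forward-confirmed reverse DNS), since many receivers check that before anything else.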
2. I have a residential cable internet connection, but pay extra for static IPs. You can probably get by with a dynamic IP and some kind of dynamic DNS service, as long as you don't want to send email. You could still receive email locally if your MX record pointed to some kind of dynamic DNS record.
Note that some ISPs explicitly block outbound traffic on port 25 due to spammers. You might need to check with yours.
3. The only things I expose to the internet are Postfix (to send/receive emails), XMPP (to chat with others), and my web server. Everything else (calendar/contacts, IMAP, Syncthing, etc) stays behind my firewall, accessible only to internal hosts. I use wireguard on my Android phone to access these services seamlessly when I leave the house.
I've never bothered to conceal my IP address. For a while, I experimented with using Mullvad VPN for all my egress traffic. Unfortunately I spent all day solving CAPTCHAs...wasn't worth it (for me, anyway).
EDIT: I should add that I also have a "normie" email address at one of the usual providers that I use for really important things like bank accounts / utility providers. If I get hit by a bus, I don't want my (very nontechnical) wife to deal with sysadminning on top of my early death.
For all our personal communications though, we use my selfhosted email domain.
> A static, public IP address with a good reputation (ie, not on any spam blacklists)
Piece of cake /s
It's not that hard to do. Harder for residential address blocks for sure. But if you do all the other things previously mentioned like SPF/DKIM etc then cleaning up an IP address isn't that hard.
The only service we've ever had issues with is Outlook, as they'll ban a whole block for opaque reasons, and we just escalate it to the provider and they sort it. We just moved two self-hosted mail servers to new IP addresses and there were only 2 lists to clear them from, which was a fill-in-a-form style automated process to resolve.
There's always SES (or other service of choice) as a backup for sending anyway if you notice something getting blocked. It's easy to switch to that for a day or two whilst you resolve an issue - though I must admit I think we only had to do that once in the last 12 months.
Maybe I'm breaking some kind of sysadmin code here and I don't realise it's a secret that self-hosting email isn't that hard? Am I supposed to keep up the myth that it is? :-) Any greybeards here please let me know!
I played around a bit with sending via SES and Sendgrid. I generally found that deliverability on either of those was actually worse than even one of my slightly dirty IPs.
Maybe try with smtp2go?
Previously, I was using Sendgrid as well. But they seemed to start doing the "growth at any costs" bullshit, which for an email-sending company means accepting and delivering spam. (Regardless of the PR/weasel words these places use to deny it, that's what it comes down to.) Thus lots of places now just drop all mail that comes from Sendgrid, no workaround.
When that happened, a friend pointed me to smtp2go, which I've used since personally and we now use at work. We haven't (yet) had anything blocked as spam (less than 10k emails sent a month though), so it seems like they've not done the "growth at any costs" bullshit like Sendgrid.
You're not the first person I've heard say that. It's interesting that we haven't faced that issue. I wonder if we'll get a nasty surprise the next time we try as it has been a while since the last time we did it.
There are entire datacenters blocked by some blocklist providers. Like, AFAIK, the OVH ones.
Also note that it's super easy to configure postfix (and likely others) to send all outbound email via a third party service.
I personally use smtp2go.com, and was on their free tier for ages (now upgraded via work). Can recommend, as it "just works" and avoids all the mucking around with SPF/DKIM/etc.
Oh, on a similar note, definitely avoid Sendgrid if you want to send email via a third party. They're outright blocked (as a spam source) by way too many places to be considered reliable any more. :(
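For reference, relaying all outbound Postfix mail through a smarthost like that really is just a few lines in main.cf. The hostname, port, and credentials below are illustrative; check your relay provider's docs for the real values:

```
# /etc/postfix/main.cf -- send all outbound mail via a third-party smarthost
relayhost = [mail.smtp2go.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
# [mail.smtp2go.com]:587  username:password
```

After editing, `postfix reload` picks up the change; local delivery and receiving are unaffected, only outbound mail goes through the relay.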
Thanks for the info. This all sounds pretty reasonable.
> DKIM records and proper signing of outgoing messages (DMARC records help too)
I've read somewhere that spammers started to use DKIM (or was it DMARC?) records faster than the legitimate web-mail providers.
DKIM and DMARC are not anti-spam techniques per se. They are used to verify that the message is authentic, and that the sender is authorized to send email on behalf of the domain.
If the sender passes as an authorized sender (DMARC aligned), then the receiver has a pretty good indication the email is legit and that the sender was delegated to send email on behalf of the domain. If the email is then classified as spam (based on its contents), it is easier for the receiver to choose whether to adjust the reputation of the domain (in case of DMARC alignment) or the IP (if not aligned).
A DKIM signature and DMARC alignment is no guarantee that the email passes spam filters. The whole point of DMARC is to give the receiver as much information as possible to make a confident decision on the legitimacy of the email, and the reputability of a domain.
DMARC and DKIM work both ways: if you are sending legit email (not spam), they will improve your deliverability, but if you are in fact spamming, then DMARC will reduce your deliverability (as it should).
I have a $4/month VPS that comes with a static IP address. Any reason you shouldn't use that as a proxy to solve the dynamic IP problem?
I've done it for a couple of years. All traffic comes into the VPS, and Wireguard immediately redirects it to my home machine's VM. I can take the VM down, bring it up on another machine, it calls out to the Wireguard server on my VPS, establishes the tunnel, and then my email and web are going to the VM on the new home machine, or wherever in the world I want to bring that VM up. Yet to any clients hitting my public IP (the cloud VPS), nothing has changed except for a few minutes' downtime.
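A setup like that, on the VPS side, looks roughly like the wg-quick config below. All keys, addresses, and the forwarded port are placeholders, and you also need `net.ipv4.ip_forward=1` via sysctl:

```
# /etc/wireguard/wg0.conf on the VPS (illustrative values throughout)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# DNAT inbound web traffic to the home VM over the tunnel, and
# masquerade so replies return via the VPS
PostUp   = iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostUp   = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PostDown = iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

[Peer]
PublicKey = <home-vm-public-key>
AllowedIPs = 10.0.0.2/32
```

The home VM is the side that dials out (with a PersistentKeepalive so the NAT mapping stays open), which is what makes the "bring the VM up anywhere" trick work.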
These IPs are often used by spammers before you get them and have bad reputations, but that's usually a solvable problem.
But if you own the IP for 6 months with no abuse, wouldn’t that solve the problem?
Some providers block, or apply a score penalty to, IPs from popular providers' address blocks due to the amount of spam that comes from them.
Nope, that would totally work.
> 2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.
Weirdly, most of the ISPs I've had on the NBN here in Australia were happy to give me a static IPv4 address for free (and my current one will set you up with an IPv6 /56 block, but it's in beta apparently).
How much power does it take? I've realized that with some services, using the cloud is cheaper than the electricity and hardware cost of self-hosting.
I almost certainly don't save any money considering electricity cost. I have a dell r630 for compute and an r730xd that I use as a NAS. Then I have one switch for the rack and a POE switch for the house. Probably 3-5amps total?
If I started over, I would probably choose more efficient gear.
That said, I don't mind paying for the electricity too much. I enjoy the warm fuzzies of knowing my data lives under my roof.
> Probably 3-5amps total?
A Raspberry Pi draws 2+ amps. Your dual-Xeon server is drawing a lot more power. That said, typically you'd want to measure in watts, because amps are relative to voltage. E.g. an RPi is 2A at 5V (10W) while a computer is probably 5A at 120V (600W) - well over an order of magnitude more power consumed.
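The point about amps being relative boils down to one line of arithmetic:

```python
# Power (watts) = volts x amps, so amps alone aren't comparable across voltages.
def watts(volts: float, amps: float) -> float:
    return volts * amps

pi = watts(5, 2)        # a Raspberry Pi on a 5V/2A supply: 10 W
server = watts(120, 5)  # a 5A draw at 120V mains: 600 W
print(pi, server, server / pi)   # the server draws 60x the power
```

A kill-a-watt style meter at the wall measures exactly this, which is why it's the honest way to compare homelab gear.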
Do you back up offsite? If not, in the event of a fire, your data will live under your "poof!"
I have some automation that does a weekly archive of everything important to a ZFS-based NAS. Home directories are also stored there over NFS, with hourly/weekly/monthly snapshots.
Once a month or so, I plug in two separate 5TB external HDDs and run a backup script that rsync's everything to each one (2 is 1 and 1 is none). These are stored outside my home.
I should probably get some kind of cloud-based / encrypted backup thing going as well. I don't claim that my current backup system is very good.
https://www.rsync.net/ does ZFS receives, so you can send those encrypted / without unlocking.
That's minimum $60 p/m, a bit steep.
There are a couple of different offers. https://www.rsync.net/signup/order.html?code=710b50 is hn discount. There's https://www.rsync.net/products/git-annex-pricing.html and https://www.rsync.net/products/borg.html
> I should probably get some kind of cloud-based / encrypted backup thing going as well. I don't claim that my current backup system is very good.
I recommend Backblaze B2 - $5 / TB, and supports s3's api.
Backing up 100+ gigs of data to Backblaze B2 is PAINFULLY slow. I tried to back up a few terabytes and gave up after a few gigabytes because it was so extremely throttled (paid plan) that a backup would have taken weeks or months to complete. (I have gigabit fiber optic service).
I pull <100W idle with an HPE G8, a ThinkCentre Tiny, and enterprise routing/switching in my basement. All this is old hardware, and you can bring that number down with newer stuff. The idea is to size your equipment appropriately and not have a huge rack running just because you got the servers for free.
Also, while bandwidth costs less in the cloud, compute and storage are much cheaper if you host them locally. If you want a server to host your public website, do it in the cloud. If you want a file server for local use, the price and performance benefits quickly outweigh the power cost. There's also the additional factor of having the equipment/data 100% under my control, which is very important to me.
For a homelab or self-hosting, performance per watt is my favourite measure now.
Depending on your needs (many apps just idle most of the time), a USFF PC can make an excellent Proxmox server.
Check out a Lenovo M920q, Dell OptiPlex 7060, or HP EliteDesk or ProDesk 800 series. They are easy enough to bump to 64GB of RAM and stack up as you need. The 8700T is a desktop-grade CPU in a small shell with a small watt footprint, and it also has vPro and hyperthreading.
It’s not a rack server but it’s easy enough to add a Mac Studio/Mini soon enough for crunching.
I have spent too much time with full rack server gear, and using it can seem like a matter of preference rather than need. It's heavy, hungry, noisy, and my better half didn't like it when I brought the leftover data centre stuff home.
The USFF boxes are near silent and sip electricity.
Those are very good options. I considered those for a 3 node proxmox cluster.
In the end I went with HP t630's. They're much less powerful, but they're also much cheaper and very small! Dell Wyse 3040 or 5060's are also fantastic options. I liked the t630 because it has a proper sata SSD slot and will take up to 64GB RAM. The power bricks are also quite small too.
I'm going to use mine as a home lab testing environment for cluster learning. I'm curious what kind of performance I can get by placing kubernetes nodes on each of the 3 and spreading the workload across the different devices.
Thank you as well for those recommendations. I was looking for some lighter powered and serviceable servers.
As time goes on, for the sake of portability, it seems useful to have one appliance dedicated to the physical house, one for personal/family use, and then, to the extent you play with tech as a hobby, higher-powered servers are useful.
I have been trying to stay with 64-bit Intel to keep things easy but will probably get dragged back towards ARM and 32-bit.
Edit: more typos than hn should allow
The M1 Mac mini with Linux will probably end up being the best self-hosting hardware.
Agreed. Support for packages is improving but still not seamless.
Can ram still be upgraded in Mac minis?
The m2 Mac mini is a workhorse. Exciting times.
They can't be upgraded, as it's on the SoC. But if you are buying them new, you can just max it out initially. Asahi Linux is pretty much complete for server use cases. The majority of what's missing is Thunderbolt, suspend, and video decoders, all stuff you don't need on a server.
Probably the main issue you will run into is finding ARM docker images; usually you have to rebuild them yourself.
The cost of maxing out a mini from Apple typically puts it at at least double the cost on a computational-power-per-watt basis.
It might be feasible to buy 2 or 3 USFFs to cover one maxed-out mini at a fraction of the cost, or stick with the mini doing only certain tasks on 8 or 16GB.
Since the M1 addresses memory differently, less RAM should go further, but I'm not sure if those efficiencies extend to virtualized machines.
I'll try to dig up an old spreadsheet and add how the M1s and M2s stack up to the above.
The ARM docker image situation is a real deterrent. Forums have more and more workarounds and tweaks, so hopefully images will become more available as time goes on. It's often not worth fighting with compiling.
One of the advantages of getting the family to go outside during the warm months is that more of your kWh for self-hosted equipment get burned in the winter, when they are offsetting some of your heating costs.
We need to work on a mostly turnkey solution for these things.
I still think another generation or two of raspi and friends and you can build a little cluster of them.
This GitHub share is pure gold. You’re amazing.
Agreed. It's divine compared to the janky Ansible setups I've seen in the wild.
Thank you for the kind words :)
Very inspiring and thank you for sharing. I run GrapheneOS too but I haven't set anything up like a Wireguard VPN. What is the rough idea of how that works?
I plug my cable modem into a server running the OPNsense firewall [0], which has a wireguard plugin.
I set up a wireguard VPN in OPNsense.
Then I downloaded the wireguard app from F-Droid, and pasted the credentials from the wireguard Android app into the wireguard config on the firewall.
I set the VPN in GrapheneOS as "always on," so from my phone's perspective, it always has access to my internal network, even when on LTE. All my phone's internet traffic ends up going through my home internet connection as a result.
[0] https://opnsense.org/
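For the curious, the phone-side config for a setup like the one described above looks roughly like this (keys, addresses, and the endpoint hostname are all placeholders; the OPNsense plugin generates the firewall side):

```
# WireGuard client profile imported into the Android app
[Interface]
PrivateKey = <phone-private-key>
Address = 10.8.0.2/32
DNS = 10.0.10.1                  # internal resolver so LAN names work over the tunnel

[Peer]
PublicKey = <opnsense-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0     # route everything home, matching "always on"
PersistentKeepalive = 25
```

`AllowedIPs = 0.0.0.0/0, ::/0` is what sends all traffic through the house; narrowing it to just the LAN subnets would give split-tunnel behaviour instead.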
Try installing AlgoVPN; it's pretty much a turnkey wireguard installation, with lots of tutorials on YouTube.
I would advise against setting up wireguard manually.
Check out Tailscale for an easy-to-roll-out WireGuard-based solution that has a fair free tier.
What do you do for backups? If your house gets destroyed in a natural disaster, will all your pictures persist?
I regularly back up to some external HDDs that I keep outside the home.
For pictures specifically, I recently discovered M-Disc [0], which are (allegedly) archival-quality, writable Blu-Ray discs. I'm considering burning an M-Disc of each year's pictures and storing them in jewel cases at a family member's house.
[0] https://www.mdisc.com/
> some external HDDs that I keep outside the home.
Personally, I'm not remotely meticulous enough for that to work. Properly rotating drives sounds like a lot of work if you want to be rigorous about it. You start by running the backup to drive A, then shipping that drive a couple hundred miles away (to be properly location-redundant). Next week, you run the backup to drive B and ship that drive a couple hundred miles away. But at some point you're going to want drive A back, so you can rotate drives and put a more recent backup on it. How do you retrieve those external drives, and consistently?
And then while a drive is in transit (and hopefully not lost), you don't have access to it; it's not an online (referring to its availability) backup solution.
So I mean, I do perform backups to external HDs which I also keep offsite, but because that's nowhere near as rigorous as what teams of engineers and data center techs can do with a much larger budget, I supplement my backups with a cloud storage solution. And I encourage you to do so as well (especially considering encrypted backup services), but you do you.
As for the M-Disc: I mean, it's interesting, but I'd also consider getting an LTO tape library. They're more purpose-built for backing things up, and my personal opinion is they're going to be better for longevity, all else considered.