294 comments
  • stonewall3y

    I self-host literally everything (email, calendar/contacts, VOIP, XMPP, you name it) from my basement with used 1U servers from eBay and a cable internet connection.

    It was probably more hassle than most people would want to bother with to get it set up. But, with everything up and running, there's very little maintenance. I probably spend a few hours a month tinkering still, just because I enjoy it.

    I use a stack of Proxmox VMs, FreeIPA for authn/authz, and Rocky Linux for all servers and workstations. My phone runs GrapheneOS with a Wireguard VPN back to the house. I don't expose anything to the public internet unless absolutely necessary.

    I recently anonymized and Ansibilized my entire setup so that others might get some use out of it:

    https://github.com/sacredheartsc/selfhosted

    • xyzzy1233y

      I had fun doing this until I had kids.

      I have a rack with 10GbE, a UPS, Kubernetes, a ZFS storage server, multiple VLANs, 4 UniFi APs with a locally hosted controller, and all sorts of self-hosted stuff.

      My heart breaks slightly as I watch things slowly degrade and break down due to bit-rot and version creep. I now wish I had a Synology, a flat network, and cloud-everything where possible.

      There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. Sometimes the rack starts beeping while I'm working and I realise the UPS batteries are due for replacement because it's been 4 years. I silence the alarm and get back to the production issue at work, knowing it'll beep at me again in 30 days. I'll still be too busy to fix it. It doesn't help that in Australia the ambient temperature can hit 45°C, pushing disks and CPUs to their limits.

      Just sharing a different perspective...

      • tharkun__3y

        Sounds like a bit of overkill too if you ask me. You can self-host most things that make sense to keep private without going all in on the fun stuff.

        As in, k8s is cool to play with and understand and all but why would I bring that complexity to a simple home setup that can run on a single machine in a corner somewhere?

        You don't have to go to a Synology box and give up everything; there are simpler options short of "cloud everything". Of course you will be giving up some features as well, the more you strip things down, but that can be beneficial in and of itself if you ask me.

        Personally I went from being the "Linux from scratch" guy to running Ubuntu LTS. Natural progression, and the kids can watch any of their movies at any time they want. Keep the hard drives rotated, do an LTS-to-LTS upgrade every few years, and that's about it.

        Heck, I've been running the exact same Postfix, fetchmail and IMAP setup for probably 20 years now, and I don't even remember what all the options I set do any longer. I don't need to, though. It's just rock solid.

        All the other fun stuff has passed me by and I don't care. Don't get me wrong, it's still fun to play with stuff, and we do use k8s at work and it's great. But it's just complete overkill for home.

      • 8fingerlouie3y

        > I had fun doing this until I had kids.

        As I keep telling people, self-hosting is fun as long as your user count is 1. When it grows beyond that, you suddenly have an SLA.

        I self-hosted almost everything (self-hosting e-mail is pointless from a privacy standpoint anyway), and when we had kids I moved to a dual Synology setup with a single Proxmox server for running services. Fast forward some years, and electricity suddenly costs an arm and a leg, so I had to do "something".

        I completely stopped self-hosting anything "publicly" available. Everything moved to the cloud, including most file storage, using Cryptomator for privacy where applicable.

        The server got reduced to a small ARM device with the prime task of synchronizing our cloud content locally and making backups of it, both remote and local. As a side bonus it also runs a Plex server off of a large USB hard drive. All redundancy has been removed, and my 10G network has been switched off, leaving only a single 16-port PoE switch for access points and cameras.

        The Synology boxes now only come online a couple of times a week to take a snapshot of all shares and pull a copy from the ARM device, after which they power down again.

        In the process I reduced my network rack's power consumption from just below 300W to 67W, and with electricity prices for the past year averaging around €0.6/kWh, that means I save around 2050 kWh/year, which adds up to €1225/year, or just over €100/month.

        Subtract from those savings the €25/month I pay for cloud services and I still come out ahead. On top of that I have literally zero maintenance now. My home network is only accessible from the outside through a VPN. The only critical part is backups, but I use healthchecks.io to alert me if those fail.

        I still kept the network segregation, so everything "IoT" is on its own VLAN, as are the kids. The only major change was that the "adults" VLAN is now the management VLAN. I have no wired computers, so maintaining a separate management VLAN over WiFi was more trouble than I could be bothered with :)

        Why are the kids on their own VLAN/WiFi? Because kids wants to play games with their friends, something the normal Guest network does not support. Kids also brings all sorts of devices with new and exciting exploits/vira, and I didn't feel like doing the maintenance on that. So instead my kids have their very own VLAN with access to just printers, AirPlay devices and the Plex server.

        • Terretta3y

          > As I keep telling people, self-hosting is fun as long as your user count is 1. When it grows beyond that, you suddenly have an SLA.

          This is the principle I.T. departments fail to grasp.

          > Kids also brings all sorts of devices with new and exciting exploits/vira...

          Curiosity: while vira is arguably less wrong, hackers of a certain age would have expected viri or virii, which are more wrong:

          https://en.wikipedia.org/wiki/Plural_form_of_words_ending_in...

          From Tom Christiansen, of Perl fame:

          http://www.ofb.net/~jlm/virus.html

          // Meanwhile, in "Kids also brings" – I fully support what you did there!

          • 8fingerlouie3y

            > Meanwhile, in "Kids also brings" – I fully support what you did there!

            It of course also helps that in 2023, literally all school work, for better or for worse, is done through the cloud. I wrote printers above, and yes, they do have access to the printers, but apart from our 3D Printers, the laser/inkjet printers have seen very little use.

            Here the schools use Microsoft, which means assignments are done in Word/Excel, and handed in online either through a school portal, or shared from OneDrive.

            I won't get into the privacy details, but we do have some fairly strict laws concerning kids and identity protection (a thing that recently got Google kicked out of the educational sector), so while not ideal, it is probably not as bad as it sounds.

            Apart from school work, their needs are mostly just local peer-to-peer networking for games and/or internet access, all of which can be accomplished by simply sticking them on a "less restricted" guest network, while at the same time making reasonably sure they're not wiping out the rest of the household's computers :)

            The firewall also runs a very small subset of IDS/IPS rules, mostly concerning malware/bot rules, and we use a NextDNS profile per subnet to filter out the worst.

            > Curiosity: while vira is arguably less wrong, hackers of a certain age would have expected viri or virii

            My bad, I used the Latin plural form of virus, which is vira. In any case, my network setup should keep most vira, viruses, or virii out :)

        • grepfru_it3y

          I hosted email until my email to a college student was rejected with no way of contacting either him or the admins of his school. That was the straw that broke the camel's back.

          I still self-host apps today, but my hardware is old enough that it costs more in power and cooling than what I get out of it, and the ROI on new hardware doesn't justify it

          • 8fingerlouie3y

            > and the ROI on new hardware doesn't justify it

            That was my takeaway as well, considering that a 4-bay Synology costs more in electricity than purchasing the same storage in the cloud (up to a certain point; datahoarders need not apply).

            On top of that I then need to purchase new hardware every 3-6 years if I want reasonable assurance that my data is still there, and doing the math on a 5-year TCO, I would end up paying around double what I pay now, and still have worse data integrity.

            I haven't done the math on where the break-even point is, but I have around 10TB of cloud storage (including backups), as well as DNS services, static web hosting, mail, and a few other curiosities, and I average €25/month on cloud services.

            Comparing that to a 4-bay Synology with 4x6TB WD Red drives, you end up with €1276 in hardware costs (current prices here). Over a 5-year period that's €21.2/month for the hardware alone. Assuming the Synology draws 10W and each WD Red draws an average of 5W, that's 30W of power, totalling around 22 kWh/month, which at €0.6/kWh adds up to an additional €13/month.

            So in total it's around €35/month to self-host what I can host in the cloud (including backups!) for €25/month.

      • Helmut100013y

        This is of course very context-dependent, and no critique whatsoever. I also have kids, and my self-hosting has become ever more important since: no YouTube commercials or auto-continuation for kid videos thanks to Invidious; reduced costs due to a lot of cancelled software plans (because everything runs on my rust); I can care better for my parents, e.g. helping with technology, monitoring, burglars (my homelab sits at my parents' house, remotely connected via IPsec); data backup is solid and under my control (ZFS RAID-Z2, plus offsite backup with borgmatic & rsync); and most importantly, I have reduced my life dependencies and lock-in effects to worldwide companies.

        Maintenance is 1-2 hours a month: Proxmox, various Docker containers nested in unprivileged LXCs, everything automated (cronjobs, Watchtower, backups, etc.). I also built a pretty big PV plant to save on energy costs (30 kWp). My main strategy was a "minimal" approach: going slowly, thinking carefully about _what I really need_, and preferring robustness over new features or software. I usually take 1-2 months of review before deciding to install any new software, most often longer. I am against the "all-in-one" mentality (e.g. I prefer custom bash scripts over third-party automation, and selectively installing needed parts instead of the all-in-one alternatives, e.g. nextcloud/all-in-one).
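
        For anyone wondering what the Watchtower piece looks like in practice: it runs as a container itself and updates the others on a schedule. A minimal sketch (flags and schedule from memory, check the docs before copying):

            docker run -d --name watchtower \
              -v /var/run/docker.sock:/var/run/docker.sock \
              containrrr/watchtower --cleanup --schedule "0 0 4 * * *"

        The --schedule string is 6-field cron (with seconds), so this checks for new images daily at 04:00, and --cleanup removes the old images afterwards.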

      • stonewall3y

        Your perspective resonates with me! I have 3 kids under 6 years old, and I can definitely see this easily creeping up in my future.

        My family situation is partly why I just went with plain old VMs and a Linux distro with a 10-year support cycle. It's easy to keep all the moving parts in my head, and I figure I can mostly coast for 10 years and then reevaluate.

        Thanks for reminding me, I also need to replace my UPS battery...

      • hinkley3y

        To work around procrastination you have to set yourself up for success.

        For instance, what if the alarm sent you the product page for the model of battery you need? You order them, silence the alarm, and when they show up you’re reminded you need to change them. Or if that’s a bad time, when the alarm goes off again.

        I think we’ve only begun to work out how alarms are the wrong solution to the problem and what we need are prompts.

        • ihateolives3y

          Do you have kids? It doesn't work that way. It will never feel urgent enough to justify even the time to set up the alarm or prompt. People vastly overestimate how much free time you have when you have kids. They somehow manage to eat up every single minute.

          • thr7172723y

            As a dad with more kids than the average around here I feel you.

            For me it has improved slightly lately:

            I have recently started giving my kids bonus allowance if they let me work the hours I need.

            And lately I have also played more card games and board games with them in the evenings.

            That said, I am up at around 0400 to start the day and I have already spent 15 minutes on HN so I need to leave now :-)

            • thr7172723y

              Follow up: it helps that they all sleep through the night now and that the pandemic is over so they are at school or kindergarten during core hours at work.

          • doubled1123y

            A couple of them. They're 5 and 8.

            When they were younger, they slept (sometimes) and I didn't. I've never slept much, so I didn't feel like I was missing out on too much.

            Last spring I noticed I could finally do things in the daytime again, too. Which is great, I really missed guitar. Suddenly they're interested in what I'm doing too.

            Haven't talked either into updating my VM fleet for me, but maybe some day.

            I treat my "alerts" as more of a suggested to-do list. The things I'm self-hosting are important to us (we all use them), but not critical. Life will go on until I get to it.

            I've also learned that "boring tech" is the way to go.

          • hinkley3y

            Kids and a partner with health issues. My days are all chopped to hell. If there's a 5-hour window, everyone wants to put an event smack in the middle of it, so I have an hour here and an hour there, and any time I have 3 hours it probably goes to yard work. If it weren't for reminders and having tasks queued up, things would be much, much worse.

            It does get better in high school, sometimes middle school. Once the idea of autonomy occurs to them, they don't need or want you every fifteen minutes. Plus, as another responder said, sometimes they want to see you doing things, and once in a while they want to help. Though it's cool when they do, and then sad when they change their minds. There was a two-week period where mixing compost was the most fun in the world, and then they were no longer interested.

      • BLKNSLVR3y

        Also in Aus - I've got a not-quite-as-complex setup, but I do have it all in a purpose-built room in the shed, which is fitted out with an old box air-conditioner[0] with a thermostat power controller to keep the room below a certain temperature, which should help extend the working life of "all the shit in there". Damn it's nice visiting the "cool room" in summer; there isn't enough floor space to sleep in there, though.

        Also have kids, and they can be demanding when stuff ain't working.

        Also second-guess my life choices, but then again I still love playing around with this stuff, knowing that I can maintain the full stack.

        [0]: Replacing that old air-con with a (far) more modern small split system could possibly have paid for itself by now in power savings. I think I should look into that.

      • logifail3y

        > I watch things slowly degrade and break down due to bit-rot and version creep [..]

        > There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. [..]

        > I now wish I had a Synology, a flat network, and cloud-everything where possible

        No snark intended, but this sounds as though you chose to build a lot of unnecessary complexity into your self-hosting, then discovered that there's almost always a cost to unnecessary complexity(?)

      • aliasxneo3y

        You're not alone :) The only thing I have left is a rather complex network, mostly because it's a pain to undo at this point. Plex went away last year and I just "license" all the kids' stuff through Google Play now...

    • ryjo3y

      Incredible. The usual response to "should I host my own email" is "don't do it; you'll get hacked."

      Three questions:

      1. Have you heard of this complaint?

      2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.

      3. You say "I don't expose anything to the public internet unless absolutely necessary." Is your IP address, reachable via your domain name, one of those "necessary" items?

      • stonewall3y

        1. Yes, most people will tell you not to host your own email, because it's too complicated/difficult to get your mail delivered reliably.

        A lot of this is FUD. Yes, email is a bit more difficult to get right than, say, hosting a web app behind Nginx. It's an old protocol, with many "features" bolted on years later to combat spam.

        I'm not sure how email is easier to "hack," unless there is a zero day in Postfix or something. Back in the day, lots of script kiddies would find poorly configured mail servers that were happy to act as an open relay...maybe the stigma persists?

        To deliver mail reliably, you need 4 things (in my experience):

        - A static, public IP address with a good reputation (ie, not on any spam blacklists)

        - A reverse DNS record that resolves back to your mail server's IP

        - A domain SPF record that says that your mail server is allowed to deliver mail

        - DKIM records and proper signing of outgoing messages (DMARC records help too)
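
        Concretely, the DNS side of that list looks something like this (hypothetical domain and IP, DKIM key truncated; the PTR lives in the reverse zone, which is why you need your ISP's cooperation for rDNS):

            example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 -all"
            mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."
            _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
            10.113.0.203.in-addr.arpa.    IN PTR mail.example.com.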

        2. I have a residential cable internet connection, but pay extra for static IPs. You can probably get by with a dynamic IP and some kind of dynamic DNS service, as long as you don't want to send email. You could still receive email locally if your MX record pointed to some kind of dynamic DNS name.

        Note that some ISPs explicitly block outbound traffic on port 25 due to spammers. You might need to check with yours.
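
        A quick way to test that from your own connection is to try connecting to a known mail server on port 25, e.g. with netcat (the hostname is just Gmail's MX; flags vary a bit between netcat flavours):

            nc -vz -w 5 gmail-smtp-in.l.google.com 25

        If that times out, your ISP is probably filtering port 25.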

        3. The only things I expose to the internet are Postfix (to send/receive emails), XMPP (to chat with others), and my web server. Everything else (calendar/contacts, IMAP, Syncthing, etc) stays behind my firewall, accessible only to internal hosts. I use wireguard on my Android phone to access these services seamlessly when I leave the house.

        I've never bothered to conceal my IP address. For a while, I experimented with using Mullvad VPN for all my egress traffic. Unfortunately I spent all day solving CAPTCHAs... it wasn't worth it (for me, anyway).

        EDIT: I should add that I also have a "normie" email address at one of the usual providers that I use for really important things like bank accounts / utility providers. If I get hit by a bus, I don't want my (very nontechnical) wife to deal with sysadminning on top of my early death.

        For all our personal communications though, we use my selfhosted email domain.

        • glandium3y

          > A static, public IP address with a good reputation (ie, not on any spam blacklists)

          Piece of cake /s

          • PuffinBlue3y

            It's not that hard to do. Harder for residential address blocks for sure. But if you do all the other things previously mentioned like SPF/DKIM etc then cleaning up an IP address isn't that hard.

            The only service we've ever had issues with is Outlook, as they'll ban a whole block for opaque reasons; we just escalate it to the provider and they sort it. We just moved two self-hosted mail servers to new IP addresses, and there were only 2 lists to clear them from, which was a fill-in-a-form style automated process to resolve.

            There's always SES (or other service of choice) as a backup for sending anyway if you notice something getting blocked. It's easy to switch to that for a day or two whilst you resolve an issue - though I must admit I think we only had to do that once in the last 12 months.

            Maybe I'm breaking some kind of sysadmin code here and I don't realise it's a secret that self-hosting email isn't that hard? Am I supposed to keep up the myth that it is? :-) Any greybeards here please let me know!

            • technothrasher3y

              I played around a bit with sending via SES and Sendgrid. I generally found that deliverability on either of those was actually worse than even one of my slightly dirty IPs.

              • justinclift3y

                Maybe try with smtp2go?

                Previously, I was using Sendgrid as well. But they seemed to start doing the "growth at any costs" bullshit, which for an email-sending company means accepting and delivering spam. (Regardless of the PR/weasel-words these places use to deny it, that's what it comes down to.) Thus lots of places now just drop all mail that comes from Sendgrid, no workaround.

                When that happened, a friend pointed me to smtp2go, which I've used personally since, and which we now use at work. We haven't (yet) had anything blocked as spam (fewer than 10k emails sent a month, though), so it seems like they've not done the "growth at any costs" bullshit like Sendgrid.

              • PuffinBlue3y

                You're not the first person I've heard say that. It's interesting that we haven't faced that issue. I wonder if we'll get a nasty surprise the next time we try as it has been a while since the last time we did it.

            • glandium3y

              There are entire datacenters blocked by some blocklist providers. Like, AFAIK, the OVH ones.

        • justinclift3y

          Also note that it's super easy to configure postfix (and likely others) to send all outbound email via a third party service.

          I personally use smtp2go.com, and was on their free tier for ages (now upgraded via work). Can recommend, as it "just works" and avoids all the mucking around with SPF/DKIM/etc.
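
          For reference, the Postfix side of this is just a few lines in main.cf, something like the following (relay host/port from memory, check your provider's docs):

              relayhost = [mail.smtp2go.com]:2525
              smtp_sasl_auth_enable = yes
              smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
              smtp_sasl_security_options = noanonymous
              smtp_tls_security_level = encrypt

          with your credentials in /etc/postfix/sasl_passwd (run postmap on it afterwards).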

          Oh, on a similar note, definitely avoid Sendgrid if you want to send email via a third party. They're outright blocked (as a spam source) by way too many places to be considered reliable any more. :(

        • ryjo3y

          Thanks for the info. This all sounds pretty reasonable.

        • Joker_vD3y

          > DKIM records and proper signing of outgoing messages (DMARC records help too)

          I've read somewhere that spammers started using DKIM (or was it DMARC?) records faster than the legitimate web-mail providers did.

          • LeonM3y

            DKIM and DMARC are not anti-spam techniques per se. They are used to verify that the message is authentic, and that the sender is authorized to send email on behalf of the domain.

            If the sender passes as an authorized sender (DMARC aligned), then the receiver has a pretty good indication the email is legit and that the sender was delegated to send email on behalf of the domain. If the email is then classified as spam (based on its contents), it is easier for the receiver to choose whether to adjust the reputation of the domain (in case of DMARC alignment) or of the IP (if not aligned).

            A DKIM signature and DMARC alignment are no guarantee that the email passes spam filters. The whole point of DMARC is to give the receiver as much information as possible to make a confident decision on the legitimacy of the email, and the reputability of a domain.

            DMARC and DKIM work both ways: if you are sending legit email (not spam), they will improve your deliverability, but if you are in fact spamming, then DMARC will reduce your deliverability (as it should).
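
            You can see the outcome of all these checks in the Authentication-Results header the receiver stamps on the message. A made-up example where everything is DMARC-aligned (same domain in smtp.mailfrom, header.d and header.from):

                Authentication-Results: mx.receiver.example;
                    spf=pass smtp.mailfrom=sender.example;
                    dkim=pass header.d=sender.example;
                    dmarc=pass header.from=sender.example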

      • roxgib3y

        I have a $4/month VPS that comes with a static IP address. Any reason you shouldn't use that as a proxy to solve the dynamic IP problem?

        • wankle3y

          I've done it for a couple of years: all traffic comes into the VPS, and WireGuard immediately redirects it to a VM on my home machine. I can take the VM down and bring it up on another machine; it calls out to the WireGuard server on my VPS, establishes the tunnel, and then my email and web traffic flow to the VM on the new home machine, or wherever in the world I want to bring that VM up. Yet to any clients hitting my public IP (the cloud VPS), nothing has changed except for a few minutes of downtime.
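
          In case it helps anyone replicate this, the VPS side is roughly the following (keys, addresses and ports are made up, and you also need net.ipv4.ip_forward=1):

              # /etc/wireguard/wg0.conf on the VPS
              [Interface]
              Address = 10.8.0.1/24
              ListenPort = 51820
              PrivateKey = <vps-private-key>
              # DNAT inbound mail/web down the tunnel; MASQUERADE so replies return via the VPS
              PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 10.8.0.2
              PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2
              PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE

              [Peer]
              # the home VM: it dials out to the VPS and sets PersistentKeepalive = 25
              # on its side, so the tunnel stays up even behind home NAT
              PublicKey = <home-vm-public-key>
              AllowedIPs = 10.8.0.2/32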

        • toad_master3y

          These IPs are often used by spammers before you get them and have bad reputations, but that's usually a solvable problem.

          • aborsy3y

            But if you own the IP for 6 months with no abuse, wouldn’t that solve the problem?

        • gtaylor3y

          Some providers block, or penalize the spam score of, IPs from popular providers' address blocks due to the amount of spam that comes from them.

        • stonewall3y

          Nope, that would totally work.

      • girvo3y

        > 2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.

        Weirdly, most of the ISPs I've had on the NBN here in Australia were happy to give me a static IPv4 address for free (and my current one will set you up with an IPv6 /56 block, but it's beta apparently).

    • novok3y

      How much power does it take? I've realized that for some services, paying for the hosted version is cheaper than the electricity and hardware cost of running it yourself.

      • stonewall3y

        I almost certainly don't save any money once you consider electricity costs. I have a Dell R630 for compute and an R730xd that I use as a NAS. Then I have one switch for the rack and a PoE switch for the house. Probably 3-5 amps total?

        If I started over, I would probably choose more efficient gear.

        That said, I don't mind paying for the electricity too much. I enjoy the warm fuzzies of knowing my data lives under my roof.

        • vineyardmike3y

          > Probably 3-5 amps total?

          A Raspberry Pi draws 2+ amps. Your dual-Xeon server is drawing a lot more power. That said, you'd typically want to measure in watts, because amps are relative to voltage. E.g. an RPi is 2A at 5V (10W), while a server is probably 5A at 120V (600W), well over an order of magnitude more power consumed.

        • pmarreck3y

          Do you back up offsite? If not, in the event of a fire, your data will live under your "poof!"

          • stonewall3y

            I have some automation that does a weekly archive of everything important to a ZFS-based NAS. Home directories are also stored there over NFS, with hourly/weekly/monthly snapshots.

            Once a month or so, I plug in two separate 5TB external HDDs and run a backup script that rsyncs everything to each one (2 is 1 and 1 is none). These are stored outside my home.

            I should probably get some kind of cloud-based / encrypted backup thing going as well. I don't claim that my current backup system is very good.
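
            (The script itself is nothing fancy; stripped down, it's essentially this, with hypothetical paths:)

                #!/bin/sh
                # mirror the NAS archive onto both external drives
                set -e
                for dest in /mnt/backup-a /mnt/backup-b; do
                    # -a preserves permissions/times; --delete keeps each copy an exact mirror
                    rsync -a --delete /tank/archive/ "$dest/archive/"
                done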

      • digitallyfree3y

      I pull <100W idle with an HPE G8, a ThinkCentre Tiny, and enterprise routing/switching in my basement. All this is old hardware, and you can bring the draw down further with newer stuff. The idea is to size your equipment appropriately and not have a huge rack running just because you got the servers for free.

      Also, while bandwidth costs less in the cloud, compute and storage are much cheaper if you host them locally. If you want a server to host your public website, do it in the cloud. If you want a file server for local use, the price and performance benefits quickly outweigh the power cost. There is also the additional factor of having the equipment/data 100% under my control, which is very important to me.

      • j453y

      For homelab or self-hosting use, performance per watt is my favourite measure now.

      Depending on your needs (many apps just idle most of the time), a USFF PC can make an excellent Proxmox server.

      Check out a Lenovo M920q, Dell OptiPlex 7060, or HP EliteDesk or ProDesk 800 series. They are easy enough to bump to 64GB of RAM and stack up as you need. The 8700T is a desktop-grade CPU in a small shell with a small watt footprint, and it also has vPro and hyperthreading.

      It's not a rack server, but it's easy enough to add a Mac Studio/Mini later on for crunching.

      I have spent too much time with full rack server gear, and using it at home can seem like a matter of preference before need. It's heavy, hungry, noisy, and my better half didn't like it when I brought the leftover data centre stuff home.

        The USFF boxes are near silent and sip electricity.

        • PuffinBlue3y

          Those are very good options. I considered those for a 3 node proxmox cluster.

        In the end I went with HP t630s. They're much less powerful, but they're also much cheaper and very small! Dell Wyse 3040s or 5060s are also fantastic options. I liked the t630 because it has a proper SATA SSD slot and will take up to 64GB of RAM. The power bricks are also quite small.

        I'm going to use mine as a home lab testing environment for cluster learning. I'm curious what kind of performance I can get by placing 3 Kubernetes nodes on each and spreading out the workload across the different devices.

          • j453y

            Thank you as well for those recommendations. I was looking for some lighter powered and serviceable servers.

          As time goes on, for the sake of portability, it seems useful to have one appliance dedicated to the physical house, one for personal/family stuff, and then, to the extent that you're playing with tech as a hobby, higher-powered servers are useful.

            I have been trying to stay with Intel 64 bit to keep things easy but will probably get dragged back towards arm and 32 bit.

            Edit: more typos than hn should allow

        • Gigachad3y

          The M1 Mac mini with linux will probably end up being the best self hosting hardware.

          • j453y

            Agreed. Support for packages is improving but still not seamless.

          Can RAM still be upgraded in Mac minis?

            The m2 Mac mini is a workhorse. Exciting times.

            • Gigachad3y

            They can't be upgraded, as the RAM is on the SoC. But if you are buying them new, you can just max it out initially. Asahi Linux is pretty much complete for server use cases. The majority of what's missing is Thunderbolt, suspend, and video decoders, all stuff you don't need on a server.

            Probably the main issue you will run into is finding ARM docker images; usually you have to rebuild them yourself.

              • j453y

              The cost of maxing out a mini from Apple typically puts it at at least double the cost on a computational-power-per-watt basis.

              It might be feasible to buy 2 or 3 USFFs to cover one maxed-out mini at a fraction of the cost, or to restrict the mini to doing only certain tasks on 8 or 16GB.

              Since the M1 addresses memory differently, less RAM should go further, but I'm not sure if those efficiencies extend to virtualized machines.

              I'll try to dig up an old spreadsheet and add how the M1s and M2s stack up against the above.

              The ARM docker image situation is a real deterrent. Forums have more and more workarounds and tweaks, so hopefully images will become available as time goes on. It's often not worth fighting with compiling.

      • hinkley3y

      One of the advantages of getting the family to go outside during the warm months is that more of your kWh for self-hosted equipment get burned in the winter, where they offset some of your heating costs.

    • hinkley3y

      We need to work on a mostly turnkey solution for these things.

      I still think another generation or two of RasPi and friends and you'll be able to build a little cluster of them.

    • beaukin3y

      This GitHub share is pure gold. You’re amazing.

      • scruple3y

        Agreed. It's divine compared to the janky Ansible setups I've seen in the wild.

      • stonewall3y

        Thank you for the kind words :)

    • triyambakam3y

      Very inspiring, and thank you for sharing. I run GrapheneOS too, but I haven't set up anything like a WireGuard VPN. What is the rough idea of how that works?

      • stonewall3y

        I plug my cable modem into a server running the OPNsense firewall [0], which has a WireGuard plugin.

        I set up a WireGuard VPN in OPNsense.

        Then I downloaded the WireGuard app from F-Droid and pasted the credentials from the Android app into the WireGuard config on the firewall.

        I set the VPN in GrapheneOS to "always on," so from my phone's perspective it always has access to my internal network, even on LTE. All my phone's internet traffic ends up going through my home internet connection as a result.
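
        For a rough idea, the phone-side config ends up looking something like this (keys, addresses and hostname all made up):

            [Interface]
            PrivateKey = <phone-private-key>
            Address = 10.10.10.2/32
            DNS = 10.0.0.1

            [Peer]
            PublicKey = <opnsense-public-key>
            Endpoint = home.example.com:51820
            # 0.0.0.0/0 routes all of the phone's traffic through the tunnel
            AllowedIPs = 0.0.0.0/0
            # keeps the tunnel alive across NAT/carrier idle timeouts
            PersistentKeepalive = 25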

        [0] https://opnsense.org/

      • j453y

        Try installing Algo VPN; it's pretty much a turnkey WireGuard installation, and there are lots of tutorials on YouTube.

        I would advise against setting up WireGuard manually.

      • zwilliamson3y

        Check out Tailscale for an easy-to-roll-out WireGuard-based solution that has a fair free tier.

    • zamnos3y

      What do you do for backups? If your house gets destroyed in a natural disaster, will all your pictures persist?

      • stonewall3y

        I regularly back up to some external HDDs that I keep outside the home.

        For pictures specifically, I recently discovered M-Discs [0], which are (allegedly) archival-quality, writable Blu-ray discs. I'm considering burning an M-Disc of each year's pictures and storing them in jewel cases at a family member's house.

        [0] https://www.mdisc.com/

        • zamnos3y

          > some external HDDs that I keep outside the home.

          Personally, I'm not remotely meticulous enough for that to work. Properly rotating drives sounds like a lot of work if you want to be rigorous about it. You start by running the backup to drive A, then shipping that drive a couple hundred miles away (to be properly location-redundant). Next week, you run the backup to drive B and ship that one a couple hundred miles away too. But at some point you're going to want drive A back, so you can rotate drives and put a more recent backup on it. How do you retrieve those external drives, and do it consistently?

          And while a drive is in transit, and hopefully not lost, you don't have access to it; it's not an online (referring to its availability) backup solution.

          So I mean, I do perform backups to external HDs which I also keep offsite, but because that's nowhere near as rigorous as what teams of engineers and data center techs can do with a much larger budget, I supplement my backups with a cloud storage solution. And I encourage you to do so as well (especially considering the encrypted backup services available), but you do you.

          As far as the M-Disc goes: it's interesting, but I'd also consider getting an LTO tape library. They're more purpose-built for backing things up, and my personal opinion is that they're going to be better for longevity, all else being equal.

  • MuffinFlavored3y

    > Email newsletter tools: Old or new, your pick

    Am I wrong to think that most businesses/people pay for Mailchimp because getting your e-mail actually delivered into the inboxes of your target audience/customers is non-trivial? I.e., you're going to end up in "spam" otherwise?

    I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 addresses and actually have it get delivered at a high rate. I would love to learn that I'm wrong, though.

    This article could've talked about DataDog vs. the Jaeger/ELK stack, I think, for tracing/logs.

    • dijit3y

      > I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 addresses and actually have it get delivered at a high rate. I would love to learn that I'm wrong, though.

      You can do this, I have done this, but honestly it's annoyingly painful and you're always one bad ad campaign away from being nuked to death by people marking your emails as spam.

      There are a lot of rules to follow, and even when you follow them, you need to start emailing at a low volume for each new sending IP until its reputation grows over time.

      • Nextgrid3y

        To be fair, if people are marking your emails as spam frequently enough to get your IPs/domains blacklisted then it suggests the system is working as designed and you shouldn’t be sending whatever you’re sending to those people.

        • dijit3y

          Nah, about 2% of my "Thank you for ordering, here is your receipt" mails also get marked as spam.

          Some number of people just smack the "spam" button for nearly everything that is automated, and those "spam" buttons seem to work on absolute numbers, not percentages; so if you have a high number of people in the pool, then you will be false-flagged eventually.

          We had a very explicit double opt-in system, made it super easy to unsubscribe, and emailed once a month at most; and we still had people marking our communications as spam. I'm not sure what else we could have done, honestly, to weed out the people who just smack the spam button.

          That said, there was a lot of variance: emoji in the headline was the campaign that caused 9% of people to mark spam and 20% to unsubscribe, but it was enough to have us blackholed for 2.5 months.

          I think a major issue is that people don't want to even check how to unsubscribe; they see the "mark as spam" button as a "just make this go away" button.

          • nkrisc3y

            > I think a major issue is that people don't want to even check how to unsubscribe and they see the “mark as spam” button as a “just make this go away” button.

            You can thank unscrupulous actors for this. I get so much spam I’m not going to try to figure out what is actually spam or not, nor am I going to risk clicking “unsubscribe” links in emails I assume are malicious spam anyway. If it looks automated and I don’t know what it is or can’t remember why I’m getting it, it’s spam.

            Especially marketing emails. I would never knowingly sign up to receive a marketing email so if I do receive yours it’s either spam or you tricked me into signing up for it, so it’s also spam as far as I’m concerned.

            • kshacker3y

              I do this. And what would help (hey Google) is if Gmail would remember that I requested an unsubscribe and then offer to mark as spam anything arriving 72 hours after my request to unsubscribe. As it is now, I need to remember everyone I've tried to unsubscribe from, and when I get their email 3 days, 3 weeks or 3 months later, I don't want to have to recall my unsubscribe list.

              • pimlottc3y

                I created an “unsubscribed” label for this. I haven’t bothered to automate the rest of the steps you describe but I’m sure it could be done.

            • ghaff3y

              Here's the thing. If companies never collected and used email addresses in exchange for providing free webinars, reports, developer seminars, books, etc., they'd do far less of those things, because digital marketing would be much more like shouting out into the void with often difficult-to-measure results. And they'd generally be way out-marketed (and out-sold, because marketing brings in leads).

              You may be fine with all that but remember that selling pays for engineering salaries.

              • nkrisc3y

                You're right. And it's led us to a place where I mark most email I get as spam.

            • dijit3y

              > Especially marketing emails. I would never knowingly sign up to receive a marketing email

              That's fair. Some people do sign up for the promise of getting some deals, something we actually delivered on often: when we wanted to clear the warehouse, we sent discount/clearance emails to the signed-up users rather than putting it on the site.

              We also used to trial "own-produced" products at discounted rates for people, as a sort of beta test.

              • nkrisc3y

                I’ve no doubt that some people do choose to sign up to receive marketing emails.

                I don't, and yet I still receive them, so I can't tell which marketing emails are "legitimate" (because a website pre-checked a small box I didn't notice) and which are simply spam, so it all gets marked as spam.

          • ChainOfFools3y

            People who mark things as junk mail or spam typically have no idea that this action can have an upstream impact on spam filtering algorithms.

            They typically have no idea how any of this stuff works and just assume that the purpose of marking something as spam is to prevent them from seeing any more of it, personally, in the future. It doesn't occur to them that their preference thus exerts a small influence over the experience of potentially millions of other people.

            In decades past, when preferences weren't so tightly linked to each other among otherwise unaffiliated users, the simple definition of spam as "stuff I'm not interested in seeing in my inbox" was completely sufficient to inform a user's decisions about using the spam button. But today that definition is something closer to "stuff I'm not interested in seeing and that I am fairly certain few if any other people are interested in seeing, either."

            • the_af3y

              I disagree with your modern definition. Spam to me is unsolicited commercial emails. All email "ads" are spam. Newsletters I didn't subscribe to are spam. Anything trying to sell me something I didn't subscribe to is spam.

              You bet I'm going to mark it as spam and I hope it creates trouble for the sender.

              PS: I assume we all agree scams, "Russian singles", chain letters, "little Jessica is 4 and dying of cancer", etc, are all spam. That's a shared common ground.

              • dijit3y

                The annoyance I felt was that I was a "good" postmaster and I was punished for being part of a tribe of bad postmasters.

                Google et al. can’t tell the difference when you hit spam.

                We never bought or sold any email lists, we went out of our way to ensure you wanted to be on the list (unsubscribing was a single link, with no extra checkbox or button), we emailed only occasionally, and above all we did our absolute best to make the content humorous and engaging.

                You can make the case that there should be "no automated mail trying to sell things" and honestly, that's fine, but why the hell are people marking the receipts for things they bought as spam?

                • Mordisquitos3y

                  > but why the hell are people marking the receipts for things they bought as spam?

                  I'm someone who often marks receipts for things that were bought as spam. Note that I said things that were bought—not things that I bought.

                  I have the rare privilege of having an e-mail address which is ${common_first_name}.${non_rare_last_name}@gmail.com, and I am sick and tired of businesses that do no e-mail verification of their customers' addresses. I will simply delete the good old "Click here to confirm your address", but I have no patience for the hundreds of emails I receive either because businesses keep asking customers who share my name, and who do not understand what email is, for their "email address", or because businesses skip the email confirmation/verification step out of sheer incompetence or out of the product-cult of "conversion" rates. Those I mark as spam, because I want them to pay the price.

                • mr_toad3y

                  > why the hell are people marking the receipts for things they bought as spam?

                  5% seems to be about the noise floor of any human activity. Mistakes, carelessness, ignorance, stupidity, mental illnesses. You can’t assume any rhyme or reason for it.

                  • recursivecaveat3y

                    I've heard this referred to as the "lizardman's constant", at 4%. I.e., 4% of people will respond with a trolling, malicious, or simply accidental wrong answer to any given survey question.

                • the_af3y

                  > but why the hell are people marking the receipts for things they bought as spam?

                  I never thought people did that. That's definitely not spam. It is a one-time interaction confirming an operation you just did. Also not spam: when you buy something and the tracking sends you updates via mail.

                  • Dma54rhs3y

                    It happens often, we sell fairly expensive items and regular confirmation and tracking number emails still get reported.

                    I am certain rising IPv4 prices are dictated by spammers, not just availability.

                    No one likes spam but when you have to send legitimate emails you quickly learn the other side of the problem as well.

                  • Spivak3y

                    I would consider both of these spam unless the user explicitly asked for them. Not "you bought something" but hitting the "yes, I would love to receive my receipt via e-mail" button. You can always allow the user to retrieve their receipts later on your site by logging in, or "accountless" by sending a code to their email. In-person interactions seem to have no issue with buttons for "no receipt, email receipt, print receipt, text receipt."

              • Breza3y

                I agree with you. Outlook gives me the option to classify an email as either spam or phishing. Newsletter I didn't ask to receive? Spam. Newsletter I signed up for but am tired of getting? Unsubscribe. Little Jessica is 4 and dying of cancer? Phishing.

              • zamnos3y

                Personally, the effort to sell me something doesn't need to be there for me to consider it noise, and where marking something as spam (or phishing) is the only way to tell the system something is noise, I'll mark stuff as spam even if it's not an advertisement.

            • hurril3y

              Oh, we do. We just don't want your shitty newsletter.

          • trimbo3y

            I also used to work on email at scale (XXX million per day), and even then I encouraged everyone to click spam[1] for any email they did not expect to receive and do not want.

            My email address and phone notifications are a direct link to me and are sacred in terms of getting my attention. Yet many companies confuse getting my email address with permission to mail me anything at any frequency. They also believe that "unsubscribe" should be able to redirect me to a website with a bunch of confusing checkboxes. Neither is acceptable.

            So, when I click "unsubscribe", that should immediately unsubscribe me forever with no extra effort; but so many emailers don't do this that I just gave up and started reporting spam.

            > emoji in the headline was the campaign that caused 9% of people to mark spam and 20% to unsubscribe, but it was enough to have us blackholed for 2.5 months

            A guess with no info, but it sounds like a case of not removing unengaged recipients, then suddenly getting their attention with a different looking subject line or an email that came at an unexpected time.

            If people receive bulk for a long time and they never engage, remove them from the list or pause sending to them to avoid this effect.

            [1] - For those who don't know much about email, two things should happen when you click the spam button in Gmail and select "Unsubscribe and report spam" (it's been a while, correct me if this changed):

            1) List-Unsubscribe should be triggered, and this should unsub you from the list. In 2023, I personally expect this to be immediate, with no more email received, but I give senders a window of a day or so.

            2) The FBL (feedback loop) should give the sender a signal that this specific campaign was considered spam.
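
            For reference, the headers involved look something like this (URLs hypothetical; the second one is the RFC 8058 "one-click" variant):

                List-Unsubscribe: <mailto:unsub@example.com>, <https://example.com/unsub/abc123>
                List-Unsubscribe-Post: List-Unsubscribe=One-Click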

          • AviationAtom3y

            Call me crazy, but for a problem folks seem to imply has had everything, including the kitchen sink, thrown at it... why have I yet to see a single email with the unsubscribe button at the very top, front and center, the absolute first thing I see?

            • dijit3y

              It is. It literally is.

              Whenever Gmail, Thunderbird, or Office 365 Outlook notices a working unsubscribe link in a message, it puts its own unsubscribe link at the top of the message, right next to the sender's email address.

              I'm sure you mean the content of the email, but we don't put an unsubscribe link front and center for double opt-in users or transactional emails, since you chose to be there.

              The unsubscribe link lives near the bottom of the email along with the link to support, in clear text in a font and colour that matches the content.

              • DangitBobby3y

                Funny, I know to hit the spam button at the top and the unsubscribe button buried in a link in the email footer. Am I blind? Have I somehow clicked "spam" and "unsubscribe" hundreds of times without seeing an obvious "unsubscribe" button at the top? Very dubious.

                Nope! Just checked the Gmail web view. There is a toolbar at the top with a very prominent "spam" button, and two kebab menus with "filter messages like this", "report spam", and "report phishing", but no unsubscribe button.

                • dropofwill3y

                  It only shows up if the email has a List-Unsubscribe header set.

                  In Gmail it will appear as a banner underneath the row of buttons.

              • _dain_3y

                >Whenever gmail, thunderbird or office365 outlook notices a working unsubscribe link in a message, it puts its own unsubscribe link at the top of the message, right next to the address of the sender's email.

                I've never seen such a thing. I just checked Thunderbird, on an email that has an unsubscribe link. There was no such button.

              • Izkata3y

                I remember seeing that in Gmail years ago, but haven't seen it in a long time. I thought they removed the feature.

          • chillfox3y

            I don't mark receipts as spam, but I do delete them without opening them, and I really can't blame the people who do mark them as spam.

            When I buy something, I instantly get a notification from my bank; that's basically the receipt in a standardized, compact form (2 lines of text) with the amount in my local currency. This is much more useful than the receipts online stores send, where the relevant details are buried in some huge HTML template and the amount needs to be converted.

            When I buy something in a physical store, they have the decency to ask if I want a receipt. I don't understand why that's such a hard concept for online stores/services to get: ask me if I want a receipt, don't just stick it in my inbox because you have my email address.

            • dijit3y

              > When I buy something in a physical store they have the decency to ask if I want a receipt. I don't understand why that's such a hard concept for online stores/services to get, ask me if I want a receipt

              2 things though.

              1) Your goods aren't handed to you at the point of sale with online transactions.

              What happens if your order doesn't show up? Tracking numbers aren't always generated on the fly (whether the site can pre-generate tracking numbers depends mostly on the postal service being used).

              2) How do you return the item, if you have no proof of purchase?

              You could say "keep it in my account", but then how do we even keep an account for you if we can't verify your email address for password reset mails?

              • chillfox3y

                1) A lot of the things I buy are electronic, so yes, I get them immediately at point of sale.

                And if that's not the case then how about asking me if I would like an email with a tracking number when it ships?

                2) I have never had to show an email as proof when returning something; usually they know damn well that I bought and paid for it.

                In general it seems like you are conflating all the different kinds of email as being one and the same. They are not. I specifically said I don't want receipts by default, because they are just spam in the vast majority of cases. I never said anything about email verification emails or shipping emails with tracking numbers.

                Edit: I just wanted to add that needing a receipt for a refund is a truly alien concept to me. I can’t even imagine how bad the consumer protection laws are where you live.

                Here, even in physical stores we can just give them the card we used to buy the thing and that’s enough to process a refund.

                • dijit3y

                  I'm not understanding something here.

                  If you buy something online, i.e. via an e-commerce site like AliExpress or Amazon, how do you get a receipt if you don't have an account?

                  Returning an item requires a receipt in the majority of cases, unless the product ships directly with a return label.

                  • chillfox3y

                    Is it even possible to buy anything from those two stores without opening an account?

                    Nope, not here. If you have to return something then you contact the company and they will verify from their records that you bought it, usually by looking up the purchase by the card you used.

            • nl3y

              For accounting purposes, the bank receipt will rarely have the details needed for tax (e.g. in Australia they don't include the ABN).

              • chillfox3y

                The vast majority of the things I buy don't need receipts to be kept and reported for tax. It's personal stuff like a streaming subscription, a book or a game.

                And the times I have tried to claim things I bought online as work-related expenses (in Australia), I have run into the problem that the receipt provided by the company is ambiguous (not sufficient by itself) because amounts were just specified with a $ symbol instead of USD/AUD.

                • nl3y

                  > have run into the problem that the receipt provided by the company is ambiguous (not sufficient by itself) because amounts were just specified with a $ symbol instead of USD/AUD

                  Claiming expenses is different to tax requirements. With tax you need the receipt as a record, but the general ledger actually records the amount in whatever currency you use.

                  For expense claims every company does it differently.

                  • chillfox3y

                    I mean, the few times I have handed a receipt to an accountant to do my tax they have asked me what currency it's in and I have had to go to the bank records to answer that.

                    Anyway, it's beside the point. The point is that online stores should ask (a tick box would do), as the majority of the time I don't want the receipt and don't need it for anything at all.

          • rationalist3y

            How do you prevent people from entering the wrong address, and thus a random person receiving your emails?

            If it's just a one-off receipt, I'll delete it. If a business I never had any dealings with starts spamming me, then I mark it as spam: a second receipt pisses me off, and a third receipt from the same company gets marked as spam, etc. If you want to send more than one email, ask for permission.

            Unfortunately I have a few technology-challenged acquaintances still using my common-ish firstlast@ gmail, but once I get them switched over, everything that inbox receives will automatically be marked as spam.

            • dijit3y

              > How do you prevent people from entering in the wrong address, and thus a random person receiving your emails?

              Double opt-in.

              You can't just enter an email address to subscribe; I used to send you an email with a link to click to complete the process.

              For transactional email this would be handled by getting people to either create an account or use Paypal for guest checkout. (this was 2012)
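
              For the curious, a minimal double opt-in sketch (my own Python illustration, not the code we actually ran back then; the in-memory store and print-based mailer are stand-ins):

                  import secrets

                  PENDING = {}        # token -> email, awaiting confirmation
                  CONFIRMED = set()   # addresses allowed to receive mail

                  def send_email(to, subject, body):
                      print(f"To: {to}\nSubject: {subject}\n\n{body}")  # stub mailer

                  def subscribe(email):
                      # Step 1: don't add the address yet -- park it behind a random token.
                      token = secrets.token_urlsafe(32)
                      PENDING[token] = email
                      send_email(email, "Confirm your subscription",
                                 f"Click to confirm: https://example.com/confirm?t={token}")

                  def confirm(token):
                      # Step 2: only the real mailbox owner can complete the loop.
                      email = PENDING.pop(token, None)
                      if email is None:
                          return False
                      CONFIRMED.add(email)
                      return True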

          • notatoad3y

            >Nah, about 2% of my "Thank you for ordering, here is your receipt" mails also get marked as spam.

            I see about the same thing. And I'm purely sending transactional receipts (that is, immediately after purchase, not for a subscription or pre-order or something disconnected from the user having just entered their email in the "send my receipt to" field) and for non-trivial amounts (averaging around $100). I can't understand how so many people mark them as spam, and although I have no evidence to support it, I have to assume that some poorly-configured firewalls are sending spam reports automatically.

          • supertrope3y

            I assume you have transactional and marketing emails coming from separate email addresses. If the email ecosystem made it easier for bulk senders, this would just shift the burden onto recipients.

            Right now, illegal spammers, our common enemy, externalize the costs of their business onto email providers and users. For every dollar they take, they impose 100x that in spam-filtering costs. >99.9% of raw email traffic is spam.

            Blame spammers who offer fake unsubscribe links. Or companies who violate CAN-SPAM by requiring login to unsubscribe instead of one click opt-out. Marking as spam always works.

          • SoftTalker3y

            Was your “Unsubscribe” link at the bottom of the message in small print or was it a large button labeled “stop emailing me” at the top?

          • thr7172723y

            Can confirm. A few years ago my coworker and I caught our boss using the spam button instead of delete, even on customer mails discussing (I think) an upcoming project.

            He was otherwise highly functional and spoke and wrote fluently in three languages, but had never noticed the difference between spam and delete.

        • mrmattyboy3y

          One thing to say to this: I work at a company and have personally set up quite a few mail servers for mass email sending and warmed up IPs... not fun..

          (these are all legitimate interest emails)

          I was in a meeting with a couple of people from the team, and a QA engineer mentioned that every time he's done with an email in Gmail, he spams it off... _wut_..

          Yes, we have been blacklisted a handful of times, and based on spam reports (feedback loops), people do mark emails as spam for completely nonsensical reasons... e.g. users signing up, getting and using the activation email, using the service, and then marking the activation email as spam.

          Edit: I definitely think there's a bell curve for sending your own emails:

          * If you have a very small platform (at least in my experience), reputation doesn't mean that much, emails are generally accepted by providers (assuming IPs that you used haven't been previously used for spammy activity), so self-hosting might make some sense (though a third-party probably wouldn't be too expensive if you did want to).

          * If you start sending 100s-1000s of emails/day, I guess some third party solution would make sense, since running dedicated IPs/domains and servers just for sending emails might not be beneficial.

          * As you scale to sending 100K+ emails a day, I personally think setting up your own servers starts making more sense.

        • safety1st3y

          Is that the "Just So" story that people who don't work with email at scale believe?

          Email deliverability is a full-time job. There are so many "potential spam" markers that are interpreted differently (and opaquely) by different ESPs. Getting your email delivered to a lot of people is essentially non-deterministic.

          Including a link to a Google Doc in your message body is enough to get you blacklisted by some email providers if you don't have a prior history with them. Yes, there will usually be some process to get off the blacklists and doing it will mostly stick even if you continue to email Google Docs to people. But the key word there is mostly. As I said, deliverability (at least at scale) is a full time job.

        • gscott3y

          It's been my experience that people can't tell the difference between the delete button and the spam button.

        • monsieurbanana3y

          Could be, or it could be that those systems are so aggressively tuned that newcomers have no chance to not be labeled spam while established players are whitelisted.

          (I truly don't know, but I don't think it's as simple as you're saying)

          • nottathrowaway33y

            Email delivery is not purely a protection racket.

            People use Gmail because they legitimately want to filter out the unsolicited spam, marketing, etc. To an anonymous attacker, there is no cost to send these emails. Middlemen like MailChimp and Sendgrid play the role of converting email from a free, publicly exploitable channel into a paid, KYC one.

            Email, for better or for worse, is the de facto standard communication channel for almost everything, but by design a single computer can send an unlimited number of emails to other addresses. That maybe was a good enough design originally, but the role of email has grown so much that, today, it should be a paid KYC channel.

            What is the alternative to spam filtering? Everyone maintains their own allowlist of good senders?

            • me-vs-cat3y

              > What is the alternative to spam filtering?

              Make sending email cost the sender. No, I don't know how. The best ideas I've heard (1) make the sender store the message and (2) have no hope of being widely adopted.
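
              The other classic proposal here is proof-of-work a la Hashcash: the sender burns CPU per message, the receiver verifies almost for free. A toy sketch (the difficulty and stamp format are my own; the real Hashcash spec adds dates, versions, and a spent-stamp database):

                  import hashlib
                  from itertools import count

                  BITS = 20  # ~2^20 hash attempts per message, on average

                  def mint(recipient):
                      # Sender: grind nonces until the hash has BITS leading zero bits.
                      for nonce in count():
                          stamp = f"{recipient}:{nonce}"
                          digest = hashlib.sha256(stamp.encode()).digest()
                          if int.from_bytes(digest, "big") >> (256 - BITS) == 0:
                              return stamp

                  def verify(stamp, recipient):
                      # Receiver: a single hash, microseconds of work.
                      if not stamp.startswith(recipient + ":"):
                          return False
                      digest = hashlib.sha256(stamp.encode()).digest()
                      return int.from_bytes(digest, "big") >> (256 - BITS) == 0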

        • samstave3y

          See my other comment below on how IP blocks for IPv4 went through the roof on price and availability...

          The global spam market is what caused the hockey-stick rise in the IPv4 "shortage".

        • IncRnd3y

          Except, that's not a fair take.

          It only takes a moment for a single person to get your IP or domain blacklisted, not a concerted campaign. There are many blacklists that accept direct submissions from any unauthenticated person for any target domain/IP.

          What's difficult is not to get onto a blacklist but to get off of a blacklist.

      • capableweb3y

        + unsurprisingly, lots of hosting providers disable SMTP/block port 25/ban you if any email sending is being detected coming from your instances, legitimate or not, as the problem with hosting IPs that are sending spam is so annoying (and even illegal in some places).

      • Thaxll3y

        You can't just send 100k emails with a good delivery rate; if you're a nobody, Gmail will never trust you.

        You can follow all the rules they want (DKIM, SPF, etc.) and somehow it will not be delivered, because you don't know exactly how they rate your IP.

      • Breza3y

        This is the correct answer. I work at a company that sends millions of emails every week from our self-hosted IP range. If you have a high quality list of recipients who actually want to hear from you and warm up your IPs gradually, you can be successful.

      • 3y
        [deleted]
      • djbusby3y

        How does one even know that messages are being tagged as spam?

    • gwbrooks3y

      You can get high deliverability -- the keys, whether you're using your own servers or someone else's, come down to a clean list that won't generate complaints and staying within the TOS of your mailserver host or third-party SMTP service.

      Host your mail-creation/list-management/analytics stack yourself (I like Mautic and MailWizz but there are other options) and use a third party for SMTP services. Amazon SES charges $1 per 10,000 emails; other services are slightly more expensive but it's all still very affordable.
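
      The split is straightforward in practice: your self-hosted stack composes the message and owns the list, the relay only delivers. A minimal Python sketch (the host is SES's us-east-1 SMTP endpoint; addresses and credentials are placeholders for the ones in your own SES console):

          import smtplib
          from email.message import EmailMessage

          # Composed by your self-hosted list-management stack...
          msg = EmailMessage()
          msg["From"] = "news@example.com"        # placeholder sender
          msg["To"] = "subscriber@example.org"    # placeholder recipient
          msg["Subject"] = "Monthly newsletter"
          msg.set_content("Hello from my self-hosted mail stack.")

          # ...relayed through SES, which handles delivery and IP reputation.
          with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as smtp:
              smtp.starttls()
              smtp.login("SES_SMTP_USERNAME", "SES_SMTP_PASSWORD")
              smtp.send_message(msg)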

      • tedivm3y

        I'm not sure why you're getting the downvotes but this is the way for people who want some level of self hosting. I finally gave up hosting my own mail server about two years ago- I had been self hosting email since 2005, but it reached the point where delivery to the big companies was extremely difficult. If someone wants to host their own software but actually have their emails delivered they really do need a third party SMTP service that specialized in deliverability or has a big company behind it.

        • wankle3y

          > I finally gave up hosting my own mail server

          A lot of us haven't.

          > If someone wants to host their own software but actually have their emails delivered they really do need a third party SMTP service that specialized in deliverability or has a big company behind it.

          A lot of us don't.

        • nottathrowaway33y

          You're sending your emails over the internet anyway. You're paying for the reputation of the 3p smtp service and it's a pretty liquid/perfect market.

      • locustous3y

        I've had really poor deliverability from SES. Our emails went straight to spam on many providers. Just trying to do email verification on new signups.

    • luckylion3y

      That's also why the phishing campaigns now use Amazon SES (and amazon happily lets them, as long as they pay, it seems): their email will get delivered.

    • jd33y

      > I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 e-mails and actually have it get delivered at a high rate? I would love to learn if I'm wrong though.

      The company I work for has an outbox feature which supports this and it was a non-trivial problem to solve. We ended up using sparkpost with a bunch of modifications to isolate potentially bad actors (i.e., clients who pay for our software but send what is basically spam) to an individual sending pool. We also have CSMs that handle this and help to coach clients to not send spam.

      https://support.sparkpost.com/docs/deliverability

    • galdor3y

      You go with Mailchimp (or equivalent) for newsletters because they give you the subscription form, handle email verification, unsubscriptions, GDPR mentions everywhere, provide useful stats and notifications, segmentation and targeting… Getting email delivered is indeed really hard, especially if you send thousands of emails, but building all these other features is insanely time consuming. The cost of Mailchimp is negligible in comparison.

      Same reason why companies use Sendgrid for marketing campaigns.

      • SoftTalker3y

        Anything I get from Mailchimp or similar services is auto-flagged as spam by rule.

    • j453y

      A dedicated IP address can be warmed up to deliver email well enough but it can take some time.

      Mail server software like MDaemon can quickly handle the heavy lifting of improving deliverability. It's a small price for the deliverability. I'm just a former user of it.

      It’s ok to use an external email provider for outgoing email delivery.

      ESPs (email service providers) are handy because they can separate outgoing transactional emails from marketing ones to ensure deliverability.

    • samstave3y

      The biggest aspect that used to be used in spam detection (from an OSI, not a content reading perspective) was source IP blocks.

      Many people don't realize that spam was the original source for social networking...

      I can't type up all the history I know quickly, but Friendster (who 'invented the social graph'), Hi5, Tagged, and MySpace were all started as overlays on email-harvesting mechanisms feeding --> spam....

      They needed to create high value email-lists of valid emails.

      Asking for them outright was stupid, as most people refused.

      Then they figured out that the best social-engineering mechanism (in the 'hacker' sense) was to add a service (chat and share with your friends; give us your email and their emails so we can connect you by sending them invites, etc.): have people validate their personal email, offer a novel e-'service' to 'connect' with your friends within some context, and have you pre-validate the email list based on your invites and contacts... then parlay the MLM structure to create better, more validated email lists.

      Then you sell the lists on the black market to spammers looking to avoid a high bounce rate, since the addresses are real.

      Then they started nefariously stealing your contacts with auto-opt-in agreements and such....

      Then, as the battle between spam and socially-interesting services ramped up, the anti-spam companies (such as Postini, which was bought by Google) became the spam filters, selling their services to BigCorps, and began to realize that filtering on the sending IPs was a good measure for determining spam (along with rate-limiting and other aspects) -- such that spammers were getting blocked based on delivery IP blocks.

      This set off a market incentive for spammers to buy up swaths of IPv4 blocks so they could swap out IPs...

      Then there were many ranges, sources, traceroutes, etc. used to determine senders and ID them as spammers....

      So the spammers invented VPN/tunneling delivery routes such that they could send from a central pool of machines through a number of global relays, and be delivered to the endpoints from a variety of global IP blocks.

      There was a market for IPv4 blocks all over the world, and spammers were spending big bucks on all aspects, from paying for the IP blocks to relationships with ISP/VPN/etc. tech....

      All while attempting to provide what was a thin layer of utility service to the user to keep what was effectively continued access to the growing address books of their users and keep them engaged on the platform such that they could keep knowing if existing or new contacts were valid.

      There were even back-room deals between spammers/tech/isp etc to allow access.

      So, the "social networks" we know know of were birthed literally upon spam.

      -

      Have you ever wondered why, as soon as TikTok came out, all of a sudden a fuck-ton of spam was hitting your Gmail inbox (previously filtered by Postini)? <-- Because TikTok was eating the revenue lunch.

      Zuck literally stated that the entire revenue model for FB was "senator, we sell ads"

      When asked in an interview with Google, "what kind of company do you think Google is?": "Well, most people think you're a search engine, but you're actually an advertisement correlation engine."

      In an interview with Twitter (don't forget about the infamous AT&T Room 641A?), to "what do you think Twitter is?": "Twitter is a global sentiment monitoring engine" (this was ~2006 or '08? I can't recall).

      --

      Source: I know these founders and many of the original devops members from the above companies, and other more scary outcomes from the above statements.

      And here we are today with the advanced learning all built upon "consumption" ad algos

  • bsnnkv3y

    My experience has been that after having become comfortable with Nix, self-hosting is the path of least resistance for the majority of "mainstream" tooling where you can pick between paying for a SaaS or self-hosting. So nice to not have to deal with Docker containers to deploy (most things) anymore.

    I see a lot of people suggesting hosting on a VPS, but I feel that a Hetzner Auction box is often much better bang-for-buck and serves as a nice remote dev/build box for projects that need that extra oomph when you aren't working from a capable desktop or laptop.

    [1]: This was the article that finally opened my eyes to the power of Nix for self-hosting https://arne.me/blog/plex-on-nixos, and it is such a huge upgrade from the previous Docker setup I was running[2]

    [2]: https://github.com/madslundt/docker-cloud-media-scripts

    • PuffinBlue3y

      That was a great write up on nix. Nix isn't something I know much about at all so thanks for the link.

      What surprised me the most was learning that rclone can mount object storage locally! That's very interesting to learn :-)

    • simongray3y

      One of the main reasons I use Docker is being able to run the exact same Dockerfiles locally and in prod, with virtualisation taken care of automatically on e.g. Mac.

      Is Nix a viable alternative to that?

      • ParetoOptimal3y

        > Is Nix a viable alternative to that?

        Yes, and it even provides stronger reproducibility guarantees: what you build locally and what's on prod are exactly the same.

        You can also build a docker container from the Nix expression, see:

        https://nix.dev/tutorials/building-and-running-docker-images

        If you are interested, I recommend then also checking out https://zero-to-nix.com/

      • bsnnkv3y

        Again, just my experiences as a long-time DevOps person: you can build Dockerfiles on two different machines and get two entirely different results (i.e. success vs failure), and especially on macOS, Docker performance is quite poor, even more so when mounting directories from the host.

        Nix on the other hand will produce the same result every time wherever I run it. This alone for me is enough reason to prefer it over Docker the majority of the time.

    • seqizz3y

      Oh, I was looking for the thread of the cult :) /s

      As a fellow follower, I can also recommend SNM[0] if anyone wants to self-host their e-mail. Works with zero maintenance, except upgrades, which cause a few lines to change.

      [0]: https://gitlab.com/simple-nixos-mailserver/nixos-mailserver

  • einhverfr3y

    At PGConf India, one of the keynotes addressed exactly this topic. The largest stock broker firm in India had made the decision to self-host everything. The CTO made a number of points that I think are missed in this discussion and article, namely:

    1. You may think you are a software company, but HR, accounting etc are just as critical to your operations as the customer product. Therefore there isn't really a distinction between core business and non-core business that people like to think, and

    2. By self-hosting you ensure you learn the technology and can therefore respond to problems yourself. In an environment where businesses are increasingly on the hook for defects in their services to the end user, that's a good thing.

    Obviously hiring knowledgeable people is probably the bottleneck but it is still a cost saver and it is important to create an organizational culture where people can learn the technology on the job.

    • sgt3y

      Agreed fully. You might risk slightly more downtime, but the overall benefit of owning it yourself is well worth it long term.

      • einhverfr3y

        Over time, if you come to understand the technology, you can fix things a managed service cannot, so you might actually risk less downtime if you prioritize that.

        At least that's my experience based on fighting weird bugs on managed database services.

    • efields3y

      This is me, a principal frontend engineer and "player-coach" team lead of a few engineers at a pharmaceutical company. We do as much as we can for the various digital properties, short of intense design (leaders want a willing agency they can torment).

    • InnerGargoyle3y

      can you link it?

  • linsomniac3y

    Self-hosting is a big operations problem, with few tools to automate it.

    Long ago, I had an associate tell me that he was having some success with setting up Wordpress sites for local political organizations. I said to him: "Oh, that's really neat! What are you doing to ensure that the sites stay up to date with security patches?" His response was completely unrelated to my question, which I figured was my answer and was why there are so many hacked sites out there.

    Anything I deploy needs to have an upgrade plan. Ideally, something that provides a package (either in the distro or in a repo the project provides), so "apt update" will resolve it. Docker can be a good way as well; Sentry does a pretty good job at this.

    • KronisLV3y

      > Anything I deploy needs to have an upgrade plan. Ideally, something that provides a package (either in the distro or in a repo the project provides), so "apt update" will resolve it. Docker can be a good way as well; Sentry does a pretty good job at this.

      I wrote an article called "Never update anything": https://blog.kronis.dev/articles/never-update-anything which in truth argued that while updates are necessary, they're also going to break things... a lot. And there isn't always going to be an upgrade path either (e.g. using AngularJS or Clusterpoint).

      In my experience, even containers break things surprisingly often: everything from GitLab, Nextcloud, OpenProject to even things breaking in regular server updates, like a Debian update breaking GRUB or another Debian install automatically launching exim4 which prevented my own mail server from working.

      Perhaps that's because of how we build and package software, because of the fact that we don't separate the runtime from the data enough (e.g. persistent directories) or that we make too many assumptions about the environments...

      Regardless, I can understand why some don't even update working but insecure software: because of the risk of turning it into secure but non-working software.

      • XCSme3y

        I agree. I have run many WordPress sites, UXWizz dashboards, and even my own tools written in Node.js, and they never ever broke by themselves in 10+ years. The only time there is an issue is if I decide to update/change anything. In general, software that has once worked will always work. If I open my PSP (PlayStation Portable) from 2005, it will still work as well as on the first day; all the games work the same, and the boot time and interface are faster than on most consoles nowadays. Why does it still work? Because it once worked and nothing changed.

      • linsomniac3y

        Agreed, updates are probably at some point going to break things, and you're going to have to spend time fixing them, maybe even recovering from backups... As I said, an operations problem.

        Leaving things without upgrades is a problem as well though, due to security issues.

        • KronisLV3y

          That is very true, that's also why having (working) backups is so important, or even knowing what to back up and restore, so you don't end up overwriting old vendored packages/dependencies but can restore the configuration/data instead.

          I'd go as far as to suggest that updating is something that you must do while you have any sort of a stake in the overall outcome of the thing you're working on, given how destructive breaches and getting hacked could be. That also does mean that you will absolutely need to take the operations overhead into account and plan for it.

      • dicknuckle3y

        An anecdote: I also had a container app fail, from a trusted provider that I assumed would have plenty of testing in place, but maybe it was too synthetic.

        Sonarr from LinuxServer.io briefly changed to a dev build and boned the DB before switching back to a stable build.

        • KronisLV3y

          You know, I thought that I'd actually provide a bit more information about my experiences as well, because while these are anecdotes, they are still something that happened and shouldn't be entirely ignored. These things do happen, sometimes because of the architecture and what's inside of the container in question. For example, that was largely the case with the GitLab Omnibus containers, because of them containing lots of dependencies, or the same with OpenProject.

          GitLab: https://blog.kronis.dev/everything%20is%20broken/gitlab-upda...

          OpenProject: https://blog.kronis.dev/everything%20is%20broken/openproject...

          (I still think that both are good software, but not without their pain points, like updates in this case)

          Then again, even something like Nextcloud or comparatively simpler systems like Grav still run into issues with upgrades, sometimes because going across major versions is too much, other times because the containers have vendor dependencies in volumes/bind mounts, which cause a plethora of issues.

          Nextcloud: https://blog.kronis.dev/everything%20is%20broken/nextcloud-i...

          Grav: https://blog.kronis.dev/everything%20is%20broken/grav-is-bro...

          But it's a little bit odd when even the operating systems we use have similar issues. Like the aforementioned Debian update breaking GRUB, which prevented the server from booting altogether, or that other time the exim4 package got launched and prevented my mail server from starting because the port was already bound.

          Debian breaking GRUB: https://blog.kronis.dev/everything%20is%20broken/debian-and-...

          Debian launching exim4: https://blog.kronis.dev/everything%20is%20broken/debian-upda...

          Other times you get problems with upgrading across libraries, or language runtime versions, which I didn't mention in the original post, but which is no less of an issue. It's even harder than just regular package or software updates, because sometimes it might necessitate a major rewrite.

          Example of Java 8 to Java 11: https://blog.kronis.dev/everything%20is%20broken/upgrading-j...

          That said, I'm not sure how good the idea of documenting every software breakage out there is, because the folder with screenshots and information about things that went wrong just seems to be getting longer and longer for me. But hey, maybe it will serve as an insight into what things were like around the 2020s some day, when hopefully we've figured out how to upgrade software more safely.

    • x0x03y

      The entire discussion on the link obscures the fact that SaaS companies are providing a real service. Even if you don't want the product to be updated, staying abreast of security patches, external API changes, OS changes, client changes, browser changes, etc. is real work. Self-hosting requires the person hosting to do all the KTLO (keep-the-lights-on) work.

    • chillfox3y

      I have run a lot of WordPress sites, it's easy for a skilled admin to run it securely for a long time with barely any effort and unfortunately it's also easy for a user to make it insecure.

      • linsomniac3y

        Sure, and that was what I floated with that guy, but the impression I got from him is that he installed it and moved on. So, as you say, a skilled admin can do it, given some ongoing attention. Which is exactly what I'm talking about WRT operations.

        • XCSme3y

          I consider myself a "skilled admin", yet I don't feel like I have to do much to keep a WordPress instance running. Now it even updates itself. I often just install WP, a theme, and maybe one or two plugins, and then it runs without issues forever.

  • triyambakam3y

    > which honestly kind of upset me a lot

    I've seen this language more and more frequently: minimized (kind of) + maximized (a lot) qualifiers. No real insight, just interesting.

    • scubbo3y

      In my idiom, at least, "kind of" is not solely a diminisher, but can also be an approximator - to say something "kind of upset me" _could_ mean "it upset me, but not a great deal", or it could mean "it had an effect on me which is complicated and difficult to concisely describe, but which can be approximately described as 'upset'". In that reading, this isn't a contradiction at all - "which honestly had an extremely large effect on me which was similar to, but not entirely the same as, being upset".

    • rhaway847733y

      I don’t think the “kind of” here is serving to minimize the “upset ness”. I think it’s describing the fact that the person wasn’t really “upset”, but some other emotion which they can’t express, which was kind of like being upset, but not exactly the same.

    • eointierney3y

      As a modifier it's kind of a mollifier

      Edit: just looked it up and Wikipedia has a definition I didn't know :)

      https://en.m.wikipedia.org/wiki/Mollifier

      However, in the colloquial usage 'round these parts, to mollify means to soften or make gentle

      https://www.etymonline.com/word/mollify#etymonline_v_17411

    • powersnail3y

      To my non-native speaker ear, "a lot" indicates the strength of the emotion ("very upset"), while "kind of" is a defensive wording indicating a lack of objectivity or surety ("not saying it's objectively annoying, but it does upset me"). It shows up a lot, in my experience, when people are talking about something anecdotal or subjective.

    • zeroonetwothree3y

      I think "kind of" is being used as a modal marker here, similar to how "like" is used. In particular, this could have been said as "which honestly like upset me", but because of some negative backlash against excessive use of "like" as a marker, people have switched to using other words.

      This particular use of the word is trying to "soften the blow" of the discomfort that the listener (in this case, reader) would feel at the un-modified phrase. So if you just said "which honestly upset me a lot" that might seem like an extreme reaction for just a price increase of some service (it's not as if someone is dying), so the "kind of" is added to signal to the listener/reader that the speaker acknowledges that this is perhaps too-strong language for the situation.

    • Jedd3y

      In terms of annoying-once-you-notice-it idioms, should readers assume dishonesty on all other statements made by people that prefix only some small subset of their claims with 'Honestly ...' and 'To be honest ...'?

      • zeroonetwothree3y

        It's typically used for statements that might otherwise be interpreted non-literally or hyperbolically. Of course the risk is that over time it becomes so commonplace that it starts to be used itself to connote hyperbole, much like how "literally" has come to mean "figuratively" in many uses. But c'est la vie.

      • cal853y

        No, because they mean "honest" in the sense of "frank/unguarded", not "truthful".

        • Jedd3y

          'To be frank [or candid] ...' seems like a superior phrase, in that case.

      • _dain_3y

        no, it's just a carelessness

    • bitsinthesky3y

      Nice catch. I've been using this construction and I've been oblivious to its hypocrisy until now :) I might start seeing how far I can stretch it to make it obvious how silly it is. "Which honestly did not at all upset me a ridiculous amount." Sounds unhinged.

    • jeppester3y

      This is definitely a thing, and I worry that I'm guilty of it myself.

      I don't know if I should thank you for this insight or if you just cursed me.

    • 3y
      [deleted]
    • creativenolo3y

      This. I've seen a lot of people using this on its own more and more frequently too.

  • fabianhjr3y

    It's better to design, implement, and use local-first software: https://www.inkandswitch.com/local-first/

    • __MatrixMan__3y

      I'm developing such an app. I'm excited to get to the network connectivity part so I can see how much I've saved by making the client smart.

      I think I'm going to be able to get away with just running the server for 72 minutes a day (three minutes every hour). The client will know to sync data during those time windows. 1hr of latency is fine for a lot of things if the client is smart about what it caches.
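
      The client-side check is about as dumb as it sounds - a sketch, assuming the windows are the first three minutes of each hour (UTC):

          from datetime import datetime, timezone

          def server_is_up(now=None):
              now = now or datetime.now(timezone.utc)
              return now.minute < 3  # first three minutes of every hour

          def maybe_sync(queue):
              if server_is_up():
                  print(f"syncing {len(queue)} queued items")
                  queue.clear()
              # otherwise keep working from the local cache and retry later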

      • triyambakam3y

        What is the app?

        • __MatrixMan__3y

          It's a protocol for crowd sourced annotation data. So an example app that uses it would go something like this:

          Suppose you have a food allergy, and you have a reaction to something in a restaurant. You want to leave a note on that menu item "contains allergen XYZ" but you don't want to write on that menu, you want to annotate all such menus.

          You'd take a picture, OCR happens, some algorithm thinks about line wrapping and renders it as a list of strings, and then a rolling hash identifies the "features" among those strings (any substring hashes to a 16-bit integer, the ones where the first 8 bits are off count as features). Then you "paint" the menu entry in the color "contains allergen XYZ". The features that are nearby your "brushstroke" (i.e. text highlighting) are stored in a table for that "color" which is eventually synced with other users.

          Later, someone else who subscribes to that "color" and has a trust relationship with the first user can scan the menu, follow the same process to find the features, which are used as indices to look up the brushstroke. Then they're able to see the annotation left by the other user as an overlay on the image they queried with: supposing they have the same allergen, they now know to avoid that item.

          I'm calling the whole scheme Semantic Paint and the index-friendly-feature-finder Gnize (like cognize now, recognize later).

          It's meant for local-ish use by small-ish communities, so the data you actually have to store on your device is pretty small and restricted to colors that you've chosen and other users you've explicitly (or transitively) trusted. And you're communicating over spans like weeks or months, so it's not a big deal if it takes a few days for one brushstroke to make it to another user's device. It's not like they notice its untimely arrival; it just goes in a feature database for later query.

          I also think it might have applications in genomics/proteomics, e.g. annotating a gene.
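
          To make the feature-finding step concrete, here's a toy version (the window length and hash are illustrative choices for this sketch, not what Gnize actually uses, and it recomputes each window's hash rather than rolling it):

              W = 8  # window length in characters

              def hash16(s):
                  h = 0
                  for ch in s:
                      h = (h * 31 + ord(ch)) & 0xFFFF  # cheap 16-bit polynomial hash
                  return h

              def features(text):
                  # A window counts as a "feature" when the top 8 of its 16 hash
                  # bits are off, so roughly 1 in 256 windows qualifies.
                  out = {}
                  for i in range(len(text) - W + 1):
                      h = hash16(text[i:i + W])
                      if h >> 8 == 0:
                          out[h] = text[i:i + W]  # index -> anchoring substring
                  return out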

          • triyambakam3y

            Wow, fascinating. Thank you for detailing that. Do you have somewhere to follow its development?

            • __MatrixMan__3y

              I'm tickled that you're interested. "I have this cool idea involving algorithms on strings" isn't exactly everybody's favorite conversation.

              Here's the old version, hardly even proves the concept: https://github.com/MatrixManAtYrService/gnize

              Python was giving me trouble though, so I'm switching to Nim for performance reasons, and also because I can compile it to C, objective-C, and JavaScript for use by a wider variety of clients. It's just an empty shell right now but that project will end up here: https://github.com/gnize and hopefully soon.

              • triyambakam3y

                > "I have this cool idea involving algorithms on strings" isn't exactly everybody's favorite conversation.

                Haha yeah I get you. I wouldn't have expected it would be for me either but the approach and use case you shared is really cool. Thanks for the links. It will be neat to see the development in Nim, as well.

                • cb3213y

                  You both might be interested in this little Nim program "framed" to frame & digest text for near-duplicate detection:

                      https://github.com/c-blake/ndup/blob/main/framed.nim
                  
                  which does a rolling hash with the late Bob Uzgalis' Buz Hash, which is (IMO) about 10x simpler than Rabin fingerprinting. It's really just xor out old, xor in new.

                  In my context of near duplicate detection I worry about false negatives as well as false positives and displaying near-bys to the user & such. So, things are set up for random seeds for very independent framings & digests (to avoid unlucky ones). { This is very different from the more common "backup/rsync/transfer" context of "content-based slicing" aka "content-defined chunking". }
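
                  For the curious, the update rule itself fits in a few lines - a sketch with an illustrative 64-bit table and 32-byte window, not framed's actual parameters:

                      import random

                      random.seed(1)  # different seeds -> independent framings
                      T = [random.getrandbits(64) for _ in range(256)]  # one entry per byte value
                      W = 32  # window size in bytes

                      def rotl(x, n):
                          n %= 64
                          return ((x << n) | (x >> (64 - n))) & (2**64 - 1)

                      def buz_update(h, old, new):
                          # rotate everything, xor out the leaving byte, xor in the new one
                          return rotl(h, 1) ^ rotl(T[old], W) ^ T[new]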

                  • __MatrixMan__3y

                    Oh wow, this is great. We're up against similar problems here, but what I've got as unnamed musings you've got as links to relevant Wikipedia articles. I've got some reading and thinking to do.

                    I'll probably just include both algorithms, make it a parameter, and do a bunch of testing (which I'll be sure to publish). I can't put my finger exactly on why I feel better about Rabin fingerprints, maybe buzhash just needs time to sink in.

                    It's only recently that I've been thinking of this as a framing problem. The right words eluded me because I only care about the frames near the annotation. So instead of "multiple statistically independent framings" I've been thinking of separate "feature channels" (one for each prime polynomial in GF2) which cause different substrings to be identified as features, but I think it's more or less the same thought deep down.

                    I want a more or less uniform distribution of features so that wherever a user wants to put an annotation, there's always something nearby to anchor it to. In the event of a large gap I've imagined schemes to switch to different framings until it's filled. Or maybe I show the features to the user and let them click something to "change the channel" until they get a set of features that works for their particular annotation.

                    Putting this as a nearly-duplicate-files problem makes me wonder if it could be used to arrange files into a graph based on how they differ. Like when browsers unhelpfully just increment a counter and redownload the file every time you click a link to it. It's hard to keep track of the fact that copy 6 is the one I want, but if they were rendered as a sort of constellation of files, where the edges are longer for more dissimilarity... That would be fun to play with.

                    Anyway, I'll be looking through this more in the future. Thanks for sharing it.

                    • cb3213y

                      You are welcome. Thanks are too rarely offered. :-)

                      You may also be interested in word stemming (such as used by the snowball stemmer in https://github.com/c-blake/nimsearch) or other NLP techniques. I don't know how internationalized/multi-lingual that stuff is, but conceptually you might want "series of stemmed words" to be the content fragments of interest.

                      Similarity scores have many applications. Weights on graph of cancelled downloads ranked by size might be one. :)

                      Of course, for your specific "truncation" problem, you might also be able to just do an edit distance against the much smaller filenames and compare data prefixes in files or use a SHA256 of a content-based first slice. ( There are edit distance algos in Nim in https://github.com/c-blake/cligen/blob/master/cligen/textUt.... as well as in https://github.com/c-blake/suggest ). Or, you could do a little program like ndup/sh/ndup to create a "mirrored file tree" of such content-based slices then you could use any true duplicate-file finder (like https://github.com/c-blake/bu/blob/main/dups.nim) on the little signature system to identify duplicates and go from path suffixes in those clusters back to the main filesystem. Of course, a single KV store within one or two files would be more efficient than thousands of tiny files. There are many possibilities.

    • triyambakam3y

      Very cool, and interesting that Martin Kleppmann of DDIA is an author. I am glad to come across this - I was brainstorming such a manifesto, now I can use this as a resource.

      One local-first switch that I recently made is migrating from ynab.com to my own Libre Calc spreadsheets. It took a few days to figure out all the formulas, but now I have even more control over how I track my budget.

      • grvdrm3y

        Do you integrate your bank accounts/etc via API or do you pull the data into the sheet manually on some periodic basis? Second question - are you using your own categories or do you rely on bank/card?

        Asking as I’m sort of in the middle of the two. I keep a mostly complete spreadsheet of my expenses but that doesn’t account for things that I purchase regularly like groceries/Amazon/etc. Trying Copilot for a year right now as well.

        • triyambakam3y

          Right now I have a sheet for each account that has columns for date, payee (i.e. the merchant), envelope (e.g. Groceries or Books etc.), memo, outflow, inflow and balance. And I enter the transactions manually. I enjoy that part actually since it keeps me mindful and I only have two accounts (checking and a credit card) that I frequently use.

          Copilot looks very polished and handy. Though I have found that I really like the customized control that a spreadsheet gives me. Though I haven't used it, I could implement most of what I see in Copilot in my spreadsheet, and have already.

      • r1cka3y

        Care to share your Libre Calc spreadsheet formulas?

        • triyambakam3y

          I'd be glad to, but I'm not sure yet how helpful it would be to do so here, without just sharing the full spreadsheet, which I would need to clean up and anonymize first. It may be very particular to my needs. I drew inspiration from many other "YNAB spreadsheets" (searching reddit etc.). I can say that learning the SUMIFS formula to sum the relevant transactions for a given envelope was a big key to making it work well; the other details are polish around it. I'm afraid this might not come across very clearly. I found that whenever I wanted a certain feature, searching for how to do it in Excel or Google Sheets (or Libre Calc, though there are fewer resources specific to it) gave me the next bit that I needed.
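
          For a taste, the heart of it is one formula per envelope, something like this (the column ranges are placeholders for my sheet's layout):

              =SUMIFS($E$2:$E$1000, $C$2:$C$1000, "Groceries") - SUMIFS($F$2:$F$1000, $C$2:$C$1000, "Groceries")

          i.e. outflows (column E) minus inflows (column F) for every row whose envelope column (C) says "Groceries".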

      • triyambakam3y

        bordercases - I can't reply to your comment, it's dead.

      • bordercases3y

        Interested in sharing?

  • flakeoil3y

    If you have a family and your family uses all this self-hosted stuff (backup, file storage, email etc), what would happen if you die or have a serious health issue? Do you think your spouse or kids could get the data out themselves? No-one will have a clue where the backups are, how the emails are stored, where those pics are etc. And it's even more difficult if you have implemented some clever encryption, 2-factor auth, etc.

    We all think it's simple - just copy the files from "that" folder on the server to your own machine or a USB disk - but it is not so easy in practice for most people. Even for someone competent it can be difficult to sort through the mess if it is not well documented.

    I say this as someone who hosts the backup and all the pics on a NAS.

    • romwell3y

      > Do you think your spouse or kids can get the data out themselves? No-one will have a clue where the backups are, how the emails are stored, where those pics are etc.

      As opposed to what, your spouse having no idea which cloud services you used for file storage, what login credentials you had, and having no ability to access those accounts?

      > Even for someone competent it can be difficult to sort through the mess if it is not well documented.

      The most difficult part is sorting through the mess, however it is documented.

      It's been three years since my father's passing, and I have yet to open his laptop. I can't.

      I don't need any credentials to access the box of old family photos to get them scanned, and yet it's still an item on the to-do list.

  • margorczynski3y

    As for hosting your own apps, I found a Hetzner VPS or something similar to be very good. Just pack them up into a docker-compose with your CI/CD pushing an image into a repository, and you can host a lot of low-to-medium traffic solutions on a single box with the cost being a fraction of "the Cloud" (especially PaaS). On the box there is a single Nginx acting as a reverse proxy to the exposed compose ports, offloading SSL.

    In such a solution you just need to ask yourself whether Postgres, Grafana, etc. should be shared between the apps or put into each of the compose configs as a service and handled separately. Both have their upsides and downsides.

    • Svarto3y

      Do you know of (or have you used) any guide to get started? I'm reasonably proficient but struggle to put all the moving pieces together.

      • margorczynski3y

        Well, it was mostly pieced-together information - some that I already had from experience, and the rest pulled from the Internet.

        But which parts do you mean exactly? How to set up nginx to act as a reverse proxy? Or putting together a docker-compose? I'm thinking of doing some kind of guide for this type of thing on my blog - basically from setting up a Hetzner VPS box to running a docker-compose'd app and exposing it on an HTTPS endpoint for your own domain (so probably also setting up DNS).

    • contradictioned3y

      I have something similar, but with traefik instead of nginx. Traefik integrates very nicely with docker using labels, such that the labels configure e.g. domain, path, http-auth etc for the web service running in a container.

      • Witoso3y

        Same here but with caddy-docker-proxy which I found a bit easier than traefik.

      • 3y
        [deleted]
    • e2e43y

      Similar, but am using CapRover for docker images management and setup.

      • XCSme3y

        I also use CapRover. The initial setup was harder than I expected, and there definitely is a learning curve in figuring out how to package your own apps or even how to make some of the pre-made apps work (it doesn't really work out of the box). Once you get used to it though, it makes it really easy to add new applications and put them live. If I have an idea for a website, I can just click new WordPress instance, point the domain, and done. If I want to also add a forum for that website, I just click new Disqus instance.

  • justin_oaks3y

    I thought this article would go into more than a handful of apps.

    What apps do you think work well for self-hosting, even if it's limited to us tech folk?

    I've self-hosted Grafana and InfluxDB for monitoring and metrics and found them OK to self host. The authentication and TLS setups were the most annoying.

    I've self hosted a few kinds of wiki software, but I eventually settled on a combination of a single Tiddlywiki file and uploading to S3. It works well for most of my own knowledge storage. I even went so far as to write my own plugin to save the Tiddlywiki file to S3, so I can press a button in Tiddlywiki to upload it.
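
    The upload step itself is tiny. A sketch of the idea using boto3 (not my actual plugin, which runs inside Tiddlywiki itself; the bucket and key are hypothetical):

        import boto3

        def upload_wiki(path="wiki.html"):
            s3 = boto3.client("s3")
            with open(path, "rb") as f:
                s3.put_object(
                    Bucket="my-tiddlywiki-bucket",  # hypothetical bucket
                    Key="index.html",
                    Body=f.read(),
                    ContentType="text/html",  # so browsers render it in place
                )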

    I have a self-hosted docker registry, which is just the reference repository provided by Docker. It has required almost no maintenance since I set it up.

    [Edit: for clarity]

    • nitnelave3y

      I got fed up installing OpenLDAP for user management, so I made LLDAP, targeting the Goldilocks zone of the article: simple to set up and manage, but powerful enough for most self-hosting needs.

      • justin_oaks3y

        Awesome! Next time I'm looking to set up some user management stuff, I'll have to try it out. I especially appreciate the sample configurations you give for each service you are trying to integrate with.

      • bityard3y

        Thank you for LLDAP, I'm using it as we speak for my self-hosted stuff.

      • navigate83103y

        Can something like this be used to host phone books for IP PBX?

    • spmurrayzzz3y

      > What apps do you think work well for self-hosting, even if it limited to us tech folk?

      At least once per month I check out https://github.com/awesome-selfhosted/awesome-selfhosted to see what folks have been adding.

      One of my favorites from that list is Focalboard. I used to use a combination of Todoist, Trello, and Notion, but found that moving to FB helped me collapse that all into one tool. The open source and self-hosted aspects were a big bonus, of course.

    • boguscoder3y

      +1 to InfluxDB (I use the older Chronograf instead of Grafana) for home automation/sensor monitoring; even on an RPi Zero, hosting was very easy to start and zero maintenance from there.

    • nickstinemates3y

      A lot of tools get mentioned and resources are available in Reddit's /r/homelab

      • bityard3y

        /r/selfhosted is one of my favorite subreddits

  • dmje3y

    I've been impressed with Yunohost [0]. I only have it set up on an internal box for now, but it works well and is super easy to use. Good for people like me who aren't interested in admin.

    [0] https://yunohost.org/en

    • dvisca3y

      Yunohost is great! I have a small Hetzner VPS set up with Yunohost running Baikal, Nextcloud, Outline Wiki, Bitwarden (or rather Vaultwarden) and Wallabag. I haven't had any major problems since the initial installation about a year ago, although I happily install and uninstall software every now and then, not necessarily ensuring a stable environment.

    • Eumenes3y

      The only time I've seen democratize used correctly in the tech sense

  • cuuupid3y

    Cal.com's issues have less to do with the stack and more with the fact that it just isn't set up for self-hosting. If you try to get it up and running, you'll notice quite a few errors where it tries to hit proprietary code, and it crashes strangely every few hours. It also uses an incredible amount of resources for such a simple service.

  • jrm43y

    Help me out here: I do a lot of this "self-hosting stuff," I have fiber internet.

    And a whole lot of y'all talk about what feels like a TON of overkill. I still don't really get what kubernetes is (20 year Linux user) or the necessity of buying a "server" for this purpose.

    I use a (relatively old) $200 office desktop from someone's surplus (Dell Optiplex 900-something, I don't even remember because it's in the closet), and Ubuntu+Docker handles everything I throw at it. Easy.

    • hermannj3143y

      But Google uses Kubernetes so you have to use it.

      My plumbing can barely scale to handle a dinner party, but apparently my home lab needs to be ready to provision nodes if I ever find 2 billion people in my living room.

    • stefandesu3y

      I agree. I have one of those fairly efficient mini PCs also running Ubuntu+Docker and it's totally fine for my use-case. I don't need 100% uptime (or even 99%) and nightly backups are more than sufficient. I can spin up an equivalent machine in about an hour from backup if necessary. (I could spend some time learning Ansible to automate this further.)

      • jrm43y

        Right? I live in North Florida, meaning "the power will go out 7-10 times a year," so my "uptime" is not important at all, just "the thing in the BIOS that automatically turns the computer on without pressing the button when it receives power"

        I don't need access to the music and TV shows with no TV/stereo to play them on :) READ A BOOK, kids.

    • XCSme3y

      Sometimes I still host things on LAMP servers, there are many apps/websites/platforms that just use PHP/MySQL so I can just upload some files or edit them directly on the server.

  • goplayoutside3y

    Pika Pods[1] deserves a mention here. No affiliation.

    Previously on HN: https://news.ycombinator.com/item?id=31284512

    It's essentially "self hosting as a service". They package popular foss projects in docker containers and run them for you (on fly io, iirc) for a dollar or three per month.

    1. https://www.pikapods.com/

    • keybits3y

      +1 for PikaPods - very good value, fast performance and the developer has a track record of solid service offerings such as https://www.borgbase.com/

  • zwilliamson3y

    I love self hosting. Here’s my home setup.

    Hardware (you don’t need much!)

    Mini atx tower, 8TB usable storage, Debian, AMD processor, 8GB memory

    Pfsense Firewall (Tailscale exit node)

    Plume Wi-Fi (would like to replace, owned by comcast now)

    Solution stack:

    Portainer + Docker Compose to manage everything

    Nextcloud

    Photo Prism

    Tailscale (remote WireGuard based access from all my devices. Integrates well with Pfsense)

    Home Assistant (amazing platform for home automation and more). I love the new voice control features and mission!

    I used to self-host email with the Helm hardware company (not k8s Helm) but they went out of business. Self-hosting email is annoying thanks to the big email providers and their control over the spam filtering world.

    Matrix chat server bridging all the chat interfaces I use. This is managed by an awesome open source Ansible playbook https://github.com/spantaleev/matrix-docker-ansible-deploy

    Pihole

    • hanklazard3y

      That’s a great stack, I have a similar one but also with syncthing.

      It’s really impossible to overstate what a game-changer tailscale has been for my setup. Especially the subnet router feature, allowing me to use all my LAN IP addresses remotely. I feel much better about self-hosting applications when I can keep all my firewall ports completely closed.

    • dicknuckle3y

      Not sure about the email specific issues you may have had, but did you run your outbound through Mailgun or Sendgrid? I've used Mailgun personally and professionally for years, just started using Sendgrid at work and love both.

    • cobertos3y

      Do you find that your Matrix integrations with other services are flaky at all? Or have trouble integrating specific features from other types of chat?

  • rektide3y

    I'm fully on board with the general idea as a target.

    Right now it's for early early adopters. Hosting stuff is still a pain. But we are getting better at hosting stuff! Finding stable patterns, paving the path.

    Hint, it's not doing less, it's not simpler options: it's adopting & making our own industrial scale tooling. https://github.com/onedr0p/home-ops is a great early & still strong demonstration. The up front cost of learning is high, but there's the biggest ecosystem of support you can imagine, and once you recognize the patterns, you can get into flow states, make stuff happen, with extreme leverage far beyond where humanity has ever been.

    Building the empowered individual/operator is happening, and we're using stable, good patterns that will mean the individual isn't so off on their own doing ops - they'll have a lot more accrued human experience at their back. Their running of services isn't as simple to understand from the start, but it goes much, much further and is much more mature & well supported in the long run. It's a major phase change for home-ops: picking tech because it's strong & good, not just easier to get started. And over time I think the on-ramps & the mental models will be ever more visible, and the old cobble-it-together-from-pieces ways will fade.

    There are so many rejectionist principles at play: many realms of back-to-the-land mentality, rejecting modernity & swearing we're better with less, that we free ourselves. Being willing & able to accept help & better starting places will, in my view, certainly take over; there's already a huge scope home-ops folks can handily take on, well beyond what would have been imaginable, and the share-ability keeps growing - we keep amplifying each other better by having these shared core home cloud platforms. I love the modern growth, the embrace of good cloud tech at home.

  • steponlego3y

    As for Google Analytics - who hasn't been blocking that shit for at least a decade? Heck my uMatrix pretty much auto-blocks all telemetry.

  • oaththrowaway3y

    I self host most everything through unRAID. I spent a good amount of money getting a good server setup. The only thing I rely on the cloud for is email.

    I've gone through several iterations of hardware and hard drive capacity over about 7-8 years now. Hard to imagine I'll ever go back.

    It's not even about the monthly subscriptions - I've spent more on hardware, I'm sure, plus my monthly VPN and Usenet fees. It's really about exhaustion with SaaS becoming essentially keyloggers of our entire lives. I guess self-hosting is the closest we have to opting out, but even then it's not enough.

    • AnthonyMouse3y

      Why are you spending a significant amount of money on hardware? A used PC is ~$50, assuming you don't already have one. Spinning rust is ~$10/TB:

      https://www.amazon.com/HGST-Ultrastar-HUH728080ALE604-3-5-In...

      https://www.ebay.com/itm/125797516426

      It can be done for less than $100, done well for less than $300.

      • oaththrowaway3y

        Part of it is that I'm running a gaming VM on it and passing through a GPU to that. Plus I have another GPU for transcoding my media to HEVC. Lots of RAM for all the containers as well.

        I also wrote my own container that I use for all my development so I like to keep it snappy for that.

        I have about 20TB of platter storage (WD Reds) + parity, and 2TB of SSD cache (an additional 2TB SSD passed into the gaming vm)

        It slowly adds up. I started with a Raspberry Pi and a USB hard drive as storage.

  • satvikpendem3y

    I posted about this before, but I would recommend Coolify for self-hosting applications. It's an open-source Heroku alternative with one-click installation of services like Plausible, Nextcloud, etc. It works with Herokuish buildpacks as well as Docker and Docker Compose (with Kubernetes support coming soon).

    I personally use a $5 Hetzner server in Northern Virginia, which works great: cheaper and faster than the equivalent at DigitalOcean.

    https://coolify.io

  • Axsuul3y

    Loving the "just right" labels. How does n8n compare to Activepieces?

    Also those who are looking to dive deeper into self-hosting should join us at /r/selfhosted on Reddit.

  • teekert3y

    A quick plug for this very excellent podcast on self-hosting all the things (except maybe email ;)): [0]

    They always have good suggestions and point to their repos and (example) code. They explore new things, and the podcast gives you an excellent idea of the quality of services. Some things I discovered through them: Paperless-ngx, PhotoPrism, Tailscale, Matrix Dendrite, Hugo, Docker Compose, Traefik, and NixOS. They also talk about my favourite projects a lot (Nextcloud and Home Assistant), which I have been using for years now. Recently they did a nice Jellyfin challenge (they were all on Plex) and I learned a lot.

    [0]: https://podverse.fm/podcast/nUl1ZCL76

  • codingninja3y

    I self-host a huge amount of stuff on top of a custom cloud platform I built, with a Kubernetes cluster deployed as a tenant of my cloud.

    I run a few servers which all have an "ordlet" installed, akin to the kubelet, which configures network namespaces for isolated tenant networking and boots virtual machines. The VMs use an EC2-style metadata server to fetch their boot script, which in this case configures an HA Kubernetes cluster that then uses ArgoCD to fetch all the manifests from my git repo via an AppSet.
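
    For anyone unfamiliar, the EC2-style metadata server is conceptually tiny. An illustrative sketch (not the actual ordiri code; the script store, port, and addresses are placeholders): each VM asks a well-known address for its user data, and the server keys the response off the client's source IP.

      # Illustrative sketch of an EC2-style metadata server (not the real
      # ordiri implementation). VMs fetch their boot script from a
      # well-known address; the response is keyed off the client's IP.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      # Placeholder mapping of tenant VM IPs to cloud-init user data.
      BOOT_SCRIPTS = {
          "10.0.0.10": "#!/bin/sh\nkubeadm join ...\n",
      }

      class MetadataHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # EC2 convention: user data lives at /latest/user-data.
              if self.path == "/latest/user-data":
                  script = BOOT_SCRIPTS.get(self.client_address[0])
                  if script:
                      self.send_response(200)
                      self.send_header("Content-Type", "text/plain")
                      self.end_headers()
                      self.wfile.write(script.encode())
                      return
              self.send_response(404)
              self.end_headers()

      # Real EC2 serves this on link-local 169.254.169.254; reaching that
      # address is what the per-tenant network namespaces provide.
      HTTPServer(("0.0.0.0", 8080), MetadataHandler).serve_forever()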

    It's so incredibly over complicated and over engineered, it's a lot of fun :)

    https://github.com/ordiri/ordiri

  • spiderfarmer3y

    https://mailcoach.app/ is another awesome self-hosted Mailchimp alternative, especially when you’re developing Laravel applications.

  • college_physics3y

    Why isn't this a more widespread business model? DIY is "fun", but streamlining everything for the average household would have a much bigger audience. Offer a Linux distro that is customized to provide a menu of self-hosted services (with as little pain as possible), along with a subscription for support.

    It could be offered as an extra service by the ISP or its partners, or entirely independently.

    • dewey3y

      Because not a lot of people care about this being a thing. Ask anyone outside of HN if they would run Linux; most likely they don't even know what it is.

      There's not much upside to using that over free Gmail/Drive/Photos for anyone except the small group who think they need to own their email and that Google is evil. It's a fair standpoint, but unrealistic for anyone not deep into the topic.

      • college_physics3y

        What you are saying is that the opinion of those most informed about a topic does not matter, because ignorance rules.

        The analogy would be all doctors warning about the risks of smoking in their journals, but because smokers don't read them and don't see the "upside" of quitting, nothing changes.

        • 3y
          [deleted]
  • ThinkBeat3y

    I have tried a lot of different alternatives to Zapier, and in my opinion none of them comes close in terms of productivity.

    With Zapier I could get things running with little fuss, and it kept working for the most part.

    That just means it worked for my needs and the connections I needed to make.

    I have stopped using Zapier because it is far too expensive for my budget.

    Now I make do with a few Perl scripts for what I really need, plus IFTTT.

  • wenbin3y

    reading these comments, it’s clear that cloud based saas is and will continue to be a great business :)

    self hosting is a cool thing if you have the time & expertise to do it right.

    oftentimes things that look easy inside silicon valley / tech bubble are not so straightforward to others. reminds me of the “you can just set up ftp on linux” comment on the dropbox show hn post :)

  • jojobas3y

    I have been hosting my email/Jabber/Matrix from a quiet Atom machine in my living room, and I won't have it any other way.

  • ashayh3y

    The article gives a small number of self-hosted examples. Here's some more: https://github.com/awesome-selfhosted/awesome-selfhosted

  • doublerabbit3y

    The joys and thrills, heart attacks and anger of colocation: I wouldn't want it any other way. I'm proud of my 4U corner of internet space.

    Knowing it's my data, my hardware, and my internet makes me happy. I just wish IPv6 was more of a thing.

  • LelouBil3y

    I currently self-host my password manager and a CalDAV and CardDAV server.

    When I disabled contacts and calendar synchronization on my phone, I couldn't have been happier!

    The rest (photos, cloud storage) will hopefully follow when I have the time.

  • ptman3y

    Self-host in the cloud for free https://paul.totterman.name/posts/free-clouds/

  • fareesh3y

    Is there a feasible way to self-host a Dropbox clone? It doesn't seem possible to match their pricing; I suspect they assume most folks won't use the GB/TB they pay for.

    • cobertos3y

      Seafile works well as a Dropbox clone. Been using it for a couple of years now without much issue. My files are replicated to my main computer; I push files from my phone to the Seafile instance over WebDAV (automatically, with Autosync on Android) and download specific files through the app.

      • fareesh3y

        Yeah I was more interested in cloud backups than syncing

        • cobertos3y

          For that I had to make a custom service that periodically runs rclone. Duplicati does cloud backups, but it has a record of being a bit sus
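
          The core of such a service is tiny. A simplified sketch of the idea (remote name, paths, and interval are placeholders, not my actual config):

            # Periodically mirror local data to a remote with rclone.
            # Remote name, paths, and interval are placeholders.
            import subprocess
            import time

            SOURCE = "/srv/seafile-data"       # local data to back up
            DEST = "b2:my-backup-bucket"       # preconfigured rclone remote
            INTERVAL = 24 * 60 * 60            # once a day

            while True:
                # "rclone sync" makes DEST match SOURCE.
                result = subprocess.run(
                    ["rclone", "sync", SOURCE, DEST],
                    capture_output=True, text=True,
                )
                if result.returncode != 0:
                    print("rclone failed:", result.stderr)  # hook up alerting
                time.sleep(INTERVAL)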

          • fareesh3y

            Is there a server / host that provides storage at a price cheaper than Dropbox though?

            • cobertos3y

              Backblaze B2 works out to the same price as Dropbox for 2TB ($0.005/GB/mo, so about $10/mo for 2TB, vs Dropbox's $10/2TB/mo).
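
              The arithmetic, for anyone checking (rates as quoted; B2 download and API fees excluded):

                # Storage-only cost comparison at the quoted rates.
                B2_RATE = 0.005               # $/GB/month
                DROPBOX_FLAT = 10.00          # $/month for 2 TB
                stored_gb = 2000              # ~2 TB

                b2_monthly = B2_RATE * stored_gb
                print(f"B2 ${b2_monthly:.2f}/mo vs Dropbox ${DROPBOX_FLAT:.2f}/mo")
                # Break-even is right at 2 TB; store less and B2 is cheaper,
                # since you pay only for what you actually use.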

              You can also use rclone to upload to Dropbox as the backup solution. Just wrap it in a crypt remote so that Dropbox can't actually scan your files or leak anything in a security breach.

  • tiffanyh3y

    Does “self-host” also include purchasing (not renting) the server you’re hosting on and colocating it at a data center, or hosting it from your house?

  • cobertos3y

    Is there a good self-hosted Vercel? I _love_ the workflow of Vercel, but I dread the day they change the pricing.

    I know of staticdeploy.io/ but it's no longer active.

  • urbandw311er3y

    I was sad not to see a mention of Sendy in the self-hosted mailing list category. I've been using it for years: affordable, reliable, and well designed.

  • RamblingCTO3y

    Although I self-host some stuff (Forgejo, GoToSocial, a blog), I think it's pretty inefficient when it comes to energy and thus emissions.

  • speleding3y

    As a Calendly alternative I would recommend SuperSaaS[1]. Not open source, but free for small users (<50 appointments) and much more customisable/flexible; you can extend it with your own API calls / webhooks / CSS. (Disclosure: I built it)

    [1] https://www.supersaas.com/

  • jeppester3y

    This is a thing I'm very interested in currently. It seems like the last 10 years of tech innovation (especially cloud) should also have made it much easier to run, and maintain, things on-premises.

    Containers, easy-to-set-up SSL, immutable OSes, reverse proxies.

    Those things, coupled with cheap and power-efficient workstations/NUCs, seem like a very good match, at least in theory.

    Then we have the GDPR, which (also in theory) should be much easier to reason about when you know exactly where your data is; and backups can still easily be stored in the cloud as long as they are encrypted.

    The biggest issue I see is the lack of ECC memory in the machines I mentioned.

    And then there's the fact that this idea goes against the business model of the cloud providers, who have a great deal of control over where we are heading and what we are talking about.

    Still, I can't help but think there's a lot of opportunity in that area, which seems rather untapped so far.

  • gibs0ns3y

    I self-host a lot, including some business-related infrastructure for my home office (some employees also work from here). However, services designed to be accessed by a wider public audience (e.g. websites, email, Nextcloud) are hosted on a rented dedicated server.

    As other comments have pointed out, kids suck up every minute of your life, and when the internet or Plex doesn't work I'll know about it real quick (DAAAAD!). The important question for me is: how fast could I rebuild all of this if all the drives were wiped?

    Here is my home setup:

    Hardware:

    * OPNsense: old i5 desktop, 70GB HDD, 20GB RAM

    * TrueNAS: AMD A10-5800K, 16TB HDD, 1TB SSD, 8GB RAM

    * OpenNebula frontend: random Intel NUC, 128GB SSD, 4GB RAM

    * OpenNebula KVM node: Xeon (model unknown), 4TB HDD, 64GB RAM

    * OpenNebula KVM node: Xeon Silver (model unknown), 2TB SSD, 128GB RAM + GTX 1070 Ti

    * Ubiquiti EdgeSwitch ES-48

    * Ubiquiti UniFi APs

    Software:

    * OPNsense for routing

    * OpenNebula for VMs

    * Nomad for Docker containers

    * Netmaker for WireGuard-based VPN

    Main Apps:

    * Home Assistant VM

    * Plex (+ rTorrent, Sonarr, Radarr, Jackett, Ombi) containers

    * FusionPBX container for VoIP

    * Kasm for remote desktops

    * Rundeck container for various infrastructure automation jobs

    * Zabbix + Grafana containers for monitoring

    * PacketFence VM for NAC

    * Shinobi container for NVR

    * Snapcast container for multi-room audio

    I also have plenty of containers and VMs for various testing apps or dev projects.

    All of my non-critical containers (e.g. Plex) self-update daily in the early morning hours.
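
    Conceptually the nightly update job is just this (a simplified sketch; paths are placeholders, and my real containers run under Nomad rather than plain Compose):

      # Nightly container refresh, run from cron in the early morning.
      # Compose project paths are placeholders.
      import subprocess

      PROJECTS = ["/opt/stacks/plex", "/opt/stacks/sonarr"]

      for path in PROJECTS:
          # Pull newer images, then recreate only containers whose image changed.
          subprocess.run(["docker", "compose", "pull"], cwd=path, check=True)
          subprocess.run(["docker", "compose", "up", "-d"], cwd=path, check=True)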

    For the critical stuff, I have tried to automate updates as much as possible.

    The majority of things can be recreated from Ansible, Docker Compose, or Nomad scripts, all of which are backed up to an offsite Nextcloud instance.

    I use a lot of services on OPNsense, but I think one of the most important for me is the traffic shaper, which allows for bandwidth control.

    I have about 14 VLANs, and am in the process of setting up VXLANs for further isolation.

    I use restic for encrypted backups, stored at Wasabi (wasabisys.com), which has no download fees, unlike Backblaze.

    I will admit there is still one design flaw that I've yet to spend time overcoming: if everything is powered on at the same time, there is a chance some devices won't get an IP because OPNsense is not yet ready. My current "flawless" workaround is to boot OPNsense 30 seconds before everything else.
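
    A less manual option would be to gate each machine's service startup on the router actually answering, something like this sketch (gateway address and port are placeholders; the DHCP client keeps retrying in the background anyway):

      # Boot-time gate: block until the router responds before starting
      # anything that needs the network. Address/port are placeholders.
      import socket
      import time

      GATEWAY = "192.168.1.1"   # placeholder OPNsense address

      def router_up(host, port=53, timeout=2.0):
          # True if we can open a TCP connection to the router's DNS port.
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      while not router_up(GATEWAY):
          time.sleep(5)
      print("gateway reachable, safe to start services")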

    • Axsuul3y

      Thanks for sharing! Why Rundeck when you are already using Nomad? Also, what do you use to self-update your containers within Nomad?

  • unixhero3y

    Self-host on a VPS.

    Use cloudron.io for provisioning.

    Profit.

    • margorczynski3y

      It looks like a proprietary, closed-source solution, so I'm not sure that's such a great idea in the long run.

      • tweetle_beetle3y

        Maybe not in the long run, but it's more reliable than community-created scripts, which may or may not be up to date, migrate data correctly, etc. They all suffer from this, but in my experience it's especially the non-Docker ones, like YunoHost, that seem to be worse.

        Anyway, even if Cloudron goes under, you still have your own data on your own machine if you've set it up like that.

      • unixhero3y

        I've used it for 5 years in production; it is a good idea.

        • oarsinsync3y

          I used Google Reader for 6 years.

          • unixhero3y

            Where did it get you though?

            • oarsinsync3y

              It got me lots of good ideas. Until I gave up following all my RSS feeds when it shut down. Now I get fewer good ideas.

        • cloakedcode3y

          The original post is about self hosting to get away from vendor lock-in. Using something like this defeats the purpose a little.

  • cdeutsch3y

    no

  • freitzkriesler23y

    Self-hosting is great, except it's incredibly frustrating to get a good pipe to your home with decent upload speeds. Even "business class" is downright awful. Thankfully this is slowly changing, but not fast enough!

    Looking to run my own Nextcloud instance soon.

    • lucb1e3y

      It doesn't suit everything, but 10 Mbps can already be plenty for self-hosting. Apparently YouTube's 1080p stream is ~6 Mbps¹. Count on some overhead, but I would say that 10 Mbps upload is enough for most types of content, so long as it's just you and your friends using it. If it's a text blog (with CSS, a site logo, etc., of course), 10 Mbps will easily survive the HN homepage at the #1 position.
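
      The back-of-envelope math (page size is an assumption, pick your own):

        # What a 10 Mbps uplink can serve; figures are illustrative.
        UPLINK_MBPS = 10
        uplink_bytes_per_s = UPLINK_MBPS * 1_000_000 / 8   # 1.25 MB/s

        page_bytes = 500 * 1024   # assume ~500 KB per view, assets included
        views_per_s = uplink_bytes_per_s / page_bytes
        print(f"~{views_per_s:.1f} views/s, ~{views_per_s * 3600:.0f}/hour")
        # ~2.4 views/s, ~8800/hour: plenty for a mostly-text blog,
        # even during an HN-frontpage spike.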

      Perhaps a photography blog, where you don't want to drag the JPEG quality down to "looks fine without zooming" levels, might be more of a struggle. Or if your goal is to share flashable images for a Raspberry Pi or the like (those can easily be gigabytes), then yeah, this is not going to be a good experience even without concurrent users.

      You'll definitely be fine hosting things like:

      - email

      - a website (blog, CV, hobby, link shortening... can be anything) if you don't overload it with huge CSS/JS bundles

      - chat server, such as Matrix or an IRC bouncer

      - live editing notepad like etherpad, cryptpad, codimd

      - software development stuff, like a unit test server or a git server (maybe not if you're the Linux kernel with gigabytes of history), perhaps a build server depending on the size of the binaries (CLI vs GUI)

      - game servers: most real-time games (e.g. shooters) will run fine at low bandwidth if your latency is stable (let alone turn-based games), presuming it's just you and some friends playing; maybe not if you want to provide commercial game hosting services

      - backup server, if you're fine driving home to do restores, especially if you mostly back up while you're at home anyway

      - "client" services like web scraping, e.g. I fetch some game's leaderboards regularly (with permission) and provide statistics for them, and I monitor a river to get notifications in certain cases; these take negligible amounts of bandwidth (a minimal sketch follows after this list)

      - home automation that needs to talk to third-party services or that you want to use outside the house
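
      A minimal sketch of that last monitoring idea (URL, threshold, and alert hook are made-up placeholders, not my actual setup):

        # Poll a page on a schedule; alert when a threshold is crossed.
        import time
        import urllib.request

        URL = "https://example.org/river-level.txt"   # placeholder source
        THRESHOLD = 3.5                                # metres, placeholder

        def fetch_level():
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return float(resp.read().decode().strip())

        while True:
            try:
                if fetch_level() > THRESHOLD:
                    print("alert: river is high")   # swap in mail/XMPP/etc.
            except (OSError, ValueError) as exc:
                print("fetch failed:", exc)         # transient; just retry
            time.sleep(15 * 60)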

      Probably there are more uses to be thought of. I can only say to not let your dreams be dreams :D

      ¹ https://stackoverflow.com/questions/24198739/what-bitrate-is...

    • kornhole3y

      If you have high bandwidth requirements, you can self-host on a VPS from one of the many providers. I personally have a hybrid setup: my storage- and resource-intensive apps, such as Nextcloud, are hosted on a server at home, but services that need high bandwidth, no NAT restrictions, and a different security posture run on a VPS. Yes, I pay a subscription to the VPS provider, but it is relatively small.

    • kefirlife3y

      One option to consider, if you really want to host something, is to get some space in your local transit provider's colocation facility. You get access to considerably more bandwidth without all the complications of making the path to your home sufficiently fat for your purposes. If you want something relatively highly available, power redundancy is important, and in my opinion leaning on existing infrastructure for that is an additional benefit of this approach.

      Setting that up will be a lot more involved than leveraging a cloud service provider, so you need to weigh the costs and benefits for yourself. However, if you want to self-host and want the bandwidth, I think it is a route worth considering.

      • bruce3434343y

        Not to mention the expense!

        • doublerabbit3y

          Colocation is dirt cheap compared to how expensive "the cloud" can become.

    • WXLCKNO3y

      I recently got 1.5-gigabit internet (1.5 Gbps down, 940 Mbps up) and it's been amazing.

      The fact that my desktop PC only has a gigabit card is perfect, because I'm naturally throttled from using the entire pipe. Obviously I could also enforce this in my router (a Dream Machine, which is gigabit-only anyway), but it leaves a lot of room for everything else that's hosted at home, even during peak utilization on my PC.

    • dijit3y

      I guess that depends on where you live; I have almost the same upload and download speed with my ISP, Bahnhof, in Sweden.

      Proof: https://www.speedtest.net/result/14437484691.png

      I am always worried about someone deciding to DDoS me though.

      • lucb1e3y

        > I am always worried about someone deciding to DDoS me though.

        I hosted a Tor exit node and other questionable stuff as a teenager, going from 1 Mbps upload then to 50 Mbps today. The site has been on the HN homepage and sometimes gets featured on news sites like ZDNet (one such article gave me clicks for years on end). I also run a file-sharing service where anyone can post literally anything, but the links are only valid for one day. It has definitely hosted links to phishing and malware in the past (and I combat that when I see it, e.g. by replacing the short link with an info page saying "this was a phishing page" plus details).

        In ~15 years, I never noticed anyone trying to take down the site. But your sentiment keeps being echoed in places like r/selfhosted, and it moves people to put their services behind some traffic-inspection service, reducing the decentralized web to a few places that all traffic passes through (often with decryption keys made available to them). It's still good to self-host even if you do that, but I feel a bit conflicted about it and wouldn't do it myself.

      • toast03y

        DDoS risk seems to be related to the type of services you're hosting. Openly available game services or adult media seem to attract DDoS, and then you need a good relationship with your upstream. If you're just hosting personal things, you're not likely to get DDoSed, except by people hitting random IPs, which could get you anyway.

        If it happens, there's not much you can do other than move to real hosting, and let them know upfront, or they'll drop you quickly. Note that the first line of DDoS defense at a real host is going to be null-routing your IP: dropping traffic to that IP, preferably on their upstreams' routers. That's normal and OK, although frustrating for you; doing better has costs.

      • Nextgrid3y

        To be fair, they can do that just fine regardless of whether you’re running externally-available services. Most untargeted, low-effort DDoS relies on filling up all your bandwidth with spam traffic, not exploiting some layer-7 vulnerability in an application you host.

        • charcircuit3y

          Who does untargeted layer 3/4 DDoS? Why would an attacker waste money booting a website that gets zero visitors?

          • dijit3y

            By "untargetted" the parent almost certainly means "not tailored to take down the site".

            Some people do the smart thing and find a page that is heavy for your server to process, and hit it a lot.

            Some people decide they don't like your service (or you) and flood you.

  • bernhardjamil3y

    [dead]

  • openpaycard3y

    [flagged]