I share the author's sentiment completely. At my day job, I manage multiple Kubernetes clusters running dozens of microservices with relative ease. However, for my hobby projects—which generate no revenue and thus have minimal budgets—I find myself in a frustrating position: desperately wanting to use Kubernetes but unable to due to its resource requirements. Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.
This limitation creates numerous headaches. Instead of Deployments, I'm stuck with manual docker compose up/down commands over SSH. Rather than using Ingress, I have to rely on Traefik's container discovery functionality. Recently, I even wrote a small script to manage crontab idempotently because I can't use CronJobs. I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.
> What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.
Depending on how much of the Kube API you need, Podman is that. It can generate containers and pods from Kubernetes manifests [0]. Kind of works like docker compose but with Kubernetes manifests.
This even works with systemd units [1], similar to how it's outlined in the article.
Podman also supports most (all?) of the Docker API, so docker compose works too, and you can also connect to remote sockets over SSH etc. to do things.
[0] https://docs.podman.io/en/latest/markdown/podman-kube-play.1...
[1] https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
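If you haven't seen it, the workflow is pleasantly boring. A minimal sketch with a recent Podman (older versions spell it "podman play kube"; manifest name and image are placeholders):

    # web.yaml - a plain Kubernetes Pod manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: web
          image: docker.io/library/nginx:alpine
          ports:
            - containerPort: 80

    # run it with Podman, tear it down again later
    podman kube play web.yaml
    podman kube down web.yaml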
The docs don't make it clear: can it do "zero downtime" deployments? Meaning it first creates the new pod, waits for it to be healthy using the defined health checks, and only then removes the old one? Somehow integrating this with a service/ingress/whatever so network traffic only goes to the healthy one?
I can't speak to its capabilities, but I feel like I have to ask: for what conceivable reason would you even want that extra error potential with migrations etc.?
It means you're forced to make everything always compatible between versions etc.
For a deployment that isn't even making money and is running on a single node droplet with basically no performance... Why?
> I can't speak to its capabilities, but I feel like I have to ask: for what conceivable reason would you even want that extra error potential with migrations etc.?
It's the default behavior of a Kubernetes Deployment, which is what we're comparing against.
> It means you're forced to make everything always compatible between versions etc.
For stateless services, not at all. The outside world just keeps talking to the previous version while the new version is starting up. For stateful services, it depends. Often there are software changes without changes to the schema.
> For a deployment that isn't even making money
I don't like looking at 504 gateway errors
> and is running on a single node droplet with basically no performance
I'm running this stuff on a server in my home, it has plenty of performance. Still don't want to waste it on kubernetes overhead, though. But even for a droplet, running the same application 2x isn't usually a big ask.
GP talks about personal websites on 1 vCPU; there's no point in zero downtime then. Apples to oranges.
Zero downtime doesn't mean redundancy here. It means that no request gets lost or interrupted due to a container upgrade.
The new container spins up while the old container is still answering requests and only when the new container is running and all requests to the old container are done, then the old container gets discarded.
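For reference, that behavior is what a Kubernetes Deployment's rolling update settings encode; a rough sketch (name, image, and probe path are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # keep the old pod serving until the new one is ready
          maxSurge: 1         # start the new pod alongside it
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
            - name: web
              image: registry.example.com/web:v2   # placeholder
              readinessProbe:                      # traffic only shifts once this passes
                httpGet: {path: /healthz, port: 8080}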
You can use Firecracker!
Have you seen k0s or k3s? Lots of stories about folks using these to great success on a tiny scale, e.g. https://news.ycombinator.com/item?id=43593269
I tried k3s, but even on an immutable system, dealing with charts and all the other Kubernetes stuff adds a new layer of mutability, and hence maintenance, updates, and manual management steps that only really make sense on a cluster, not a single server.
If you're planning to eventually move to a cluster or you're trying to learn k8s, maybe, but if you're just hosting a single node project it's a massive effort, just because that's not what k8s is for.
I use k3s. With more than one master node, it's still a resource hog, and when one master node goes down, all of them tend to follow. 2GB of RAM is not enough, especially if you also use Longhorn for distributed storage. A single master node is fine and I haven't had it crash on me yet. In terms of scale, I'm able to use Raspberry Pis and such as agents, so I only have to rent a single €4/month VPS.
I'm laughing because I clicked your link thinking I agreed and had posted similar things and it's my comment.
Still on k3s, still love it.
My cluster is currently hosting 94 pods across 55 deployments. Using 500m CPU (half a core) on average, spiking to 3 cores under moderate load, and 25GB of RAM. Biggest RAM hog is Jellyfin (which appears to have a slow leak and gets restarted when it hits 16GB, although it's currently streaming to 5 family members).
The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.
The nodes are bare Arch linux installs - which are wonderfully slim, easy to configure, and light on resources.
It burns 450 watts on average, which is higher than I'd like, but mostly because I have Jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old Nvidia 1060 and a 2080.
Everything is plain old yaml, I explicitly avoid absolutely anything more complicated (including things like helm and kustomize - with very few exceptions) and it's... wonderful.
It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple, and spinning up a new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml) which are just copied and edited each time.
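Roughly what those three files look like, as a sketch (names, image, and host are placeholders):

    # 0-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: myservice

    # 1-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myservice
      namespace: myservice
    spec:
      replicas: 1
      selector:
        matchLabels: {app: myservice}
      template:
        metadata:
          labels: {app: myservice}
        spec:
          containers:
            - name: myservice
              image: ghcr.io/example/myservice:latest   # placeholder
              ports:
                - containerPort: 8080

    # 2-ingress.yaml (Service + Ingress in one file)
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
      namespace: myservice
    spec:
      selector: {app: myservice}
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myservice
      namespace: myservice
    spec:
      rules:
        - host: myservice.example.com   # placeholder
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myservice
                    port:
                      number: 80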
Any three machines can go down and the cluster stays up (metalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take the configured IP). Sometimes services take a minute to reallocate (and jellyfin gets priority over willow if I lose a gpu, and can also deploy with cpu-only transcoding as a fallback), and I haven't tried to be clever getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses, they can also afford to be down an hour or two a year.
Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.
I've streamed video from it as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects people default to VPS's on a cloud. It's a ripoff. Put an old laptop in your basement - faster machine for free. At GCP prices... I have 30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power, and a network connection I already have to have, and it's replaced hundreds of dollars/month in subscription costs.
For personal use-cases... basement cloud is where it's at.
> It burns 450 watts on average
To put that into perspective, that's more than my entire household including my server that has an old GPU in it
Water heating is electric yet we still don't use 450W×year≈4MWh of electricity. In winter we just about reach that as a daily average (as a household) because we need resistive heating to supplement the gas system. Constantly 450W is a huge amount of energy for flipping some toggles at home with voice control and streaming video files
That's also only four and a half incandescent lightbulbs. Not enough to heat your house ;)
Remember that modern heating and hot water systems have a >1 COP, meaning basically they provide more heat than the input power. Air-source heat pumps can have a COP of 2-4, and ground source can have 4-5, meaning you can get around 1800W of heat out of that 450W of power. That's ignoring places like Iceland where geothermal heat can give you effectively free heat. Ditto for water heating, 2-4.5 COP.
Modern construction techniques, including super-insulated walls, tight building envelopes, and heat exchangers, can dramatically reduce heating and cooling loads.
Just saying it's not as outrageous as it might seem.
> Remember that modern heating and hot water systems have a >1 COP, meaning basically they provide more heat than the input power.
Oh for sure! Otherwise we'd be heating our homes directly with electricity.
Thanks for putting concrete numbers on it!
And yet it's far more economical for me than paying for streaming services. A single $30/m bill vs nearly $100/m saved after ditching all the streaming services. And that's not counting the other saas products it replaced... just streaming.
Additionally - it's actually not that hard to put this entire load on solar.
4x350watt panels, 1 small inverter/mppt charger combo and a 12v/24v battery or two will do you just fine in the under $1k range. Higher up front cost - but if power is super expensive it's a one time expense that will last a decade or two, and you get to feel all nice and eco-conscious at the same time.
Or you can just not run the GPUs, in which case my usage falls back to ~100W. You can drive it lower still, but it's just not worth my time. It's only barely worth thinking about at 450W for me.
I'm not saying it should be cheaper to run this elsewhere, I'm saying that this is a super high power draw for the utility it provides
My own server doesn't run voice recognition so I can't speak to that (I can only opine that it can't be worth a constant draw of 430W to get rid of hardware switches and buttons), but my server also does streaming video and replaces SaaS services, so similar to what you mention, at around 20W
Found the European :) With power as cheap as it is in the US, some of us just haven't had to worry about this as much as we maybe should. My rack is currently pulling 800W and is mostly idle. I have a couple projects in the works to bring this down, but I really like mucking around with old enterprise gear and that stuff is very power hungry.
Dell R720 - 125W
Primary NAS - 175W
Friend's Backup NAS - 100W
Old i5 Home Server - 100W
Cisco 2921 VoIP router - 80W
Brocade 10G switch - 120W
Various other old telecom gear - 100W
I care about the cost far less than the environmental impact. I guess that's also a European tell?
Perhaps. Many people in America also claim to care about the environmental impact of a number of things. I think many more people care performatively than transformatively. Personally, I don't worry too much about it. It feels like a lost cause and my personal impact is likely negligible in the end.
Then offsetting that cost to a cloud provider isn't any better.
450W just isn't that much power as far as "environmental costs" go. It's also super trivial to put on solar (actually my current project - although I had to scale the solar system way up to make ROI make sense because power is cheap in my region). But seriously, panels are cheap, LFP batteries are cheap, inverters/mppts are cheap. Even in my region with the cheap power, moving my house to solar has returns in the <15 years range.
> Then offsetting that cost to a cloud provider isn't any better.
Nobody made that claim
> 450W just isn't that much power as far as "environmental costs" go
It's a quarter of one's fair share per the philosophy of https://en.wikipedia.org/wiki/2000-watt_society
If you provide for yourself (e.g. run your IT farm on solar), by all means, make use of it and enjoy it. Or if the consumption serves others by doing wind forecasts for battery operators or hosts geographic data that rescue workers use in remote places or whatnot: of course, continue to do these things. In general though, most people's home IT will fulfil mostly their own needs (controlling the lights from a GPU-based voice assistant). The USA and western Europe have similarly rich lifestyles but one has a more than twice as great impact on other people's environment for some reason (as measured by CO2-equivalents per capita). We can choose for ourselves what role we want to play, but we should at least be aware that our choices make a difference
> My rack is currently pulling 800W and _is mostly idle_.
Emphasis mine. I have a rack that draws 200w continuously and I don't feel great about it, even though I have 4.8kW of panels to offset it.
It absolutely is. Americans dgaf; they're driving gas guzzlers on subsidized gas and cry when it comes close to half the cost of normal countries.
In America, taxes account for about a fifth of the price of a unit of gas. In Europe, it varies around half.
The remaining difference in cost is boosted by the cost of ethanol, which is much cheaper in the US due to abundance of feedstock and heavy subsidies on ethanol production.
The petrol and diesel themselves account for a relatively small fraction on both continents. The "normal" prices in Europe aren't reflective of the cost of the fossil fuel itself. In point of fact, countries in Europe often have lower tax rates on diesel, despite it being generally worse for the environment.
Good ol 'murica bad' strawmen.
Americans drive larger vehicles because our politicians stupidly decided mandating fuel economy standards was better than a carbon tax. The standards are much laxer for larger vehicles. As a result, our vehicles are huge.
Also, Americans have to drive much further distances than Europeans, both in and between cities. Thus gas prices that would be cheap to you are expensive to them.
Things are the way they are because basic geography, population density, and automotive industry captured regulatory and zoning interests. You really can't blame the average American for this; they're merely responding to perverse incentives.
How is this in any way relevant to what I said? You're just making excuses, but that doesn't change the fact that americans don't give a fuck about the climate, and they objectively pollute far more than those in normal countries.
If you can't see how what I said was relevant, perhaps you should work on your reading comprehension. At least half of Americans do care about the climate and the other half would gladly buy small trucks (for example) if those were available.
It's lazy to dunk on America as a whole, go look at the list of countries that have met their climate commitments and you'll see it's a pretty small list. Germany reopening coal production was not on my bingo card.
I run a similar number of services on a very different setup. Administratively it's not idempotent, but Proxmox is a delight to work with. I have 4 nodes, with a 24-core 14900K as the workhorse. It runs a Windows server with an RDP terminal (so multiple users can access Windows over RDP from literally any device), Jellyfin, several Linux VMs, and a pi-hole cluster (3 replicas), just to name a few services. I have vGPU passthrough working (granted, this bit is a little clunky).
It is not as fancy/reliable/reproducible as k3s, but with a bunch of manual backups and a ZFS (or BTRFS) storage cluster (managed by a virtualized TrueNAS instance), you can get away with it. Anytime a disk fails, just replace and resilver it and you’re good. You could configure certain VMs for HA (high availability) where they will be replicated to other nodes that can take over in the event of a failure.
Also I’ve got tailscale and pi-hole running as LXC containers. Tailscale makes the entire setup accessible remotely.
It’s a different paradigm that also just works once it’s set up properly.
I have a question if you don't mind answering. If I understand correctly, MetalLB in Layer 2 mode essentially fills the same role as something like Keepalived would, but without VRRP.
So, can you use it to give your whole cluster _one_ external IP that makes it accessible from the outside, regardless of whether any node is down?
Imo this part is what can be confusing to beginners in self hosted setups. It would be easy and convenient if they could just point DNS records of their domain to a single IP for the cluster and do all the rest from within K3s.
Yes. I have configured metalLB with a range of IP addresses on my local LAN outside the range distributed by my DHCP server.
Ex - DHCP owns 10.0.0.2-10.0.0.200, metalLB is assigned 10.0.0.201-10.0.0.250.
When a service requests a loadbalancer, MetalLB spins up a service on any given node, then uses ARP to announce to my LAN that that node's MAC address is now that loadbalancer's IP. Internal traffic intended for that IP will now resolve to the node's MAC address at the link layer, and get routed appropriately.
If that node goes down, metalLB will spin up again on a remaining node, and announce again with that node's mac address instead, and traffic will cut over.
It's not instant, so you're going to drop traffic for a couple seconds, but it's very quick, all things considered.
It also means that from the point of view of my networking - I can assign a single IP address as my "service" and not care at all which node is running it. Ex - if I want to expose a service publicly, I can port forward from my router to the configured metalLB loadbalancer IP, and things just work - regardless of which nodes are actually up.
---
Note - this whole thing works with external IPs as well, assuming you want to pay for them from your provider, or IPV6 addresses. But I'm cheap and I don't pay for them because it requires getting a much more expensive business line than I currently use. Functionally - I mostly just forward 80/443 to an internal IP and call it done.
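For reference, the MetalLB side of that is just two small objects in recent versions (older releases used a ConfigMap); a sketch using the address range from the example above:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: lan-pool
      namespace: metallb-system
    spec:
      addresses:
        - 10.0.0.201-10.0.0.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: lan-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - lan-pool

Any Service of type LoadBalancer then gets an address from that pool and is announced over ARP exactly as described.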
Thank you so much for the detailed explanation!
That sounds so interesting and useful that you've convinced me to try it out :)
450W is ~£100 monthly. It's a luxury budget to host hobby stuff in a cloud.
It’s $30 in my part of the US. Less of a luxury.
We used to pay AU$30 for the entire house, which included everything except cooking, and that did include a 10-year-old 1RU rack-mount server. Electricity isn't particularly cheap here.
How do you deal with persistent volumes for configuration, state, etc? That’s the bit that has kept me away from k3s (I’m running Proxmox and LXC for low overhead but easy state management and backups).
Longhorn.io is great.
Yeah, but you have to have some actual storage for it, and that may not be feasible across all nodes in the right amounts.
Also, replicated volumes are great for configuration, but "big" volume data typically lives on a NAS or similar, and you do need to get stuff off the replicated volumes for backup, so things like replicated block storage do need to expose a normal filesystem interface as well (tacking on an SMB container to a volume just to be able to back it up is just weird).
Sure - none of that changes that longhorn.io is great.
I run both an external NAS as an NFS service and longhorn. I'd probably just use longhorn at this point, if I were doing it over again. My nodes have plenty of sata capacity, and any new storage is going into them for longhorn at this point.
I back up to an external provider (backblaze/wasabi/s3/etc). I'm usually paying less than a dollar a month for backups, but I'm also fairly judicious in what I back up.
Yes - it's a little weird to spin up a container to read the disk of a longhorn volume at first, but most times you can just use the longhorn dashboard to manage volume snapshots and backup scheduling as needed. Ex - if you're not actually trying to pull content off the disk, you don't ever need to do it.
If you are trying to pull content off the volume, I keep a tiny ssh/scp container & deployment hanging around, and I just add the target volume real fast, spin it up, read the content I need (or more often scp it to my desktop/laptop) and then remove it.
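Something like this throwaway pod is usually all it takes (a sketch; the claim name is hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-peek
    spec:
      containers:
        - name: shell
          image: docker.io/library/alpine:3
          command: ["sleep", "86400"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-longhorn-pvc   # hypothetical claim name

    # copy things off, then delete the pod again
    kubectl cp volume-peek:/data ./data-backup
    kubectl delete pod volume-peek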
Do you have documentation somewhere that you can share?
I do things somewhat similarly but still rely on Helm/Kustomize/ArgoCD as it's what I know best. I don't have documentation to offer, but I do have all of it publicly at https://gitlab.com/lama-corp/infra/infrastructure It's probably a bit more involved than the OP's setup as I operate my own AS, but hopefully you'll find some interesting things in there.
You should look into Flux CD; it makes a lot of this even simpler.
"Basement Cloud" sounds like either a dank cannabis strain, or an alternative British rock emo grunge post-hardcore song. As in "My basement cloud runs k420s, dude."
https://www.youtube.com/watch?v=K-HzQEgj-nU
Or microk8s. I'm curious what it is about k8s that is sucking up all these resources. Surely the control plane is mostly idle when you aren't doing things with it?
There are 3 components to "the control plane", and realistically only one of them is what you meant by idle. The node-local kubelet (which reports in the state of affairs and asks if there is any work) is a constantly active thing, as one would expect from such a polling setup. etcd, or its replacement, is constantly(?) firing off watch or reconciliation notifications based on the inputs from the aforementioned kubelet updates. Only the actual kube-apiserver is conceptually idle, as I'm not aware of any compute that it does itself except in response to requests made of it.
Put another way, in my experience running clusters, $(ps auwx) or its $(top) friend always shows etcd or sqlite generating all of the "WHAT are you doing?!", and those also represent the actual risk to running kubernetes, since the apiserver is mostly stateless[1]
1: but holy cow watch out for mTLS because cert expiry will ruin your day across all of the components
I've noticed that etcd seems to do an awful lot of disk writes, even on an "idle" cluster. Nothing is changing. What is it actually doing with all those writes?
Almost certainly it's the propagation of the kubelet checkins rippling through etcd's accounting system[1]. Every time these discussions come up I'm always left wondering "I wonder if Valkey would behave the same?" or Consul (back when it was sanely licensed). But I am now convinced after 31 releases that the pluggable KV ship has sailed and they're just not interested. I, similarly, am not yet curious enough to pull a k0s and fork it just to find out
1: related, if you haven't ever tried to run a cluster bigger than about 450 Nodes that's actually the whole reason kube-apiserver --etcd-servers-overrides exists because the torrent of Node status updates will knock over the primary etcd so one has to offload /events into its own etcd
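For anyone curious, that flag looks roughly like this (hostnames are placeholders; the override format is group/resource#servers):

    # route core-group Events to a dedicated etcd so Node status churn
    # can't take the primary datastore down with it
    kube-apiserver \
      --etcd-servers=https://etcd-main:2379 \
      --etcd-servers-overrides=/events#https://etcd-events:2379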
How hard is it to host a Postgres server on one node and access it from another?
I deployed CNPG (https://cloudnative-pg.io/ ) on my basement k3s cluster, and was very impressed with how easy I could host a PG instance for a service outside the cluster, as well as good practices to host DB clusters inside the cluster.
Oh, and it handles replication, failover, backups, and a litany of other useful features to make running a stateful database, like postgres, work reliably in a cluster.
It’s Kubernetes, out of the box.
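A minimal cluster definition is pleasantly small; a sketch (name and size are placeholders):

    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 2     # primary plus one replica
      storage:
        size: 10Gi

The operator then creates the read-write and read-only Services (pg-main-rw and friends) that workloads inside or outside the cluster can point at.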
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM
I hate sounding like an Oracle shill, but Oracle Cloud's Free Tier is hands-down the most generous. It can support running quite a bit, including a small k8s cluster[1]. Their managed k8s control plane is also free.
They'll give you 4 x ARM64 cores and 24GB of ram for free. You can split this into 1-4 nodes, depending on what you want.
[1] https://www.oracle.com/cloud/free/
One thing to watch out for is that you pick your "home region" when you create your account. This cannot be changed later, and your "Always Free" instances can only be created in your home region (the non-free tier doesn't have that restriction).
So choose your home region carefully. Also, note that some regions have multiple availability domains (OCI-speak for availability zones) but some only have one AD. Though if you're only running one free instance then ADs don't really matter.
A bit of a nitpick. You get monthly credit for 4c/24gb on ARM, no matter the region. So even if you chose your home region poorly, you can run those instances in any region and only be on the hook for the disk cost. I found this all out the hard way, so I'm paying $2/month to oracle for my disks.
I don't know the details but I know I made this mistake, and I still have my Free Tier instances hosted in a different region than my home. It's charged me about $1 for a month already, so I'm pretty sure it's working.
The catch is: no commercial usage, and half the time you try to spin up an instance it'll tell you there's no room left.
That limitation (spinning up an instance) only exists if you don't put a payment card in. If you put a payment card in, it goes away immediately. You don't have to actually pay anything, you can provision the always free resources, but obviously in this regard you have to ensure that you don't accidentally provision something with cost. I used terraform to make my little kube cluster on there and have not had a cost event at all in over 1.5 years. I think at one point I accidentally provisioned a volume or something and it cost me like one cent.
> no commercial usage
I think that's if you are literally on their free tier, vs. having a billable account which doesn't accumulate enough charges to be billed.
Similar to the sibling comment - you add a credit card and set yourself up to be billed (which removes you from the "free tier"), but you are still granted the resources monthly for free. If you exceed your allocation, they bill the difference.
Honestly I’m surprised they even let you provision the resources without a payment card. Seems ripe for abuse
A credit card is required for sign up but it won't be set up as a billing card until you add it. One curious thing they do, though: the free trial is the only entry point for creating a new cloud account. You can't become a nonfree customer from the get go. This is weird because their free trial signup is horrible. The free trial is in very high demand, so understandably they refuse a lot of accounts which they would probably like as nonfree customers.
I would presume account sign up is a loss leader in order to get ~spam~ marketing leads, and that they don't accept mailinator domains
They also, like many other cloud providers, need a real physical payment card. No privacy.com stuff. No virtual cards. Of course they don’t tell you this outright, because obscurity fraud blah blah blah, but if you try to use any type of virtual card it’s gonna get rejected. And if your naïve ass thought you could pay with the virtual card you’ll get a nice lesson in how cloud providers deal with fraud. They’ll never tell you that virtual cards aren’t allowed, because something something fraud, your payment will just mysteriously fail and you’ll get no guidance as to what went wrong and you have to basically guess it out.
This is basically any cloud provider by the way, not specific to Oracle. Ran into this with GCP recently. Insane experience. Pay with card. Get payment rejected by fraud team after several months of successful same amount payments on the same card and they won’t tell what the problem is. They ask for verification. Provide all sorts of verification. On the sixth attempt, send a picture of a physical card and all holds removed immediately
It’s such a perfect microcosm capturing of dealing with megacorps today. During that whole ordeal it was painfully obvious that the fraud team on the other side were telling me to recite the correct incantation to pass their filters, but they weren’t allowed to tell me what the incantation was. Only the signals they sent me and some educated guesswork were able to get me over the hurdle
> send a picture of a physical card and all holds removed immediately
So you're saying there's a chance to use a prepaid card if you can copy its digits onto a real-looking plastic card? Lol
Unironically yes. The (real) physical card I provided was a very cheap looking one. They didn’t seem to care much about its look but rather the physicality of it
I'm using AWS with virtual debit cards all right. Revolut cards work fine for me. What may also be a differentiator: the phone number used for registration is also registered to an account that already has an established track record and has a physical card for payments. (just guessing)
>No privacy.com stuff. No virtual cards.
I used a privacy.com Mastercard linked to my bank account for Oracle's payment method to upgrade to PAYG. It may have changed, this was a few months ago. Set limit to 100, they charged and reverted $100.
There are tons of horror stories about OCI's free tier (check r/oraclecloud on reddit, tl;dr: your account may get terminated at any moment and you will lose access to all data with no recovery options). I wouldn't suggest putting anything serious on it.
They will not even bother sending you an email explaining why, and you will not be able to ask it, because the system will just say your password is incorrect when you try to login or reset it.
If you are on the free tier, they have nothing to lose, only you, so be particularly mindful of making a calendar note to change your CC before expiration and things like that.
It’s worth paying for another company just for the peace of mind of knowing they will try to persuade you to pay before deleting your data.
Are all of those stories related to people who use it without putting any payment card in? I’ve been happily siphoning Larry Ellisons jet fuel pennies for a good year and a half now and have none of these issues because I put a payment card in
Be careful about putting a payment card in too.
https://news.ycombinator.com/item?id=42902190
which links to:
https://news.ycombinator.com/item?id=29514359 & https://news.ycombinator.com/item?id=33202371
Good call out. I used the machines defined here and have never had any sort of issue like those links describe: https://github.com/jpetazzo/ampernetacle
Nope, my payment method was already entered.
IME, the vast majority of those horror stories end up being from people who stay in the "trial" tier and don't sign up for pay-as-you-go (one extra, easy step), and Oracle's ToS make it clear that trial accounts and resources can and do get terminated at any time. And at least some of those people admitted, with some prodding, that they were also trying to do torrents or VPNs to get around geographical restrictions.
But yes, you should always have good backups and a plan B with any hosting/cloud provider you choose.
Can confirm (old comment of mine saying the same https://news.ycombinator.com/item?id=43215430)
I recently wrote a guide on how to create a free 3-node cluster in Oracle cloud: https://macgain.net/posts/free-k8-cluster . This guide currently uses kubeadm to create a 3-node (1 control plane, 2 worker nodes) cluster.
Just do it like the olden days, use ansible or similar.
I have a couple of dedicated servers I fully manage with ansible. It's docker compose on steroids. Use traefik and labeling to handle the reverse proxy and TLS certs in a generic way, with authelia as a simple auth provider. There are a lot of example projects on github.
A weekend of setup and you have a pretty easy-to-manage system.
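A minimal sketch of one service in that style, using the community.docker collection (names, image, host, and the resolver name are placeholders):

    # roles/myapp/tasks/main.yml
    - name: Run myapp behind traefik
      community.docker.docker_container:
        name: myapp
        image: ghcr.io/example/myapp:latest      # placeholder
        restart_policy: unless-stopped
        networks:
          - name: web                            # the shared network traefik watches
        labels:
          traefik.enable: "true"
          traefik.http.routers.myapp.rule: "Host(`myapp.example.com`)"
          traefik.http.routers.myapp.tls.certresolver: "letsencrypt"   # resolver defined in traefik's own config

Re-running the playbook is effectively idempotent; the module only recreates the container when its definition changes.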
What is the advantage of traefik over old-school Nginx?
Traefik has some nice labeling for docker that allows you to colocate your reverse proxy config with your container definition. It's slightly more convenient than NGINX for that use case with compose. It effectively saves you a dedicated virtualhost conf by setting some labels.
One can read more here: https://doc.traefik.io/traefik/routing/providers/docker/
This obviously has some limits and becomes significantly less useful when one requires more complex proxy rules.
Basically what c0balt said.
It's zero config and super easy to set everything up. Just run the traefik image, and add docker labels to your other containers. Traefik inspects the labels and configures reverse proxy for each. It even handles generating TLS certs for you using letsencrypt or zerossl.
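The whole traefik side fits in a few lines of compose; a sketch (email, domain, and version tag are placeholders):

    services:
      traefik:
        image: traefik:v3.1
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.websecure.address=:443
          - --certificatesresolvers.letsencrypt.acme.email=you@example.com
          - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
          - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - ./letsencrypt:/letsencrypt

      whoami:
        image: traefik/whoami
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls.certresolver=letsencrypt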
I thought this context was outside of Docker, because they used ansible as a docker compose alternative. But maybe I misunderstood.
Ah yeah, I guess I wasn't clear. I meant use ansible w/ the docker_container module. It's essentially docker compose - I believe they both use docker.py.
Ah yes, makes much more sense.
I created a script that reads compose annotations and creates config for cloudflare tunnel and zero trust apps. Allows me to reach my services on any device without VPN and without exposing them on the internet.
There's very little advantage IMO. I've used both. I always end up back at Nginx. Traefik was just another configuration layer that got in the way of things.
Traefik is waaay simpler - 0 config, just use docker container labels. There is absolutely no reason to use nginx these days.
I should know, as I spent years building and maintaining a production ingress controller for nginx at scale, and I'd choose Traefik every day over that.
> I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
But you've already said yourself that the cost of using K8s is too high. In one sense, you're solving those solutions more efficiently, it just depends on the axis you use to measure things.
The original statement is ambiguous. I read it as "problems that k8s already solves -- but k8s is less efficient, so can't be used".
That picture with the almost-empty truck seems to be the situation that he describes. He wants the 18 wheeler truck, but it is too expensive for just a suitcase.
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.
That's more than what I'm paying at Hetzner, for far fewer resources. I'm paying about $8 a month for 4 vCPUs and 8GB of RAM: https://www.hetzner.com/cloud
Note that the really affordable ARM servers are German only, so if you're in the US you'll have to deal with higher latency to save that money, but I think it's worth it.
I recently set up an arm64 VPS at netcup: https://www.netcup.com/en/server/arm-server Got it with no location fee (and 2x storage) during the easter sale but normally US is the cheapest.
That's pretty cheap. I have 4 vCPUs, 8GB RAM, 80GB disk, and 20TB traffic for €6. NetCup looks like it has 6VCPU, 8GB RAM, 256 GB, and what looks like maybe unlimited traffic for €5.26. That's really good. And it's in the US, where I am, so SSH would be less painful. I'll have to think about possibly switching. Thanks for the heads up.
Thank you for sharing this. Do you have a referral link we can use to give you a little credit for informing us?
Sure, if you still want it: https://hetzner.cloud/?ref=WwByfoEfJJdv
I guess it gives you 20 euros in credit, too. That's nice.
I've been using Docker swarm for internal & lightweight production workloads for 5+ years with zero issues. FD: it's a single node cluster on a reasonably powerful machine, but if anything, it's over-specced for what it does.
Which I guess makes it more than good enough for hobby stuff - I'm playing with a multi-node cluster in my homelab and it's also working fine.
I think Docker Swarm makes a lot of sense for situations where K8s is too heavyweight. "Heavyweight" either in resource consumption, or just being too complex for a simple use case.
The only problem is Docker Swarm is essentially abandonware after Docker was acquired by Mirantis in 2019. Core features still work but there is a ton of open issues and PRs which are ignored. It's fine if it works but no one cares if you found a bug or have ideas on how to improve something, even worse if you want to contribute.
Yep it's unfortunate, "it works for me" until it doesn't.
OTOH it's not a moving target. Docker historically has been quite infamous for that; we were talking about half-lives for features, as if they were unstable isotopes. It took initiatives like OCI to get things to settle.
K8s tries to solve the most complex problems, at the expense of leaving simple things stranded. If we had something like OCI for clustering, it would most likely take the same shape.
Podman is a fairly nice bridge. If you are familiar with Kubernetes yaml, it is relatively easy to do docker-compose like things except using more familiar (for me) K8s yaml.
In terms of the cloud, I think Digital Ocean costs about $12 / month for their control plane + a small instance.
I found k3s to be a happy medium. It feels very lean and works well even on a Pi, and scales ok to a few node cluster if needed. You can even host the database on a remote mysql server, if local sqlite is too much IO.
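That part is just a flag at install time; a sketch (the connection string is a placeholder, the default is the embedded sqlite):

    curl -sfL https://get.k3s.io | sh -s - server \
      --datastore-endpoint="mysql://user:password@tcp(db.example.com:3306)/k3s"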
NixOS works really well for me. I used to write these kinds of idempotent scripts too but they are usually irrelevant in NixOS where that's the default behavior.
And regarding this part of the article
> Particularly with GitOps and Flux, making changes was a breeze.
i'm writing comin [1] which is GitOps for NixOS machines: you Git push your changes and your machines fetch and deploy them automatically.
[1] https://github.com/nlewo/comin
This is exactly why I built https://canine.sh -- basically for indie hackers to have the full experience of Heroku with the power and portability of Kubernetes.
For single server setups, it uses k3s, which takes up ~200MB of memory on your host machine. It's not ideal, but the pain of trying to wrangle docker deployments, and the cheapness of Hetzner, made it worth it.
How does it compare to Coolify and Dokploy?
Neither of those use kubernetes unfortunately, the tool has kind of a bad rap, but every company I’ve worked at has eventually migrated on to kubernetes
Sure, I'm looking for more of a personal project use case where it doesn't much matter to me whether it uses Kubernetes or not, I'm more interested in concrete differences.
Ah yeah, then I’d say the biggest difference is the fact that it can use Helm to install basically anything in the world on your cluster
I run my private stuff on a hosted vultr k8s cluster with 1 node for $10-$20 a month. All my hobby stuff is running on that "personal cluster" and it is that perfect sweetspot for me that you're talking about
I don't use ingresses or loadbalancers because those cost extra, and either have the services exposed through tailscale (with tailscale operator) for stuff I only use myself, or through cloudflare argo tunnels for stuff I want internet accessible
(Once a project graduates and becomes more serious, I migrate the container off this cluster and into a proper container runner)
It’s been a couple of years since I’ve last used it, but if you want container orchestration with a relatively small footprint, maybe Hashicorp Nomad (perhaps in conjunction with Consul and Traefik) is still an option. These were all single binary tools. I did not personally run them on 2G mem VPSes, but it might still be worthwhile for you to take a look.
It looks like Nomad has a driver to run software via isolated fork/exec, as well, in addition to Docker containers.
The solution to this is to not solve all the problems a billion dollar tech does on a personnal project.
Let it not be idempotent. Let it crash sometimes.
We lived without kubs for years and the web was ok. Your users will survive.
Yeah, unless you're doing k8s for the purpose of learning job skills, it's way overkill. Just run a container with docker, or a web server outside a container if it's a website. Way easier and it will work just fine.
I’ve been using https://www.coolify.io/ self hosted. It’s a good middle ground between full blown k8s and systemd services. I have a home lab where I host most of my hobby projects though. So take that into account. You can also use their cloud offering to connect to VPSs
> I'm stuck with manual docker compose up/down commands over SSH
Out of curiosity, what is so bad about this for smaller projects?
Just go with a cloud provider that offers free control plane and shove a bunch of side projects into 1 node. I end up around $50 a month on GCP (was a bit cheaper at DO) once you include things like private docker registry etc.
The marginal cost of an additional project on the cluster is essentially $0
I've run K3s on a couple of Raspberry Pis as a homelab in the past. It's lightweight and ran nicely for a few years, but even so, one Pi was always dedicated as the controller, which seemed like a waste.
Recently I switched my entire setup (few Pi's, NAS and VM's) to NixOS. With Colmena[0] I can manage/update all hosts from one directory with a single command.
Kubernetes was a lot of fun, especially the declarative nature of it. But for small setups, where you are still managing the plumbing (OS, networking, firewall, hardening, etc) yourself, you still need some configuration management. Might as well put the rest of your stuff in there also.
[0] https://colmena.cli.rs/unstable/
$6/month will likely bring you peace of mind: the Netcup VPS 1000 ARM G11.
They also have regular promotions that offer e.g. double the disk space.
There you get, for $6/month, traffic inclusive: a choice between "6 vCore ARM64, 8 GB RAM" and "4 vCore x86, 8 GB ECC RAM" for the same price. And much more, of course: https://www.netcup.com/en/server/vps
I'm a cheapskate too, but at some point, the time you spend researching cheap hosting, signing up and getting deployed is not worth the hassle of paying a few more $ on bigger boxes.
Have you tried NixOS? I feel like it solves the functional aspect you're looking for.
I am curious why your no-revenue projects need the complexity, features, and benefits of something like Kubernetes. Why can't you just do it the archaic way: compile your app, copy the files to a folder, run it there, and never touch it for the next 5 years? If it is a dev environment with many changes, it's on a local computer, not on a VPS, I guess. Just curious by nature, I am.
The thing is, most of those enterprise-grade container orchestrations probably don't need k8s either.
The more I look into it, the more I think of k8s as a way to "move to micro services" without actually moving to micro services. Loosely coupled micro services shouldn't need that level of coordination if they're truly loosely coupled.
> Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM
To put this in perspective, that’s less compute than a phone released in 2013, 12 years ago: the Samsung Galaxy S4. To find this level of performance in a computer, we have to go to…
The main issue is that Kubernetes has created good API and primitives for managing cloud stuff, and managing a single server is still kinda crap despite decades of effort.
I had K3s on my server, but replaced it with docker + Traefik + Portainer - it’s not great, but less idle CPU use and fewer moving parts.
I believe that Kubernetes is something you want to use if you have 1+ full-time SRE on your team. I actually got tired of the complexity of kubernetes, AWS ECS, and docker as well, and just built a tool to deploy apps natively on the host. What's wrong with using Linux native primitives - systemd, crontab, or the native postgresql or redis packages? Those should work as intended; you don't need them in a container.
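For the curious, the native-primitives version of a "deployment" can be as small as one unit file; a sketch (paths and names are placeholders):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=myapp (deployed natively, no container)
    After=network-online.target
    Wants=network-online.target

    [Service]
    User=myapp
    WorkingDirectory=/opt/myapp
    ExecStart=/opt/myapp/bin/myapp --port 8080
    Restart=on-failure
    EnvironmentFile=/etc/myapp/env

    [Install]
    WantedBy=multi-user.target

    # "deploy" = copy the new binary into place, then:
    sudo systemctl daemon-reload && sudo systemctl restart myapp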
SSH up/down can be scripted.
Or maybe look into Kamal?
Or use Digital Ocean's app service. Git integration, cheap, just run a container. But get your postgres from a cheaper VC-funded shop :)
Why not just use something like Cloud Run? If you're only running a microVM deploying it there will probably be at or near free.
I really like `DOCKER_HOST=ssh://... docker compose up -d`, what do you miss about Deployments?
I developed a tiny wrapper around docker compose which works for my use case: https://github.com/daitangio/misterio
It can manage multiple machines with just SSH access and Docker installed.
Please try https://github.com/skateco/skate, this is pretty much the exact same reason why I built it!
Virtual Kubelet is one step forward towards Kubernetes as an API
https://github.com/virtual-kubelet/virtual-kubelet
Why not minikube or one of the other resource-constrained k8s variants?
https://minikube.sigs.k8s.io/
I use Caprover to run about 26 services for personal projects on a Hetzner box. I like its simplicity. Worth it just for the one-click https cert management.
Have you tried k3s? I think it would run on a tiny vps like that and is a full stack. Instead of etcd it has sqlite embedded.
> I'm constantly reinventing solutions to problems that Kubernetes already solves
Another way to look at this is the Kubernetes created solutions to problems that were already solved at a lower scale level. Crontabs, http proxies, etc… were already solved at the individual server level. If you’re used to running large coordinated clusters, then yes — it can seem like you’re reinventing the wheel.
For $10 you can buy a VPS with a lot more resources than that on both Contabo and OVH.
I've used caprover a bunch
What about Portainer? I deploy my compose files via git using it.