116 comments
  • jprafael1y

    I don't get the "it's hard to measure throughput" line. I'm using RDS at work. At some point we had 20TB of data, with daily 500GB (batch) writes into indexed tables. Same order of magnitude cost, sure. But the combination of the RDS instance monitor, Performance Insights, and the pgAdmin dashboard means you have: visual query plans with optional profiling (pgAdmin); live tracking of SQL invocations, with invocations per second, average rows per invocation, and sampling-based bottleneck analysis (disk reads, locks, CPU, throttling, network reads, sending data to the client, etc.); per-disk read/write throughput (MBps); IOPS in use; network throughput; and so on. What I mostly felt was lacking was the ability to understand why PG was using so much CPU/disk throughput (e.g. inserts into indexed tables), but the disk throughput the instance was under was always very visible.

    The article also doesn't mention anything about using provisioned-IOPS instances, nor which architectures have the highest PIOPS ceiling.

    • hobobaggins1y

      I think the article is saying that EBS comes with no throughput guarantees (or even estimates of what to expect)

      • dekhn1y

        IOPS times block size is bandwidth, in my experience (on modern storage).

        I've built block devices using the highest IOPS (fulfilling all the necessary requirements) as well as extremely large block devices (64TB) using EBS. When maxxed out and tuned to the gills, it's fast and big.
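
That back-of-the-envelope relationship is easy to sanity-check; the figures below are illustrative, not actual EBS quotas:

```python
# Bandwidth implied by an IOPS figure at a given I/O block size.
# Illustrative numbers only, not real EBS limits.
def bandwidth_mib_per_s(iops: int, block_size_kib: int) -> float:
    """MiB/s = IOPS * block size (KiB) / 1024."""
    return iops * block_size_kib / 1024

# PostgreSQL does 8 KiB page I/O, but EBS counts I/O in larger units
# (up to 256 KiB per operation on gp3), so the effective block size varies.
print(bandwidth_mib_per_s(12_000, 16))   # 187.5 MiB/s
print(bandwidth_mib_per_s(12_000, 256))  # 3000.0 MiB/s
```

In practice the provider also caps throughput separately, so the smaller of the two limits is the one you actually hit.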

        • pritambarhate1y

          >> When maxxed out and tuned to the gills, it's fast and big.

          Genuine question: How is it cost wise, compared to other solutions you have experience with?

  • tonymet1y

    RDS is very expensive. 3000 IOPS is included on EBS (EC2), but costs ~$300/mo on RDS. Additional IOPS are 20x.

    The transition from on-premise DB to cloud DB can be jarring. With on-premise hardware your cpu usage and query throughput are correlated. With RDS your queries will suddenly hang without indication from traditional resource metrics.

    Be meticulous about your iops & cpu needs, and assess whether snapshots & replication config is worth paying 3x for.

    • hobobaggins1y

      And it can be far worse than that, too. Compared to an on-premise DB on real iron, the cost differential could easily be closer to 30x.

      That old saying "but it's opex, not capex" will only take you so far -- especially once you see the pricing for amazing (if ten-year-old) hardware at your local dedicated-server leasing outfit, and then you've still got opex instead of capex.

    • Zanfa1y

      > RDS is very expensive. 3000 IOPS is included on EBS (EC2), but costs ~$300/mo on RDS. Additional IOPS are 20x.

      I’m not trying to argue that RDS isn’t expensive, but an on-demand multi-az 4 CPU / 16GB instance with 12,000 IOPS / 500MiBps bandwidth is ~$600 / month on RDS.

      https://calculator.aws/#/estimate?id=0d612854fb94107dcb14441...

      • tonymet1y

        It varies by reservation status and engine

  • wpeterson1y

    If they’re optimizing full table scans of 20M+ rows, they probably want an optimized column oriented DB or a data warehousing option like Snowflake.

    • asah1y

      agreed! PostgreSQL is the wrong tool for large data without indexes. It's also the wrong tool for ultra low latency access. And small, non-persistent data. And and and...

      That said, over time PostgreSQL has wildly expanded the range for which it's suitable and if you can and want to bet on the future, it's often a better bet than niche systems.

      It's also important to remember that PostgreSQL is decades ahead of other systems in data virtualization and providing backward-compatibility to applications after changes; pushing down computation near the data and avoiding moving billions of rows into middleware, including a world class query optimizer; concurrent data access; data safety and recovery; and data management and reorganization, including transactional DDL. Leaving this behind feels like returning to the stone age.

      • osigurdson1y

        >> Wrong tool for ultra low latency access

        I'm not sure what you mean by ultra low latency, but it's unfortunate to have to rethink what a tool is good for because of RDS / EBS.

    • spamizbad1y

      And even if you want to stay in the Postgres ecosystem there's options for you there.

      • gregw21y

        For analytics, use a columnar database.

        There are even other AWS Postgres-oriented options (check the pricing first):

        ZeroETL from Aurora Postgres to (postgres-compatible) Redshift (Serverless?)

        • Moto74511y

          Yup. Even gross abuses of Redshift run fine with appropriate roll ups and caching. At a past job we did it “wrong enough” that it took a while for a more state of the art solution to catch up. This is not to say the abuse of Redshift should have been done, but AWS has been abused a lot and the engineers there have found a lot of optimizations for interesting workloads.

          But to pick the wrong DB tool in the first place and bemoan it as “not scalable” is a bit like complaining that S3 made for a poor CDN without looking at how you’re supposed to use it with Cloudfront.

        • okr1y

          Is ZeroETL not still in its early stages? I heard it replicates everything; no filtering yet on parts of the binlog (tables/columns). But other than that, I like the idea.

          (I would like to know where their ZeroETL originated from; usually AWS picks up ideas somewhere and makes them work for its offerings to cash in. A universal replication tool.)

    • ies71y

      For a mere 20M-70M rows I'd stick with Postgres, an index, and a materialized view.

      After that is when I'd start migrating to DuckDB or ClickHouse (or Citus if I don't want to move off Postgres completely).
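
A minimal sketch of that Postgres-only approach (table and column names are hypothetical):

```sql
-- Hypothetical time-series table: readings(device_id, recorded_at, value).
CREATE INDEX IF NOT EXISTS idx_readings_device_ts
    ON readings (device_id, recorded_at);

-- Precompute the expensive aggregate once instead of scanning 20M+ rows
-- per query.
CREATE MATERIALIZED VIEW daily_readings AS
SELECT device_id,
       date_trunc('day', recorded_at) AS day,
       avg(value) AS avg_value,
       count(*)   AS samples
FROM readings
GROUP BY device_id, day;

-- REFRESH ... CONCURRENTLY requires a unique index on the view.
CREATE UNIQUE INDEX ON daily_readings (device_id, day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_readings;
```

The refresh would typically run from cron or pg_cron at whatever staleness the dashboards tolerate.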

    • imheretolearn1y

      Came here to say this. If you use a hammer to fasten a screw, it's probably not going to work

      • hobobaggins1y

        Perhaps cockroachdb or titaniumdb would be a better choice.

        • dalyons1y

          I can’t tell if you’re trolling or not, as those are even more terrible options for analytics workloads. You must be.

  • menschmanfred1y

    Large? With 20 million?

    I'm lost on the article. It sounds to me like they had someone doing this without any DB experience at all.

    I would not have written a blog post about an obvious choice of having some cheap nvme based 'warehouse' server.

    But I do wanna see their EXPLAIN output, tbh, and how they store the data in their columns.

    • dboreham1y

      The article has been submitted four times by the same user, so perhaps some astroturfing is being done?

  • web3-is-a-scam1y

    This article matches my experience with RDS. Performance is just absolutely atrocious when even a Docker container on my MacBook performs the same queries on the same dataset at 100x the speed.

    M1 with 16GB of memory vs a 16-vCPU Xeon with 128GB; my laptop absolutely trounces it.

    • PrimeMcFly1y

      That doesn't sound right at all.

      • ReflectedImage1y

        Sounds right to me. Postgres on AWS doesn't work because relational databases don't work on networked storage.

        • zbentley1y

          Not really. EBS isn't really network storage in the traditional sense. It's closer to iSCSI-attached NVMe over a dedicated low-congestion storage backbone network.

          • kossae1y

            Lol I get what you’re saying but it’s funny your description is essentially “storage that is networked”.

          • ReflectedImage1y

            This is very simple:

            EBS has a latency of around 2 ms.

            SSD has a latency of around 0.25 ms.

            A relational database will have around 10x the performance on an SSD compared to EBS because relational databases need to ensure data has been fully written to disk.
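
The arithmetic behind that claim, using the commenter's latency figures (which are illustrative): a synchronous commit can't finish faster than the WAL fsync round-trip, so a single session's commit rate is bounded by 1/latency.

```python
# Upper bound on synchronous commits/second for one session, assuming
# each commit waits for exactly one WAL fsync round-trip to storage.
def max_commits_per_s(fsync_latency_ms: float) -> float:
    return 1000.0 / fsync_latency_ms

print(max_commits_per_s(2.0))   # EBS-ish latency: 500 commits/s
print(max_commits_per_s(0.25))  # local-SSD-ish latency: 4000 commits/s
```

Concurrency and group commit recover some of the gap, which is why the real-world penalty is workload-dependent rather than a flat 10x.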

            • zbentley1y

              Sometimes. The more expensive varieties of EBS provide comparable latency numbers to local SSDs (by using fancy on-hypervisor volatile storage or something? No idea), for an added cost.

              A separate set of behavior exists for RDS Aurora, which isn't using EBS underneath, so it might be worth looking into the Aurora latency characteristics (vs. cost, of course; Aurora ain't cheap) if you're concerned about the performance impact of EBS-vs-local-disk.

              • ReflectedImage1y

                Ultimately, hosting a relational database on AWS will cost 10x as much regardless of which non-working flavour of it you choose.

                Million dollar database bills do not happen outside of the cloud world.

                • zbentley1y

                  > Million dollar database bills do not happen outside of the cloud world.

                  Laughs in Oracle per-CPU licensing terms

            • magicalhippo1y

              Reminds me of when a customer complained that a DB-heavy processing step took about 10x as long as expected, i.e. 10 minutes instead of 1.

              We used an on-prem DB server and fat clients. After a long debugging session I hadn't found anything. Grasping at straws, I asked what kind of network they ran.

              Turned out the clients all ran on laptops connected through Wi-Fi... so yeah, 10x latency turned into a 10x longer job.

        • PrimeMcFly1y

          I wasn't saying anything about Postgres not working on AWS but rather the alleged speed difference.

          • ReflectedImage1y

            In theory, Postgres will be 10x faster on bare metal than AWS. From practical experience that is also what I see.

            It's the difference between using SSD and EBS for your disk storage.

  • Marazan1y

    > When you’re storing a large time-series table (say 20 million rows)

    Stares at 4.4 billion row Aurora Postgres table and thinks.

    • osigurdson1y

      Agree. 20 million rows is nothing for even the most basic Postgres setup. However, at some point ClickHouse (or perhaps a more dedicated time-series database) starts to make sense, as the 23-byte-per-row overhead in Postgres starts to weigh. Usually covering indexes are needed as well, so it eventually becomes a little too much.

      We were doing OK with about ~10B rows in Postgres before deciding to switch, however. Even that might be fine for some workloads, but not ours.
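
For a sense of what that per-row overhead means at this scale, here's a rough heap-size estimate (ignoring page headers, fill factor, TOAST, and indexes; the 24-byte figure is the 23-byte tuple header padded to 8-byte alignment):

```python
# Rough PostgreSQL heap-size estimate for a narrow time-series row.
TUPLE_HEADER_B = 24  # 23-byte heap tuple header, MAXALIGN-padded to 24
LINE_POINTER_B = 4   # per-tuple item pointer in the page

def heap_bytes(rows: int, payload_bytes: int) -> int:
    return rows * (TUPLE_HEADER_B + LINE_POINTER_B + payload_bytes)

# 10 billion rows of (timestamptz, float8) = 16 payload bytes each:
size = heap_bytes(10_000_000_000, 16)
print(size / 2**30)  # ~409.8 GiB, with roughly 64% of it per-row overhead
```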

      • mfreed1y

        Check out how TimescaleDB adds columnar compression to PostgreSQL, typically saving 95% of storage overhead:

        https://www.timescale.com/blog/building-columnar-compression...

        • tbragin1y

          However, if you really want to optimize data currently residing in Postgres for analytical workloads, as the original comment suggests, consider moving to a dedicated OLAP DB like ClickHouse.

          See results from Gitlab benchmarking ClickHouse vs TimescaleDB: https://gitlab.com/gitlab-org/incubation-engineering/apm/apm...

          Key findings:

          * ClickHouse has a much smaller data volume footprint in all cases by almost a factor of 10.

          * There are very few ClickHouse queries that have >1s latency at q95. TimescaleDB has multiple >1s latencies, including a few in the range of 15-25s.

          Disclaimer: I work at ClickHouse

          • osigurdson1y

            What we ended up doing is maintain meta-data in Postgres but time series data is stored in ClickHouse. Thanks for making / working on ClickHouse. I appreciate it very much.

          • mfreed1y

            That PoC benchmark didn't turn on Timescale's columnar compression, which every real deployment uses. So it's misleading at best.

            (Timescaler)

        • osigurdson1y

          TimeScale was certainly the first choice as we were already using Postgres. However, we could not get it to perform well as times are simulated / non monotonic. We also ultimately need to be able to manage low trillions of points in the long run. InfluxDB was also evaluated but faced a number of issues as well (though I am certain both it and TimeScale would work fine for some use cases).

          I think perhaps because ClickHouse is a little more general purpose, it was easier to map our use case to it. Also, one thing I appreciate about ClickHouse is it doesn't feel like a black box - once you understand the data model it is very easy to reason about what will work and what will not.

          • out_of_protocol1y

              Did you look at something Parquet-based? Different approach, but it could work on very large time-series-like datasets, e.g. Snowflake or Apache Iceberg.

    • magicalhippo1y

      Reminds me of when I read the following in the documentation for our SQL database: "For very wide tables (more than 10 columns) [...]"

      I burst out laughing, thinking about our main table which was over 450 columns at that time.

    • mfreed1y

      https://www.timescale.com/blog/how-we-scaled-postgresql-to-3...

      Staring at a >trillion rows in a TimescaleDB hypertable on PostgreSQL.

  • nostrebored1y

    As an aside to anyone who has a question like this for AWS — support is the wrong way to ask.

    You have an account team whether you know it or not. The account team has an SA who will be able to help out or request help from a specialist.

    • arjvik1y

      How does one get to this account team, especially if you're a very small company and using essentially a personal AWS account? (Genuinely asking because there have been times where talking to an account rep would have been incredibly helpful, but all I thought I could do was reach support.)

      • mannyv1y

        People think it's hard to get to AWS people. It isn't. Ask your rep and they'll try and get you to an architect.

        You might have to ask support who your rep is.

      • nostrebored1y

        You can ask the support rep to find your account manager (hit or miss) or get in touch with an SDR (https://aws.amazon.com/contact-us/sales-support/)

      • worik1y

        > especially if you're a very small company

        Probably better not on AWS

  • avereveard1y

    > [we were] storing a large time-series table

    saved you a click

    • starttoaster1y

      People are willing to put in way too much work just to avoid using Prometheus or InfluxDB, aren't they?

      • WJW1y

        Posts with the basic message "Use nothing but Postgres for everything from pub/sub to background job queues; it's the best thing since sliced bread and will solve all your problems" have been an HN staple for at least a decade. It's no surprise that sooner or later people would start believing it.

        • Ozzie_osman1y

          In all fairness, Postgres still works for their workload even if RDS didn't. You can serve a lot of workloads with Postgres given the right hardware, the right replication, and the right extensions (e.g. Citus has a columnar extension that probably would have been a pretty good fit for this).

        • tracker11y

          To be fair, we're in an age with servers that can handle hundreds of simultaneous threads on a single system with terabytes of RAM and storage faster than RAM a few generations back.

          You can scale up a lot with a general purpose RDBMS like postgres on a single server, and a read replica today.

          It's not perfect, or even ideal for many workloads or even all environments... But it probably can be good enough for most application needs.

          It's knowing when it isn't, why it isn't, and what to use instead that counts in those instances. But I hold no blame for starting with what is probably one of the better known and understood solutions to start with.

        • eximius1y

          Eh, it's a reaction against people making or reaching for the wrong tools or the right tools but at the wrong scale.

          Postgres is very very good. The vast majority of use cases work with it with very minor effort. People would, in general, be better off investing in thoroughly understanding a general tool like postgres (or similar dbs, just pick one to learn, but there are reasons why you would pick postgres over, say, oracle).

          There are still reasons to use more specialized DBs. But the push for postgres is because very often the people reaching for those specialized DBs do so in error.

          20M rows is practically an in-memory dataset, for example.

      • wenc1y

        Unless the time-series was only for simple monitoring or querying, I would stay away from key-value databases like Prometheus or InfluxDB which have limited joins and limited analytics capabilities.

        A fully relational time-series database like Timescale (a plugin db built on Postgres) gives you full SQL analytics, including aggregations and full relational joins with other data, which is where a lot of the value-add usually is. This also opens up the field to building multivariate machine learning models.
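
As a sketch of the kind of query that's natural here but awkward in a metrics-oriented TSDB: a relational join between raw readings and business metadata, aggregated per time bucket (table names are hypothetical; `time_bucket` is TimescaleDB's bucketing function):

```sql
-- Hourly average per region over the last week, joining the time-series
-- table against ordinary relational metadata.
SELECT time_bucket('1 hour', r.recorded_at) AS hour,
       d.region,
       avg(r.value) AS avg_value
FROM readings r
JOIN devices  d ON d.id = r.device_id   -- full relational join
WHERE r.recorded_at > now() - interval '7 days'
GROUP BY hour, d.region
ORDER BY hour, d.region;
```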

      • golergka1y

        It's always a good idea to use a jack of all trades like PostgreSQL that you know well for a first version and migrate parts of your service to a specialised tool that you have to research later, after you're sure that you have a good product.

      • hobobaggins1y

        It's not that pgsql wasn't appropriate, it's that the neutered AWS RDS managed instance was inappropriate. Whether more appropriate non-pgsql solutions existed seems to have been outside the scope of the article.

      • Spivak1y

        Well yeah, of course. I can't understand why people reach for a bunch of bespoke databases: you need a whole other ecosystem of tooling and libraries to use and monitor them, you can't have transactions across them, they're another single point of failure, and your ORMs don't mix if you use one of them.

        The amount of work needed to make it not worth it is quite high, assuming it can be done at all, and they seem to have managed it pretty easily by DIYing their own provisioned-IOPS RDS (not sure why they didn't try that first).

      • zilti1y

        Or just TimescaleDB

      • avereveard1y

        Their solution is ZFS on the write master; can't wait for the next blog post on how they found their data corrupted.

        • bfung1y

          Or how after they do a writer failover, they start seeing duplicate data.

        • philkrylov1y

          PostgreSQL does not use SEEK_DATA/SEEK_HOLE so they're ok

        • ikiris1y

          Ok, I'm out of the loop, whats the problem with zfs here?

          • avereveard1y

            https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... the mean time between ZFS corruption articles is two years

            • philkrylov1y

              Looking at your search results, there's just one recent ZFS corruption case with SEEK_DATA/SEEK_HOLE (in several HN reflections), a 2-year old Ubuntu-only buggy patch story, and some 2008 [Open]Solaris corruption.

            • ikiris1y

              Most of those links are about bad memory. If you blame bad memory for filesystem issues I don't really know what to tell you. Ignoring the poor state of the native encryption code, ZFS has had 1 corruption bug in like 10 years. That's one of the best records for modern filesystems. I still wouldn't trust my data to btrfs by comparison.

    • anonzzzies1y

      But 20M rows as the example? Come on: we were running that and more for ad networks in the early '00s on a rack with Postgres, and it ran fine for analytics and everything else we needed. How can it be an issue now?

  • Glyptodon1y

    I can't comment overall on the article, and my experience is a few years out of date, but my experience with Amazon managed DB services was that the DB IOPS bandwidth and limitations were so bad that you'd be better off colocating your work laptop with a DB for even moderate-size projects.

    (It's been a minute, but I think we ended up using an NVMe SSD volume on k8s with a PG container to run a stack of low-volume services for a fraction of what RDS would cost. The next job was married to RDS and we constantly had issues with a lack of IOPS for whatever we were doing, though my memory is that it wasn't anything wild. Just bursty.)

  • Scubabear681y

    Was using Postgres Aurora RDS at a startup client last year, and the Aurora costs were through the roof. We had some moderately inefficient union queries that burned through credits like no tomorrow. Just a handful of users lightly using the system cost about $3,000 a month.

    I loved everything else about it.

    Edit: it was real estate market data, so only about 15 million rows.

    • elteto1y

      You could probably keep 15 million rows in a csv file and still get great performance. Without even mentioning sqlite. Any reason to use such an overkill solution for that problem? Honest question, not passing judgement.

      • Scubabear681y

        I inherited it from the prior CTO and team.

        This was one of the better solutions they had. They also had Talend for ETL and Snowflake for “analysis”. You don’t want to know what Snowflake cost.

        For the same roughly 15 millions rows…

        • bomewish1y

          This is just so incredibly incompetent. What gives? I would expect this in government but in a company incentives are meant to be aligned.

          • Scubabear681y

            The prior CTO came from much larger, highly regulated startups. His only experience was very large scale systems with complex requirements. He took the approach that worked there to this much smaller, mostly unregulated startup in a very different industry.

            It happened because he was the only semi technical person who was an FTE. Everyone else was from a consulting firm that the private equity owners “suggested” they use.

            • bomewish1y

              That explains a lot. It’s still a case of incompetence but this gives it a clearer explanation.

          • gopher_space1y

            Government is not only compatible with competence; it is a profound source of competence. When we recognize our place in an immensity of people and in the passage of ages, when we grasp the intricacy, beauty, and subtlety of public life, then that soaring feeling, that sense of elation and humility combined, is surely civic. The notion that government and competence are somehow mutually exclusive does a disservice to both.

  • bakugo1y

    Is it just standard practice to put an ugly, completely unrelated AI image at the top of every medium article now?

    • viraptor1y

      It was standard practice to put an image, often unrelated and sometimes ugly, on top of blog posts for quite some time. Long before the AI bit.

      • paulmd1y

        keyword: “hero image”

    • jjgreen1y

      I then assume the article is GPT gibberish too, so I don't waste my time reading it.

  • Marazan1y

    That said the EBS bandwidth credits complaint is very very valid.

    Anything beneath a db.r6g.4xlarge gets "up to" bandwidth, which means credits. The RDS docs are not explicit about this; we didn't get bitten by it, but we could have if we had been less cautious. And I notice that for the r7g instances you need to hit an 8xlarge before you get guaranteed EBS bandwidth.

    I'd never move to Aurora for performance, though; you do it for the magical (but expensive) replication.

    • dalyons1y

      Hmm? It’s been for the most part faster than RDS when I’ve used both the Postgres and MySQL versions. Fewer random slowdowns, that’s for sure; the log-based storage system is a lot more predictable than the traditional EBS one.

    • coredog641y

      ISTR 8xlarge is the threshold for banishing “up to” for most instance limits.

  • frugalmail1y

    Article should accurately be titled "Consequences of bad technical leadership"

    • ReflectedImage1y

      I would say "AWS unsuitable for real world business applications".

      Every business application uses a database, and AWS charges $3000 per month for the same database that could run on your MacBook; it's beyond ridiculous.

  • itsthecourier1y

    Aurora gave us more performance, but started charging us for IOPS where plain RDS wasn't. We are moving out.

    Also, self-hosting the DB and backups is immensely cheaper/faster.

    • starttoaster1y

      > Also self host db and backup is inmensively more cheaper/faster

      Always has been... the whole point of the AWS managed services is to get them to do a lot of the lifecycle management for updates/backups/restores. It's always been understood to cost more money though.

    • HatchedLake7211y

      > but starting charging us for IOPS

      If high I/O is an issue, AWS announced I/O Optimized just last year https://aws.amazon.com/about-aws/whats-new/2023/05/amazon-au...

      > Also self host db and backup is inmensively more cheaper/faster

      So is running a server in a colocation centre or your own closet. But there's a reason people opt for that less and less these days

      • drdaeman1y

        > people opt for that less and less these days

        Do they? Could be my bubble, but I’m hearing stories how moving away from clouds to bare metal dramatically lowered costs, even accounting for having to hire some sysadmins who know how to deal with this stuff.

        And those who aren’t that brave are still fed up with cloud nonsense and are rebuilding cloud stuff themselves (like setting up replacements for insanely overpriced AWS NAT Gateway.)

        Clouds were definitely the way to go just a few years ago - and still are, but I believe folks are more and more wary of their drawbacks. And they understand they’re not Google so they don’t really need all those insanely complex but highly scalable solutions for their stuff.

        Could be just my bubble, though. I’m most definitely biased here.

        • worik1y

          > moving away from clouds to bare metal

          Moving away from proprietary AWS services

          The "bare metal" can be a VPS.

          The "proprietary AWS services" are pushed hard to get lock in.

          There are trade-offs.

          • drdaeman1y

            Yes, this also happens, the second scenario I've mentioned.

            But I've heard about moving away from virtual servers specifically, or only using those for the highly elastic parts when the load fluctuates a lot. Bare metal is simply so much cheaper in the long run, if you know how to work with it. Yes, it's rigid in terms of provisioning, but it's the baseline (all clouds and VPSes have bare metal underneath) so it gives the best bang for the buck.

  • anonzzzies1y

    This is more RDS than Postgres; if you run Postgres yourself, you can install extensions (a columnar store, for this example) that fix the issue.

  • eightnoteight1y

    I don't think it would have reduced the bill much, but generally RDS is 2x as costly as EC2. I'm guessing most of the improvement the article speaks about came from ephemeral disk, i.e. NVMe storage.

    It's a recent feature, but I think RDS Optimized Reads should achieve a similar improvement in performance and get the cost from $11K down to $4.2K.

  • Attummm1y

    Perhaps it's a clickbait title, but while reading the article it struck me that the obvious point is that the default-choice tool is not the best in all situations; engineering is about tradeoffs. It's akin to explaining why a machete didn't work for us when cutting bread.

  • declan_roberts1y

    It’s a good article, but rolling your own is usually not advised. Who is going to manage and monitor it? Who will be in charge of upgrading it now? These tasks often come with a sysadmin cost that gets offloaded onto some software engineer.

  • mannyv1y

    Funny that they split the read and write instances out so late. That should be done by default.

    • mannyv1y

      And 20M rows isn't a lot. Maybe they forgot to put in an index?
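
The quick way to check is an `EXPLAIN`; a `Seq Scan` node over the big table means the planner found no usable index (query and names are hypothetical):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM events
WHERE created_at > now() - interval '1 day';
-- If the plan shows "Seq Scan on events" for this predicate, an index
-- like the following usually fixes it (CONCURRENTLY avoids blocking
-- writes, but cannot run inside a transaction block):
CREATE INDEX CONCURRENTLY idx_events_created_at ON events (created_at);
```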

  • JaggerFoo1y

    I didn't get the article at first. Was the solution self-managed PostgreSQL on EC2 and EBS? It's not stated explicitly, but implied by the WAL-G reference.

    Why not use a K-V store if you're looking for performance?

    • ReflectedImage1y

      PostgreSQL gives you the performance if you are not running it on AWS.

  • eikenberry1y

    No mention of HA and last I checked Postgres still had no good solution for that... so I'm guessing they don't need HA and can tolerate downtime while they restore the DB cluster?

    • dijit1y

      The issue with “no good solution” is that sometimes things are inherently hard and certain technologies don’t permit lying.

      Good example is async in Rust; people don’t like it but mostly because async is hard.

      Postgres has excellent HA options if you know what you’re trying to do with your data: CitusDB for data-warehousing storage, TimescaleDB for time-series data, and the traditional replication system for HA (with a single write primary), which is the same method Elasticsearch and etcd use under the hood. Though in Elasticsearch's case they do it by aggressively sharding the data set and splitting write masters across multiple nodes, which has huge latency tradeoffs.

      Other multi-master HA systems have trade offs (or, lie about not having tradeoffs *cough* mongo *cough*).

    • viraptor1y

      They mentioned Aurora. That's AWS's solution to database clusters and it can do fancy HA.

    • candiddevmike1y

      Patroni or pg_auto_failover work well enough.

  • b2bsaas001y

    Agree. I run everything on virtual machines, using Cloud66 for managed backups and a UI. I'm also considering switching to Hetzner for bare metal.

  • doctor_eval1y

    Would love to see a comparison with some of the many other vendors out there. Vultr, Supabase, Tembo, …

  • hipadev231y

    ClickHouse, InfluxDB, Timescale, Rockset, etc. are all viable solutions here. A single EC2 box for ~$100/mo would give them ample room to grow, with no need to split read/write. This is anything but a hard problem: literally the defaults on the above DBs and they'd be fine.

    20M rows is so tiny it's laughable.

    • wutwutwat1y

      Sure, but I think most people use RDS specifically to not have to deal with the ops of maintaining a highly available, fault-tolerant, snapshotted, clonable service backed by effectively infinite block storage. A single EC2 instance won’t fly for any company that wants to keep existing when an availability zone takes a nap or a developer fat-fingers the wrong command in a prod psql session.

      • hipadev231y

        Managed solutions for all of the above exist for materially lower TCO than cited in the article. It’s more about using the right tool.

        • wutwutwat1y

          The comment I’m replying to said ec2 instance and that’s what I’m responding to. I’m aware managed services exist for these things, but that’s not what the parent comment said.

  • eezing1y

    AlloyDB on GCP could be a good fit here.

  • mt42or1y

    The main issue here is Postgres, which costs a lot more I/O than MySQL.