This is awesome, thanks for creating this. I've had to write some absolutely wonky scripts to dump a PostgreSQL database into Parquet or read a Parquet file into PostgreSQL, usually some terrible combination of psycopg and pyarrow that worked but was ad hoc and slightly different every time.
A lot of other commenters are talking about `pg_duckdb` which maybe also could've solved my problem, but this looks quite simple and clean.
I hope for a near-term future where there's a standard-ish, analytics-friendly data archival format. I think Parquet is the closest thing we have now.
(Marco from Crunchy Data)
With PostgreSQL extensions, we find it's most effective to have single-purpose modular extensions.
For instance, I created pg_cron a few years ago, and it's on basically every PostgreSQL service because it does one thing and does it well.
We wanted to create a lightweight implementation of Parquet support that does not pull a multi-threaded library into every postgres process.
When you get to more complex features, a lot of questions around trade-offs, user experience, and deployment model start appearing. For instance, when querying an Iceberg table, caching becomes quite important, but that raises lots of other questions around cache management. And how do you deal with a memory-hungry, multi-threaded query engine running in every process without things constantly falling over?
It's easier to answer those questions in the context of a managed service where you control the environment, so we have a product that can query Iceberg/Parquet/CSV/etc. in S3, does automatic caching, figures out the region of your bucket, can create tables directly from files, and uses DuckDB to accelerate queries in a reliable manner. This is partially powered by a set of custom extensions, partially by other things running on the managed service. https://docs.crunchybridge.com/analytics
However, some components can be neatly extracted and shared broadly like COPY TO/FROM Parquet. We find it very useful for archiving old partitions, importing public and private data sets, preparing data for analytics, and moving data between PostgreSQL servers.
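For context, the archiving workflow described above looks roughly like this with pg_parquet (table and bucket names here are made up; the extension picks up the Parquet format from the `.parquet` extension, or you can name it explicitly):

```sql
-- Archive an old partition to object storage as Parquet
-- (hypothetical table and bucket names).
COPY events_2023_q1
  TO 's3://my-archive-bucket/events/2023_q1.parquet'
  WITH (format 'parquet');

-- Later, load it back, or load it into a different PostgreSQL server.
COPY events_2023_q1
  FROM 's3://my-archive-bucket/events/2023_q1.parquet'
  WITH (format 'parquet');
```

The nice part is that this is just `COPY`, so it composes with existing tooling that already knows how to issue COPY statements.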
Having the famously crashy DuckDB share a process and memory with PostgreSQL doesn't seem like the most robust setup.
Famously crashy? Any incidents you can share? I ask because I've only had good experiences.
I haven't used it, but I have gone through their issue tracker and seen a lot of comments to this effect, both here and in other internet places. (Unverifiable: also some hearsay from colleagues.)
Not saying they're doing it wrong; it just seems they have made some different stability vs. performance trade-offs than PG.
I had the exact same reaction
Had a similar thought. Azure Postgres has something similar to pg_parquet (pg_azure_storage), but we're looking into replacing it with pg_duckdb assuming the extension continues to mature.
It would be great if the Postgres community could get behind one good open-source extension for the various columnstore data use cases (querying data stored in an open columnstore format such as Delta or Iceberg being one of them). pg_duckdb seems to have the best chance of becoming the go-to extension for this.
Fun fact, I created pg_azure_storage :)
I was just wondering if pg_parquet could be combined with pg_azure_storage to write Parquet files to Azure Storage.
I had problems with pg_azure_storage in the past, because the roles pg_read_server_files and pg_write_server_files are unassignable on Azure PostgreSQL databases which makes the use of `COPY {FROM,TO}` impossible.
Azure is not supported as a backend in pg_parquet right now, but shouldn't be hard to add (contributions welcome!)
https://github.com/CrunchyData/pg_parquet
It would not be safe to let any user access object storage. Therefore, pg_parquet has two roles called parquet_object_store_read and parquet_object_store_write that give permission to COPY FROM/TO object storage (but not local file system).
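A sketch of how granting those permissions might look, using the role names above (the usernames are hypothetical):

```sql
-- Allow a reporting user to COPY FROM Parquet files in object storage.
GRANT parquet_object_store_read TO reporting_user;

-- Allow an ETL user to COPY TO object storage as well.
GRANT parquet_object_store_write TO etl_user;
```

Keeping these separate from `pg_read_server_files`/`pg_write_server_files` means object-storage access can be granted without also opening up the server's local file system.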
In pg_azure_storage there is a comparable azure_storage_admin role that needs to be granted to users that need Azure Blob Storage permission.
Is pg_azure_storage available on GitHub?