I don't know how many hoops you want to jump through, but we use AWS and
Athena to create Parquet files:
- Export table as JSON
- Put on AWS S3
- Create JSON table in Athena
- Use the JSON table to create a parquet table
The Parquet files will be in S3 as well once the Parquet table is
created; a rough sketch of the whole pipeline follows.
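In code it looks roughly like this (untested sketch -- the bucket,
database, table, and column names are placeholders, the SerDe line is
from memory, and the polling helper is just so the CTAS doesn't run
before the JSON table exists):

import time
import boto3
import psycopg2

BUCKET = "my-bucket"  # placeholder

# 1. Export the table as one JSON document per line.
#    (Caveat: COPY's text format escapes backslashes, so this is only
#    safe for data without embedded backslashes or newlines.)
conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
with conn.cursor() as cur, open("/tmp/my_table.json", "w") as f:
    cur.copy_expert(
        "COPY (SELECT row_to_json(t) FROM my_table t) TO STDOUT", f)

# 2. Put it on S3.
boto3.client("s3").upload_file(
    "/tmp/my_table.json", BUCKET, "json/my_table.json")

athena = boto3.client("athena")

def run(sql):
    # Start an Athena query and wait for it to finish.
    qid = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={
            "OutputLocation": f"s3://{BUCKET}/athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

# 3. Create a JSON table in Athena over the uploaded file.
run("""CREATE EXTERNAL TABLE mydb.my_table_json (id bigint, name string)
       ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
       LOCATION 's3://my-bucket/json/'""")

# 4. Use the JSON table to create a Parquet table; Athena writes the
#    Parquet files under external_location.
run("""CREATE TABLE mydb.my_table_parquet
       WITH (format = 'PARQUET',
             external_location = 's3://my-bucket/parquet/')
       AS SELECT * FROM mydb.my_table_json""")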
> On Aug 26, 2020, at 1:11 PM, Chris Travers wrote:
>
> For simple exporting, the simplest thing is a single-node instance of Spark.
Thanks.
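For the archives, the single-node version really is short (untested
sketch; the JDBC URL, credentials, and driver jar path are
placeholders):

from pyspark.sql import SparkSession

# Single-node ("local") Spark session; needs the PostgreSQL JDBC
# driver jar on the classpath.
spark = (SparkSession.builder
         .master("local[*]")
         .config("spark.jars", "/path/to/postgresql.jar")
         .getOrCreate())

# Read the table over JDBC, then write it out as Parquet.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://localhost:5432/mydb")
      .option("dbtable", "my_table")
      .option("user", "me")
      .option("password", "secret")
      .load())

df.write.parquet("/tmp/my_table.parquet")  # a directory of .parquet files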
> You can read parquet files in Postgres using
> https://github.com/adjust/parquet_fdw if you so desire but it does not
> support writing as parquet files.
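Reading them back in looks roughly like this (sketch; the DSN, column
list, and file path are placeholders, and parquet_fdw has to be built
and installed on the server first):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS parquet_fdw")
    cur.execute("""CREATE SERVER IF NOT EXISTS parquet_srv
                   FOREIGN DATA WRAPPER parquet_fdw""")
    # Column definitions must match the Parquet schema.
    cur.execute("""CREATE FOREIGN TABLE IF NOT EXISTS my_table_pq (
                       id   bigint,
                       name text
                   ) SERVER parquet_srv
                   OPTIONS (filename '/path/to/my_table.parquet')""")
    cur.execute("SELECT count(*) FROM my_table_pq")
    print(cur.fetchone()[0])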
On Wed, Aug 26, 2020 at 9:00 PM Scott Ribe wrote:
> I have no Hadoop, no HDFS. Just looking for the easiest way to export some
> PG tables into Parquet format for testing--need to determine what kind of
> space reduction we can get before deciding whether to look into it more.
>
> Any suggestions on particular tools? (PG 12, Linux)
I have no Hadoop, no HDFS. Just looking for the easiest way to export some PG
tables into Parquet format for testing--need to determine what kind of space
reduction we can get before deciding whether to look into it more.
Any suggestions on particular tools? (PG 12, Linux)
--
Scott Ribe
scott_