I'm not sure if you've seen this tool:
https://github.com/apache/cloudberry-gpbackup-s3-plugin. It is a storage
plugin for gpbackup that writes backup files to S3.
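
For reference, gpbackup hands the backup files to the plugin through its
--plugin-config option, which points at a small YAML file. A minimal sketch,
assuming the Cloudberry fork keeps the same config layout as the upstream
gpbackup S3 plugin (the executable path, region, bucket, and credentials
below are placeholders):

  executablepath: /usr/local/cloudberry-db/bin/gpbackup_s3_plugin
  options:
    region: us-east-1
    aws_access_key_id: <access-key-id>
    aws_secret_access_key: <secret-access-key>
    bucket: <backup-bucket>
    folder: cloudberry/backups

  gpbackup --dbname postgres --leaf-partition-data --verbose \
      --plugin-config /home/gpadmin/s3_plugin_config.yaml

gprestore accepts the same --plugin-config option, so a restore can pull
everything back from the bucket without copying files between hosts first.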

Lirong


tturgum5ekov (via GitHub) <[email protected]> wrote on Tue, Oct 7, 2025 at 19:40:

>
> GitHub user tturgum5ekov edited a discussion: How to implement centralized
> backups in Apache Cloudberry when gpbackup stores each segment’s data
> locally?
>
> Hello everyone 👋
>
> I’m currently configuring backups for an Apache Cloudberry
> 2.0.0-incubating cluster using the gpbackup utility.
> When I run the backup locally, everything works correctly — gpbackup
> creates files on the coordinator and on each segment host without errors.
> However, I want all backups to be stored centrally on a separate backup
> server (connected via SSH).
>
> Question:
> Is there a simple or built-in way to:
> - store all segment backups centrally on a single remote server,
> - or use another tool that can write backups directly to remote storage
> (via SSH, NFS, or S3),
> - so that gprestore can restore everything from one place?
>
>
> Environment details:
> - Apache Cloudberry: 2.0.0-incubating (based on PostgreSQL 14)
> - Cluster setup: 1 coordinator + 2 segment hosts
> - Backup tool: gpbackup v1.2.7-beta1+dev.7
>
>
> Command used:
> gpbackup --dbname postgres --backup-dir /backups/full --jobs 4
> --leaf-partition-data --verbose
>
>
> What I’ve tried:
> - Using rsync to collect each segment’s /backups/full/seg*/... directories
> onto a single server (a rough sketch follows this list).
> - Writing a centralized backup script that connects via SSH to each node.
> - Attempting to mount a shared NFS directory, but Cloudberry doesn’t
> automatically coordinate the mounts across all segment hosts.
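>
> A rough sketch of that rsync approach, run from the backup server (the
> host names cdw, sdw1, sdw2, the gpadmin user, and the destination path are
> placeholders for my setup):
>
>   # Pull each host's gpbackup output directory over SSH onto this server.
>   for host in cdw sdw1 sdw2; do
>     rsync -a "gpadmin@${host}:/backups/full/" "/backups/collected/${host}/"
>   done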
>
>
> GitHub link: https://github.com/apache/cloudberry/discussions/1380
>