Re: Support for dates before 4713 BC

2024-10-16 Thread Asad Ali
Hi Nina Richards,

Instead of using PostgreSQL's DATE or TIMESTAMP types, you can store years
as NUMERIC or BIGINT values. That lets you represent dates before 4713 BC
directly, and you can write user-defined functions in PostgreSQL for the
operations you need on those custom dates, such as addition, subtraction,
and comparison.

Best Regards,

Asad Ali


On Thu, Oct 17, 2024 at 5:36 AM Tom Lane  wrote:

> David Rowley  writes:
> > It's by no means a trivial thing to do, but it is possible to
> > implement new types in PostgreSQL [1]. If you invented your own type,
> > you could significantly widen the upper and lower bounds when compared
> > with the standard date type.
>
> However, you'd then have to reimplement some large fraction of the
> existing datetime support to have something useful.
>
> We're already inventing freely to use the Gregorian calendar for
> millennia before Pope Gregory lived, so I see no conceptual argument
> not to extend that back even further.
>
> IIRC the stumbling block for not going back past Julian day 0 was
> uncertainty about whether the date2j and j2date algorithms behave
> correctly for negative Julian dates --- which at the time was
> compounded by the fact that C90 was vague about the rounding direction
> for integer division with negative inputs.  Now that we assume C99
> with its well-defined rule for that, at least some of the uncertainty
> is gone.  Somebody would still have to study that code and either
> prove that it's OK or correct it.  And then there would be a
> nontrivial amount of effort to work outwards and fix anything else
> that is assuming that limitation.  So it would take somebody with
> considerable motivation to make it happen, but if such a somebody were
> to appear with a patch, we'd likely take it.  (To be clear, I doubt
> any of the principal current hackers are interested in doing this.)
>
> Now, this would still only get you to a lower-bound date somewhere
> around 30 BC.  If you need to deal with geological or astronomical
> time spans, then yeah you need a new type --- but presumably you would
> not feel a need to tie it to Gregorian calendar dates, so the need to
> reimplement a ton of related logic would not be there.
>
> regards, tom lane
>
>
>


Re: Backup

2024-10-16 Thread Asad Ali
Hi Andy,

I hope you're doing well. Based on your inquiry about PostgreSQL backups
for your 100GB historical database with images, here are some suggestions
that should help you achieve compressed, efficient backups without running
into storage issues.

*1. Use a Compressed Format (Custom or Directory)*
A more efficient option is to use one of pg_dump's compressed formats. The
custom format (-Fc) compresses by default; if you also want to make use of
your machine's multiple CPUs, use the directory format (-Fd), which is the
only format that supports parallel dump jobs:

pg_dump -Fd -Z 9 -j 4 --blobs -f /path/to/backup/dir
your_database_name

   - -Fd: Directory format (compressed, supports parallel dump and flexible
   restore options).
   - -Z 9: Maximum compression level (0-9 scale).
   - -j 4: Number of parallel jobs (adjust based on CPU cores; requires -Fd).
   - --blobs: Includes large objects (important for your images).

This approach should give you a smaller backup with faster performance.

*2. Splitting Backups into Parts*
If you're concerned about running out of storage space, consider splitting
the backup by table or schema, allowing more control over the backup size:

pg_dump -Fc --table=my_large_table -f /path/to/backup/my_large_table.dump
your_database_name
pg_dump -Fc --table=other_table -f /path/to/backup/other_table.dump
your_database_name

This can be helpful when you archive different tables or sections of data.

*3. External Compression Tools*
If you need additional compression, you can pipe the pg_dump output through
an external compression tool like gzip. Since the custom format is already
compressed by default, turn its built-in compression off first:

pg_dump -Fc -Z 0 --blobs your_database_name | gzip > /path/to/backup/file.dump.gz

This keeps the restore-friendly custom format while letting you substitute
stronger compressors such as zstd or xz for smaller backups.

*4. Consider Alternative Backup Methods*
- Explore other backup methods like *pgBackRest* or *WAL-E* (note that
WAL-E is no longer maintained; WAL-G is its successor). These tools are
specifically designed for PostgreSQL backups and offer features like
incremental backups and point-in-time recovery:
pgbackrest --stanza=your-database --type=full --compress-type=zst
--compress-level=6 --process-max=4 backup

- You can use *pg_basebackup* for PostgreSQL backups, but it has
limitations compared to tools like pgBackRest. While pg_basebackup is easy
to use and built-in with PostgreSQL, it is primarily designed for physical
backups (base backups) and doesn't offer as many advanced features such as
incremental backups, sophisticated compression, or parallelism.

However, it does support basic compression and can be used for full backups.

pg_basebackup -D /path/to/backup/directory -F t -z -Z 9 -P -X stream

   - -D: The destination directory for the backup.
   - -F t: Specifies the tar format for the backup, which is required for
   compression.
   - -z: Compresses the output.
   - -Z 9: Compression level (0–9, where 9 is the highest).
   - -P: Shows the progress of the backup.
   - -X stream: Includes the WAL files needed to make the backup consistent
   (important for recovery).

This command takes a full physical backup of the database, compresses the
output with gzip, and stores the backup as a tarball.

*5. Automating Backups*
Since you need monthly backups, I recommend automating this process with a
cron job. For example, you can set this up to run on the 1st of every month
at 2 AM:

0 2 1 * * /usr/bin/pg_dump -Fd -Z 9 -j 4 --blobs -f
/path/to/backup/backup_dir your_database_name

*6. Monitoring Disk Usage & Backup Performance*
Finally, it's important to monitor your available storage. You can either
ensure you have enough free space or consider moving older backups to
external or cloud storage to free up space.
Use monitoring tools to track the performance of your backups. This will
help you identify any potential bottlenecks and optimize the backup process.

I hope this helps you create smaller and quicker backups for your
PostgreSQL database. Let me know if you have any questions or need further
assistance!

Best regards,

Asad Ali


On Thu, Oct 17, 2024 at 12:38 AM Andy Hartman 
wrote:

> I am very new to Postgres and have always worked in the mssql world. I'm
> looking for suggestions on DB backups. I currently have a DB used to store
> Historical information that has images it's currently around 100gig.
>
> I'm looking to take a monthly backup as I archive a month of data at a
> time. I am looking for it to be compressed and have a machine that has
> multiple cpu's and ample memory.
>
> Suggestions on things I can try ?
> I did a pg_dump using these parms
> --format=t --blobs lobarch
>
> it ran my device out of storage:
>
> pg_dump: error: could not write to output file: No space left on device
>
> I have 150gig free on my backup drive... can obviously add more
>
> looking for the quickest and smallest backup file output...
>
> Thanks again for help\suggestions
>
>


Re: Pgsql 16 to 14 using builtin logical

2024-10-25 Thread Asad Ali
Hi Durgamahesh,

Yes, you can replicate a PostgreSQL 16 database to PostgreSQL 14 using
built-in logical replication. PostgreSQL's built-in logical replication
allows for replicating data between different versions of PostgreSQL,
provided the source version is higher than or equal to the target version.
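
For reference, the built-in setup is sketched below (the publication and
subscription names and the connection string are illustrative placeholders):

```sql
-- On the PostgreSQL 16 side (the publisher in this scenario):
CREATE PUBLICATION pub_all FOR ALL TABLES;

-- On the PostgreSQL 14 side (the subscriber); the table definitions
-- must already exist there, since logical replication does not copy DDL:
CREATE SUBSCRIPTION sub_from_16
    CONNECTION 'host=pg16-host dbname=your_database user=repl_user'
    PUBLICATION pub_all;
```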

Regards,
Asad


On Sat, Oct 26, 2024, 8:28 AM Durgamahesh Manne 
wrote:

> Hi
>
> Can we replicate 16 to 14 using builtin logical similarly pglogical?
>
> Regards
> Durga Mahesh
>


Re: Pgsql 16 to 14 using builtin logical

2024-10-26 Thread Asad Ali
In the typical upgrade scenario, replication flows from the lower (older)
version to the higher (newer) version. Here, the source is the older
version (e.g., PostgreSQL 14), and the target is the newer version (e.g.,
PostgreSQL 16).

Replicating from a higher version (PostgreSQL 16) to a lower one
(PostgreSQL 14) is possible but unconventional and not usually done for
upgrades.


Best Regards,
Asad Ali

On Sat, Oct 26, 2024, 9:10 PM Adrian Klaver 
wrote:

> On 10/25/24 23:02, Asad Ali wrote:
> > Hi Durgamahesh,
> >
> > Yes, you can replicate a PostgreSQL 16 database to PostgreSQL 14 using
> > built-in logical replication. PostgreSQL's built-in logical replication
> > allows for replicating data between different versions of PostgreSQL,
> > provided the source version is higher than or equal to the target
> version.
>
> I am not following or maybe it's just your idea of what is the source
> and what is the target. In the common case of upgrading a database to a
> newer version the logical replication would be from the lower source to
> the higher target.
>
>
> >
> > Regards,
> > Asad
> >
> >
> > On Sat, Oct 26, 2024, 8:28 AM Durgamahesh Manne
> > mailto:maheshpostgr...@gmail.com>> wrote:
> >
> > Hi
> >
> > Can we replicate 16 to 14 using builtin logical similarly pglogical?
> >
> > Regards
> > Durga Mahesh
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Pgsql 16 to 14 using builtin logical

2024-10-26 Thread Asad Ali
I missed mentioning this in my last reply.

On Sat, Oct 26, 2024, 10:51 PM Adrian Klaver 
wrote:

> On 10/26/24 10:06, Asad Ali wrote:
> >
> > In the typical upgrade scenario, replication flows from the lower
> > (older) version to the higher (newer) version. Here, the source is the
> > older version (e.g., PostgreSQL 14), and the target is the newer version
> > (e.g., PostgreSQL 16).
>
> So how does that fit with your statement?:
>
> "PostgreSQL's built-in logical replication allows for replicating data
> between different versions of PostgreSQL, provided the source version is
> higher than or equal to the target version."
>
> >
> > Replicating from a higher version (PostgreSQL 16) to a lower one
> > (PostgreSQL 14) is possible but unconventional and not usually done for
> > upgrades.
> >
> >
> > Best Regards,
> > Asad Ali
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Customize psql prompt to show current_role

2024-09-23 Thread Asad Ali
Hi Dominique,

There is no direct prompt escape sequence like %n for displaying the
current_role in the psql prompt. However, you can work around this by
capturing the role into a psql variable with \gset and then interpolating
that variable into the prompt with the %:name: escape.

Here's how you can do it. Edit your .psqlrc file to include:

-- Capture the current role into the psql variable :curr_role
SELECT current_role AS curr_role \gset

-- Define a custom prompt with both the session user and the captured role
\set PROMPT1 '%n@%/ (%:curr_role:)=%# '

Start a new psql session (or run the two commands manually), and your
prompt will show the session user and the current role.

One caveat: the variable is a snapshot, not a live value. After SET ROLE,
the prompt keeps showing the old role until you re-run the SELECT ...
\gset. Also note that the %`command` prompt escape runs a shell command,
not SQL, so it cannot be used to query current_role directly.

I hope this helps!

Best regards,
Asad Ali


On Mon, Sep 23, 2024 at 2:31 PM Dominique Devienne 
wrote:

> Hi. I've successfully customized my psql PROMPT1,
> using %n for session_user, but I'd like to see
> current_role as well. And I can't seem to find a way.
>
> I didn't find a direct \x for it.
> I didn't find a %'X' variable for it.
> I didn't find a command to %`X` either.
> (and X = `select current_role` does not work).
>
> Surely there's a way, no? Thanks, --DD
>
>
>