PostgreSQL in-transit compression for a client connection

2023-04-27 Thread Tushar Takate
Hi Team,

Does PostgreSQL support in-transit compression for a client connection?
*If yes*, then please help me with the below:

   1. What are the different methods?
   2. How to enable/use it?



Thanks & Regards,
Tushar K Takate.


Re: PostgreSQL in-transit compression for a client connection

2023-04-27 Thread Daniel Gustafsson
> On 27 Apr 2023, at 11:18, Tushar Takate  wrote:

> Does PostgreSQL support in-transit compression for a client connection?

No.  Earlier versions supported SSL compression for encrypted connections, but
that rarely worked since it was disabled in the vast majority of OpenSSL
installations.  Patches have been proposed to add compression to libpq, but
nothing has been committed as of yet.

--
Daniel Gustafsson





Re: PostgreSQL in-transit compression for a client connection

2023-04-27 Thread Laurenz Albe
On Thu, 2023-04-27 at 14:48 +0530, Tushar Takate wrote:
> Does PostgreSQL support in-transit compression for a client connection?

No, not any more.  There used to be compression via SSL, but that was
removed for security reasons, and because most binary distributions of
OpenSSL didn't support it anyway.

Yours,
Laurenz Albe




Re: PostgreSQL in-transit compression for a client connection

2023-04-27 Thread Dominique Devienne
On Thu, Apr 27, 2023 at 11:24 AM Laurenz Albe wrote:

> On Thu, 2023-04-27 at 14:48 +0530, Tushar Takate wrote:
> > Does PostgreSQL support in-transit compression for a client connection?
>
> No, not any more.
>

On a related but different subject: as someone who must store ZLIB (from ZIP
files) and sometimes LZ4 compressed `bytea` values, I often find it's a shame
that I have to decompress them, send them over the wire uncompressed, and have
the PostgreSQL backend recompress them when TOAST'ed. That's a waste of CPU
and IO bandwidth...

I wish there were a way to tell the backend, via libpq and the v3 (or later)
protocol: here's the XYZ-compressed value, with this uncompressed size and
checksum (depending on the format used / expected); skip the
decompression/re-compression and the fatter bandwidth, and store it as-is (in
the usual 2K TOAST chunks).

I know this is unlikely to happen, for several reasons. Still, I thought
I'd throw it out there.

PS: BTW, in my testing, on-the-wire compression is rarely beneficial IMHO. I
tested the break-even bandwidth point in the (industry-specific) client-server
protocol I worked on, which optionally supports compression, and those
bandwidths were quite low. The CPU cost of ZLib (~4x compression) and even the
faster LZ4 (~2x compression), plus decompression at the other end, is high
enough that you need quite low bandwidth to recoup it on IO. FWIW.
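The break-even point described above can be sketched with back-of-the-envelope
arithmetic (all figures below are illustrative assumptions, not measurements):
compression only pays off when the link bandwidth is below
(raw_bytes − compressed_bytes) / (compress_time + decompress_time).

```sql
-- Break-even bandwidth for on-the-wire compression (assumed numbers):
-- 100 MB payload, ~2x LZ4-style compression, 0.20 s to compress,
-- 0.05 s to decompress.  Compression wins only below ~200 MB/s.
SELECT (100e6 - 50e6) / (0.20 + 0.05) AS breakeven_bytes_per_sec;
```

With slower ZLib-style timings the break-even bandwidth drops further, which
matches the observation that compression rarely pays on fast links.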


Re: PostgreSQL in-transit compression for a client connection

2023-04-27 Thread Laurenz Albe
On Thu, 2023-04-27 at 11:44 +0200, Dominique Devienne wrote:
> as someone who must store ZLIB (from ZIP files) and sometimes LZ4 compressed
> `bytea` values, I often find it's a shame that I have to decompress them,
> send them over the wire uncompressed, to have the PostgreSQL backend
> recompress them when TOAST'ed. That's a waste of CPU and IO bandwidth...

That's not what you were looking for, but why not store the compressed data
in the database (after SET STORAGE EXTERNAL on the column) and decompress it
on the client side after you have received it?
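The suggestion boils down to one ALTER TABLE per bytea column (the table and
column names here are hypothetical):

```sql
-- Hypothetical table storing pre-compressed payloads:
CREATE TABLE blobs (id bigint PRIMARY KEY, payload bytea);

-- EXTERNAL keeps out-of-line TOAST storage but disables TOAST's own
-- compression, so already-compressed bytea is stored as-is:
ALTER TABLE blobs ALTER COLUMN payload SET STORAGE EXTERNAL;
```

The client can then insert the zlib/LZ4 bytes directly and decompress after
retrieval, avoiding the double compression on the server side.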

Yours,
Laurenz Albe




Re:Re: psql:t_mstr.sql:994: ERROR: function to_char(numeric) does not exist

2023-04-27 Thread gzh
Thank you very much for your reply.



I did the following two tests and found that the return values of
pg_catalog.date and oracle.date are inconsistent.

① the function was created with return type pg_catalog.date

---
CREATE OR REPLACE FUNCTION to_date(str text) RETURNS pg_catalog.date AS $$
SELECT $1::pg_catalog.date; $$ LANGUAGE sql IMMUTABLE STRICT;

COMMENT ON FUNCTION public.to_date(text) IS 'Convert string to date';
---

The execution result is as follows:

postgres=# select TO_DATE(TO_CHAR(localtimestamp, 'YYYY/MM/DD')) as localdate,
TO_DATE(TO_CHAR(current_timestamp, 'YYYY/MM/DD')) as currentdate;

 localdate  | currentdate
------------+-------------
 2023-04-27 | 2023-04-27

② the function was created with return type oracle.date

---
CREATE OR REPLACE FUNCTION to_date(str text) RETURNS oracle.date AS $$
SELECT $1::oracle.date; $$ LANGUAGE sql IMMUTABLE STRICT;

COMMENT ON FUNCTION public.to_date(text) IS 'Convert string to date';
---

The execution result is as follows:

postgres=# select TO_DATE(TO_CHAR(localtimestamp, 'YYYY/MM/DD')) as localdate,
TO_DATE(TO_CHAR(current_timestamp, 'YYYY/MM/DD')) as currentdate;

      localdate      |     currentdate
---------------------+---------------------
 2023-04-27 00:00:00 | 2023-04-27 00:00:00

When the return type is set to oracle.date, the date value in the SQL
execution result includes hours, minutes, and seconds.

Why is there such a difference, and how can I solve it?


At 2023-04-25 20:53:09, "Erik Wienhold"  wrote:
>> On 25/04/2023 13:34 CEST gzh  wrote:
>>
>> >The solution is the same whether you upgrade or not: you need
>> >to adjust your search_path to include the "oracle" schema,
>> >or else explicitly qualify references to orafce functions.
>> Thank you very much for your help.
>>
>> To use the to_date functions of Orafce 3.0.1, we created the following
>> to_date function in the public schema of the old database.
>>
>> -
>> CREATE OR REPLACE FUNCTION to_date(str text) RETURNS date AS $$ SELECT 
>> $1::date; $$ LANGUAGE sql IMMUTABLE STRICT; COMMENT ON FUNCTION 
>> public.to_date(text) IS 'Convert string to date';
>> -
>>
>> To avoid using a to_date function with the same name and parameter in the
>> pg_catalog schema first, the search_path of the old database is set as
>> follows:
>>
>> "$user", public, pg_catalog
>>
>> Make sure that public is searched before pg_catalog.
>> After the database is upgraded, in order to solve the changes in Oracle
>> 3.24, we have added oracle schema to the search_path, as shown below:
>>
>> "$user", public, oracle, pg_catalog
>>
>> The following error occurred when I ran my application.
>>
>> 42P13:ERROR:42P13: return type mismatch in function declared to return
>> pg_catalog.date
>>
>> When I put the oracle schema at the end of the search_path, the problem was
>> solved.
>> The search_path settings without problems are as follows:
>>
>> "$user", public, pg_catalog, oracle
>>
>> Why does it report an error when i put oracle between public and pg_catalog?
>
>When you created function to_date(text) your search_path was probably
>
>   "$user", public, pg_catalog
>
>Thereby the function was created with return type pg_catalog.date and without
>a search_path setting.
>
>The cast to date in the function body, however, is unqualified and thus relies
>on the session search_path.  When adding oracle to the session search_path
>before pg_catalog, the cast will be to oracle.date (orafce defines its own
>date type) instead of pg_catalog.date.  The function return type, however, is
>still declared as pg_catalog.date.
>
>To fix this create the function with an explicit search_path, i.e.
>
>   CREATE FUNCTION to_date(text)
> RETURNS oracle.date
> SET search_path = oracle
> ...
>
>Or write the cast as $1::oracle.date to not rely on the search_path at all.
>
>--
>Erik
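Putting the two suggestions from the quoted message together, the corrected
function might look like this (a sketch; either the SET clause or the
schema-qualified cast alone should be enough):

```sql
CREATE OR REPLACE FUNCTION public.to_date(str text)
RETURNS oracle.date
LANGUAGE sql IMMUTABLE STRICT
SET search_path = oracle            -- pin the search_path inside the function
AS $$ SELECT $1::oracle.date; $$;   -- qualified cast: no search_path reliance
```

With the return type, cast, and search_path all pinned to the oracle schema,
the function behaves the same regardless of the session's search_path.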


Differential Backups in Windows server

2023-04-27 Thread Rajmohan Masa
Hi Team,
Is it possible to take a differential backup in *Windows Server*? If
possible, please explain clearly.
I tried different scenarios but I'm unable to take a *diff backup* of
the PostgreSQL database in *Windows Server*.

*Much appreciated for any response.*


*Thank you,*
*Rajamohan M.*


Re: Differential Backups in Windows server

2023-04-27 Thread Ravi Krishna
PG does not have a concept of differential backup since it does not track
block-level changes. pgBackRest has implemented differential backup using the
timestamp of the last update in data files. Not sure whether it works on
Windows.

--
Sent from phone.





Re: Differential Backups in Windows server

2023-04-27 Thread Erik Wienhold
> On 27/04/2023 13:24 CEST Rajmohan Masa  wrote:
>
> Is it possible to take differential Backup in Windows Server? If possible
> please explain clearly?

Not to my knowledge.  Postgres itself only allows full backups via pg_basebackup
or pg_dumpall.

Barman[0] allows differential backup via rsync+ssh which is, however,
not supported on Windows.  Barman supports Windows with pg_basebackup.

pgBackRest[1] and pg_rman[2] also provide differential backup but neither one
supports Windows.

I found SQLBackupAndFTP when searching for a Windows solution.  It advertises
support for differential backups, but that's possible with SQL Server
only. [3]

I went with Barman to have PITR on that one Windows server I have to manage
and accept that it only allows full backups.  My Linux Postgres clusters are
covered by rsync+ssh though.

> I tried with different scenarios but I'm unable to take Diff Backup of the
> postgresql database in Windows server.

What have you tried?

[0] https://pgbarman.org
[1] https://pgbackrest.org/
[2] https://github.com/ossc-db/pg_rman
[3] https://sqlbackupandftp.com/features

--
Erik




Re: Re:Re: psql:t_mstr.sql:994: ERROR: function to_char(numeric) does not exist

2023-04-27 Thread Erik Wienhold
> On 27/04/2023 13:20 CEST gzh  wrote:
>
> When the return type is set to oracle.date, there are hours, minutes, and
> seconds of the date value in the SQL execution result.
> Why is there such a difference and how to solve it?

orafce defines oracle.date as timestamp(0) [0] because Oracle's DATE type has
a precision of one second [1].  That's the point of orafce: to provide Oracle
compatibility.

You can cast oracle.date to pg_catalog.date but then you're in Postgres
territory again.  Depends on what you want to achieve.  If it's just formatting
use oracle.to_char:

SELECT oracle.to_char('2023-04-27'::oracle.date, 'YYYY-MM-DD');

[0] https://github.com/orafce/orafce/blob/VERSION_3_24_4/orafce--3.24.sql#L343
[1] https://oracle-base.com/articles/misc/oracle-dates-timestamps-and-intervals#date

--
Erik




Re: psql:t_mstr.sql:994: ERROR: function to_char(numeric) does not exist

2023-04-27 Thread Tom Lane
gzh   writes:
> I did the following two tests and found that the return value of 
> pg_catalog.date and oracle.date are inconsistent.

Yeah ... that's pretty much the point.  Oracle uses the name "date"
for a data type that Postgres (and the SQL standard) calls a "timestamp".
That's very ancient on their side and I doubt they'll ever change it.

If you're porting a bunch of code written for Oracle to Postgres,
you'd have to run around and change every occurrence of "date" to
"timestamp" ... unless you install orafce, in which case you can
rely on this alias type that orafce creates.  But you do then have
two types named "date" in the system, so you have to be careful
about search_path settings or you'll get more confusion than it's
worth.

regards, tom lane




Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Michael Xu
Hi,

By default, Postgres accepts double quotes around a schema name in a query,
e.g. select * from "ads"."MyTableName". In our env, it throws
42P01: relation "ads.MyTableName" does not exist. It is okay if there are no
double quotes around the schema name: select * from ads."MyTableName".

Is there a setting at the server or database level to change this back to the
default, allowing double quotes around the schema name?

We're using PostgreSQL 13.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC)
8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit


Thanks!


Michael


Re: Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Tom Lane
Michael Xu  writes:
> By default, pgsql accepts double quotes around schema's name in a query,
> e.g. select * from "ads"."MyTableName". In our env, it throws
> 42P01:relation "ads.MyTableName" does not exist.

Works for me:

regression=# create schema ads;
CREATE SCHEMA
regression=# create table "ads"."MyTableName" (f1 int);
CREATE TABLE
regression=# select * from "ads"."MyTableName";
 f1 
----
(0 rows)

I'm wondering a bit about invisible white space in your schema
name; but without a self-contained test case we can do no more
than guess.

regards, tom lane




Re: Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Christophe Pettus



> On Apr 27, 2023, at 12:40, Michael Xu  wrote:
> In our env, it throws 42P01:relation "ads.MyTableName" does not exist.

The function of double quotes in SQL is to allow you to include characters that
would otherwise not be legal in an identifier (as well as making the identifier
case-sensitive).  So, you are telling the database that there is a table named
"ads.MyTableName", with the "." included as part of the identifier.
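A quick demonstration of the difference (the schema and table names mirror the
report):

```sql
CREATE SCHEMA ads;

-- Two very different objects:
CREATE TABLE "ads"."MyTableName" (f1 int);  -- table MyTableName in schema ads
CREATE TABLE "ads.MyTableName"   (f1 int);  -- table literally named
                                            -- "ads.MyTableName" in the
                                            -- current schema

SELECT * FROM "ads"."MyTableName";  -- resolves to the first table
SELECT * FROM "ads.MyTableName";    -- resolves to the second one
```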



Re: Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Adrian Klaver

On 4/27/23 12:40, Michael Xu wrote:

Hi,

By default, pgsql accepts double quotes around schema's name in a query, 
e.g. select * from "ads"."MyTableName". In our env, it throws 
42P01:relation "ads.MyTableName" does not exist. It is okay if no double 
quote around schema name, select * from ads."MyTableName".


Is there settings in pssql server or db level to change it back to its 
default to allow double quotes around schema.


To repeat what I asked on SO:

https://stackoverflow.com/questions/76123696/postgres-double-quote-around-schema-name-cause-42p01relation-schemana-tablen

Works here.

1) Are you sure whatever client you are using is not doing 
"ads.MyTableName"?


2) Not sure if this is a copy and paste thing but the quotes look like 
typographical quotes, not straight quotes.





We're using PostgreSQL 13.7 on x86_64-pc-linux-gnu, compiled by gcc 
(GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit



Thanks!


Michael




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Rob Sargent


> On Apr 27, 2023, at 12:40 PM, Michael Xu  wrote:
> 
> 
> Hi,
> 
> By default, pgsql accepts double quotes around schema's name in a query, e.g. 
> select * from "ads"."MyTableName". In our env, it throws 42P01:relation 
> "ads.MyTableName" does not exist. It is okay if no double quote around schema 
> name, select * from ads."MyTableName". 
> 
> Is there settings in pssql server or db level to change it back to its 
> default to allow double quotes around schema. 
> 
> We're using PostgreSQL 13.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 
> 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit 
> 
> Thanks!
> 
> Michael
> 

Any chance the actual name of the schema is not all lowercase ads?
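One way to check what the schema and table are actually named, including any
case or whitespace surprises (a sketch; adjust the name being searched for):

```sql
-- List every table whose name case-insensitively matches the one expected:
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE lower(c.relname) = lower('MyTableName');
```

Comparing the returned names byte-for-byte against what the query uses would
reveal a mixed-case schema name or invisible whitespace.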




Re: psql:t_mstr.sql:994: ERROR: function to_char(numeric) does not exist

2023-04-27 Thread gzh



>If you're porting a bunch of code written for Oracle to Postgres,
>you'd have to run around and change every occurrence of "date" to
>"timestamp" ... unless you install orafce, in which case you can
>rely on this alias type that orafce creates.  But you do then have
>two types named "date" in the system, so you have to be careful
>about search_path settings or you'll get more confusion than it's
>worth.

When I put the oracle schema behind the pg_catalog schema in the search_path,
even if I don't make any changes to the to_date(text) function in the public
schema, the application behaves the same as with the old version of the
database (PostgreSQL 12.10 and orafce 3.15).

Am I right in understanding that, when the pg_catalog schema comes before the
oracle schema, an unqualified "date" type in any function resolves to
pg_catalog.date first, so setting the search_path as below is also a solution?

postgres=# show search_path;
             search_path
--------------------------------------
 "$user", public, pg_catalog, oracle

At 2023-04-27 22:11:22, "Tom Lane"  wrote:
>gzh   writes:
>> I did the following two tests and found that the return value of 
>> pg_catalog.date and oracle.date are inconsistent.
>
>Yeah ... that's pretty much the point.  Oracle uses the name "date"
>for a data type that Postgres (and the SQL standard) calls a "timestamp".
>That's very ancient on their side and I doubt they'll ever change it.
>
>If you're porting a bunch of code written for Oracle to Postgres,
>you'd have to run around and change every occurrence of "date" to
>"timestamp" ... unless you install orafce, in which case you can
>rely on this alias type that orafce creates.  But you do then have
>two types named "date" in the system, so you have to be careful
>about search_path settings or you'll get more confusion than it's
>worth.
>
>   regards, tom lane


Re: Postgres query doesn't accept double quote around schema name in query statement

2023-04-27 Thread Adrian Klaver

On 4/27/23 17:58, Michael Xu wrote:

Please reply to list also.

Ccing list.

1) Are you sure whatever client you are using is not doing 
"ads.MyTableName"?


Confirm, it is not ads.MyTableName.


1) What client(and it's version) are you using?

2) What I asked was whether the schema and table name were being quoted 
together as "ads.MyTableName"?

Check the Postgres log to see what is actually hitting the database.

3) What is the database encoding?




We also ran the query in pgAdmin against similar tables under the default
public schema and got the same error.

(*MyTableName* is a table name)

select * from "public"."MyTableName", the same error.
select * from public."MyTableName", works.


2) Not sure if this is a copy and paste thing but the quotes look like
typographical quotes, not straight quotes.

sorry for the confusion, it is copied/pasted from OneNote which may have 
brought invalid quotes, but the quotes used in pgAdmin are correct.



On Thu, Apr 27, 2023 at 12:46 PM Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 4/27/23 12:40, Michael Xu wrote:
 > Hi,
 >
 > By default, pgsql accepts double quotes around schema's name in a
query,
 > e.g. select * from "ads"."MyTableName". In our env, it throws
 > 42P01:relation "ads.MyTableName" does not exist. It is okay if no
double
 > quote around schema name, select * from ads."MyTableName".
 >
 > Is there settings in pssql server or db level to change it back
to its
 > default to allow double quotes around schema.

To repeat what I asked on SO:


https://stackoverflow.com/questions/76123696/postgres-double-quote-around-schema-name-cause-42p01relation-schemana-tablen
 


Works here.

1) Are you sure whatever client you are using is not doing
"ads.MyTableName"?

2) Not sure if this is a copy and paste thing but the quotes look like
typographical quotes, not straight quotes.


 >
 > We're using PostgreSQL 13.7 on x86_64-pc-linux-gnu, compiled by gcc
 > (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit
 >
 >
 > Thanks!
 >
 >
 > Michael
 >
 >

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





pg_class char type should be "char"?

2023-04-27 Thread jian he
Hi,
https://www.postgresql.org/docs/current/catalog-pg-class.html

> relkind char
> relpersistence char
> relreplident char


Shouldn't the type of these three columns be "char"?
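For context: these pg_class columns are indeed of the internal one-byte type
"char" (with the quotes), which is distinct from SQL char(1). This is easy to
confirm from psql:

```sql
-- pg_typeof shows the catalog column's actual type:
SELECT pg_typeof(relkind) FROM pg_class LIMIT 1;
-- pg_typeof
-- -----------
--  "char"
```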