AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread [Quipsy] Markus Karg
I am giving up. While even `SELECT current_schemas(true)` correctly prints 
`xxx, pg_catalog`, it still uses the original bit type. This is completely NOT 
as described in the documentation, where it is clearly stated that pg_catalog 
is only searched first implicitly if it is NOT named in the search path. It 
seems it is simply impossible to run this application on PostgreSQL, and we 
have to stick with a different RDBMS. Very sad.
-Markus


-Original Message-
From: Erik Wienhold  
Sent: Tuesday, 15 August 2023 16:28
To: [Quipsy] Markus Karg ; pgsql-general@lists.postgresql.org
Subject: Re: AW: Cast INTEGER to BIT confusion

> On 15/08/2023 14:02 CEST [Quipsy] Markus Karg  wrote:
>
> I just tried out your proposal on PostgreSQL 15.3 and this is the result:
>
> ERROR:  column "c" is of type bit but expression is of type integer
> LINE 5:   INSERT INTO t VALUES (1);
>                                 ^
> HINT:  You will need to rewrite or cast the expression.
>
> Apparently the search path is ignored?!
>
> -Original Message-
> From: Erik Wienhold 
> Sent: Tuesday, 15 August 2023 13:48
> To: [Quipsy] Markus Karg ; 
> pgsql-general@lists.postgresql.org
> Subject: Re: Cast INTEGER to BIT confusion
>
> You could create a custom domain if you're only interested in values 0 
> and 1 and don't use bit string functions.  The search path must be 
> changed so that domain bit overrides pg_catalog.bit:
> 
>   =# CREATE SCHEMA xxx;
>   =# CREATE DOMAIN xxx.bit AS int;
>   =# SET search_path = xxx, pg_catalog;
>   =# CREATE TABLE t (c bit);
>   =# INSERT INTO t VALUES (1);
>   INSERT 0 1
>
> But I would do that only if the third-party code cannot be tweaked 
> because the custom domain could be confusing.  It's also prone to 
> errors as it relies on a specific search path order.  Also make sure 
> that regular users cannot create objects in schema xxx that would override 
> objects in pg_catalog.

Hmm, I thought that Postgres resolves all types through the search path, but 
apparently that is not the case for built-in types.  I never used this to 
override built-in types so this is a surprise to me.  (And obviously I haven't 
tested the search path feature before posting.)

Neither [1] nor [2] mentions that special (?) case or whether there's a 
distinction between built-in types and user-defined types.  The USAGE privilege 
is required according to [2] but I was testing as superuser anyway.

[1] https://www.postgresql.org/docs/15/ddl-schemas.html
[2] 
https://www.postgresql.org/docs/15/runtime-config-client.html#GUC-SEARCH-PATH

--
Erik


Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-17 Thread rihad
Hi, all. After calling pg_stat_reset, all statistics used by autovacuum 
were zeroed and started accumulating from scratch. Some tables get acted 
upon properly, some don't.
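
For reference, the two computed "left" columns in the query below measure the
distance to the autovacuum trigger points. Per the documentation, a table is
due for autovacuum roughly when

    n_dead_tup > autovacuum_vacuum_threshold
                 + autovacuum_vacuum_scale_factor * n_live_tup

and due for autoanalyze when

    n_mod_since_analyze > autovacuum_analyze_threshold
                          + autovacuum_analyze_scale_factor * n_live_tup

so a large negative "left" value means the table is far past its trigger
point, yet last_autovacuum/last_autoanalyze stay empty.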



foo=> select relname, n_live_tup, n_dead_tup,
        cast(round(n_live_tup * current_setting('autovacuum_vacuum_scale_factor')::float
                   + current_setting('autovacuum_vacuum_threshold')::int) as int) - n_dead_tup as left,
        n_mod_since_analyze,
        cast(round(n_live_tup * current_setting('autovacuum_analyze_scale_factor')::float
                   + current_setting('autovacuum_analyze_threshold')::int) as int) - n_mod_since_analyze as left,
        n_ins_since_vacuum, last_autovacuum, last_autoanalyze
      from pg_stat_all_tables
      where n_live_tup > 0
        and n_dead_tup >= current_setting('autovacuum_vacuum_threshold')::int
      order by 8 nulls first, 4;

    relname    | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |  left   | n_ins_since_vacuum |        last_autovacuum        |       last_autoanalyze
---------------+------------+------------+----------+---------------------+---------+--------------------+-------------------------------+-------------------------------
 fooo          |      32781 |     240663 |  -234057 |              513265 | -509937 |             270291 |                               |
 fooo          |         40 |      24891 |   -24833 |               49822 |  -49768 |              24931 |                               |
 f             |         46 |      18991 |   -18932 |               19099 |  -19044 |                 46 |                               |
 fo            |          1 |      12687 |   -12637 |               40795 |  -40745 |                  1 |                               |
 fo            |       2393 |          5 |   -10586 |              137599 | -137310 |               2393 |                               |
 fo            |       9465 |      11919 |    -9976 |              352888 | -351892 |               9466 |                               |
 fooo          |         26 |       2558 |    -2503 |                 188 |    -135 |               2584 |                               |
 user_sessions |        118 |       1231 |    -1157 |               19114 |  -19052 |                118 |                               |
 fo            |         32 |        562 |     -506 |                 226 |    -173 |                594 |                               |
 fooo          |         53 |        537 |     -476 |                 644 |    -589 |                 53 |                               |
 fo            |        327 |        524 |     -409 |                 804 |    -721 |                520 |                               |
 f             |         46 |        104 |      -45 |                 457 |    -402 |                183 |                               |
 foo           |         34 |         93 |      -36 |                 158 |    -105 |                 34 |                               |
 f             |         47 |         95 |      -36 |                 364 |    -309 |                 47 |                               |
 fo            |         84 |         91 |      -24 |                 177 |    -119 |                 84 |                               |
 f             |  290504401 |    9540832 | 48560098 |            26663449 | 2387041 |            8319194 | 2023-08-17 08:03:19.159161+00 | 2023-08-17 07:43:54.982375+00
 fo            |      43449 |       3823 |     4917 |                4190 |     205 |                377 | 2023-08-17 08:31:14.5573+00   | 2023-08-17 08:31:15.19454+00
 fo            |       3913 |        715 |      118 |                 200 |     241 |                  0 | 2023-08-17 08:44:04.721272+00 | 2023-08-17 08:47:04.875214+00
 f             |         73 |         63 |        2 |                  31 |      26 |                 35 | 2023-08-17 08:45:05.115824+00 | 2023-08-17 08:47:05.032251+00
 f             |     790249 |     126240 |    31860 |                4149 |   74926 |             119413 | 2023-08-17 08:46:10.625275+00 | 2023-08-17 08:49:04.867481+00
 pg_shdepend   |       1885 |        286 |      141 |                 116 |     122 |                270 | 2023-08-17 08:47:03.638639+00 | 2023-08-17 08:48:42.903096+00
 pg_index      |        993 |         79 |      170 |                  10 |     139 |                 72 | 2023-08-17 08:48:03.67267+00  | 2023-08-17 08:49:03.723851+00
 pg_depend     |       9779 |       1027 |      979 |                 130 |     898 |                923 | 2023-08-17 08:48:03.743417+00 | 2023-08-17 08:49:03.740913+00
 foo           |      43699 |       2352 |     6438 |                3527 |     893 |               1175 | 2023-08-17 08:48:03.84116+

Base files compatibility between PG11 and PG15

2023-08-17 Thread Abraham, Danny
Hi,

I have a database on Aurora@RDS.
It used V11.9 and was then upgraded automatically by AWS to V15.2. AWS states 
that it relies on compatibility provided by the PG community.
We now experience problems with the data.
Has anyone met similar issues? Compatibility of data files?

Thanks

Danny




Sv: Base files compatibility between PG11 and PG15

2023-08-17 Thread Andreas Joseph Krogh


On Thursday, 17 August 2023 at 11:32:47, Abraham, Danny <
danny_abra...@bmc.com > wrote:
Hi,

I have a database on Aurora@RDS.
It used V11.9 and was then upgraded automatically by AWS to V15.2. AWS states
that it relies on compatibility provided by the PG community.
We now experience problems with the data.
Has anyone met similar issues? Compatibility of data files?
The data-files are nowhere near compatible between major-versions, as stated 
clearly in all release notes:

https://www.postgresql.org/docs/release/12.0/
https://www.postgresql.org/docs/release/13.0/
https://www.postgresql.org/docs/release/14.0/
https://www.postgresql.org/docs/release/15.0/

They all state the same:

“A dump/restore using pg_dumpall or use of pg_upgrade or logical replication 
is required for those wishing to migrate data from any previous release.”
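
For community Postgres, the supported in-place path is pg_upgrade; a rough
sketch only (the paths and versions below are placeholders, and on a managed
service like Aurora/RDS, AWS runs the equivalent internally):

pg_upgrade \
  --old-bindir  /usr/lib/postgresql/11/bin \
  --new-bindir  /usr/lib/postgresql/15/bin \
  --old-datadir /var/lib/postgresql/11/data \
  --new-datadir /var/lib/postgresql/15/data \
  --check   # dry run first; rerun without --check to perform the upgrade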






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


Re: Cast INTEGER to BIT confusion

2023-08-17 Thread rob stone
Hello,



On Tue, 2023-08-15 at 08:49 +, [Quipsy] Markus Karg wrote:
> 
> 
> 
> Hello PostgreSQL Community,
>  
> I have 25+ years of experience with some other RDBMS, but I am a
> PostgreSQL starter, so I assume the following is rather a simple
> beginner’s question…:
>  
> I like to store just a single bit but that can be either 1 or 0, so I
> tried to do this:
>  
> CREATE TABLE T (c BIT);
> INSERT INTO T VALUES (1);
> -- I MUST NOT change both lines in any way as these are part of a
> third-party application!
>  
> Unfortunately this tells me:
>  
> column "c" is of type bit but expression is of type integer
>  
> So I logged in as a cluster admin and I tried this:
>  
> CREATE CAST (integer AS bit) WITH INOUT AS IMPLICIT;
>  
> Unfortunately that tells me:
>  
> cast from type integer to type bit already exists
>  
> This is confusing! Apparently PostgreSQL 15.3 comes with the needed
> cast out-of-the-box but does not apply it?
>  
> What is my mistake?
>  
> Thanks!
> -Markus


It works thus:-

postgres 15.4 =# CREATE TABLE T (c BIT);
CREATE TABLE
postgres 15.4 =# insert into T values (B'1');
INSERT 0 1
postgres 15.4 =# select * from t;
 c 
---
 1
(1 row)

See section 9.6 in the doco.

HTH,
Rob





Re: Cast INTEGER to BIT confusion

2023-08-17 Thread Erik Wienhold
> On 17/08/2023 14:57 CEST rob stone  wrote:
>
> It works thus:-
>
> postgres 15.4 =# CREATE TABLE T (c BIT);
> CREATE TABLE
> postgres 15.4 =# insert into T values (B'1');
> INSERT 0 1
> postgres 15.4 =# select * from t;
>  c
> ---
>  1
> (1 row)
>
> See section 9.6 in the doco.

But Markus wrote that he cannot change the CREATE TABLE and INSERT statements.

--
Erik




Re: AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread Tom Lane
"[Quipsy] Markus Karg"  writes:
> I am giving up. While even `SELECT current_schemas(true)` correctly prints 
> `xxx, pg_catalog`, it still uses the original bit type. This is completely NOT 
> as described in the documentation, where it is clearly stated that pg_catalog 
> is only searched first implicitly if it is NOT named in the search path.

That's probably because the grammar uses SystemTypeName (effectively
prepending "pg_catalog.") for any type that has special syntax called
out in the SQL standard.  You could get around that in various ways,
but they all involve changing the way the CREATE TABLE command is
written, because just plain "BIT" is a SQL-mandated special case.
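
To illustrate (a sketch reusing the xxx.bit domain from upthread; a
schema-qualified type name goes through normal catalog lookup instead of the
grammar's special case, but it means editing the DDL, which is off the table
here):

  =# CREATE TABLE t2 (c xxx.bit);
  CREATE TABLE
  =# INSERT INTO t2 VALUES (1);
  INSERT 0 1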

> It seems it is simply impossible to run this application on PostgreSQL, and 
> we have to stick with a different RDBMS. Very sad.

You could leave the type alone and instead fool with the properties of
the cast (see [1]).  As superuser:

regression=# create table t (f1 bit);
CREATE TABLE
regression=# insert into t values (1);
ERROR:  column "f1" is of type bit but expression is of type integer
LINE 1: insert into t values (1);
                              ^
HINT:  You will need to rewrite or cast the expression.
regression=# select * from pg_cast where castsource = 'integer'::regtype and 
casttarget = 'bit'::regtype;
  oid  | castsource | casttarget | castfunc | castcontext | castmethod 
-------+------------+------------+----------+-------------+------------
 10186 |         23 |       1560 |     1683 | e           | f
(1 row)

regression=# update pg_cast set castcontext = 'a' where castsource = 
'integer'::regtype and casttarget = 'bit'::regtype;
UPDATE 1
regression=# select * from pg_cast where castsource = 'integer'::regtype and 
casttarget = 'bit'::regtype;
  oid  | castsource | casttarget | castfunc | castcontext | castmethod 
-------+------------+------------+----------+-------------+------------
 10186 |         23 |       1560 |     1683 | a           | f
(1 row)

regression=# insert into t values (1);
INSERT 0 1

The main disadvantage of this approach is you'd have to remember to
perform that UPDATE in any new database, since pg_dump wouldn't
preserve it for you.

On the whole though I think this project is a lost cause.  If you
are insisting on bug-compatibility with non-SQL-compliant details
of some other DBMS, and you can't adjust the application at all,
there is going to be some new showstopper problem biting you
every day.

regards, tom lane

[1] https://www.postgresql.org/docs/current/catalog-pg-cast.html




Re: Cast INTEGER to BIT confusion

2023-08-17 Thread David G. Johnston
On Tuesday, August 15, 2023, Erik Wienhold  wrote:

> > On 15/08/2023 10:49 CEST [Quipsy] Markus Karg  wrote:
> >
> > Hello PostgreSQL Community,
> >
> > I like to store just a single bit but that can be either 1 or 0, so I
> tried
>
> > You could create a custom domain if you're only interested in values 0 and 1
> > and don't use bit string functions.  The search path must be changed so that
> > domain bit overrides pg_catalog.bit:


The table using the system bit type already exists and thus the OID for the
data type of the stored catalog column is that of the system bit type.
Creating a user bit domain is going to have absolutely zero impact on this
situation.

If you really want to make this work and are willing to risk and deal with
side-effects of manual catalog updates you could maybe make the existing
cast implicit.
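
A minimal sketch of that catalog tweak (superuser only, not preserved by
pg_dump; see also Tom's fuller example elsewhere in this thread):

  UPDATE pg_cast
     SET castcontext = 'i'  -- 'i' = implicit; 'a' (assignment) suffices for INSERT
   WHERE castsource = 'integer'::regtype
     AND casttarget = 'bit'::regtype;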

David J.


Re: AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread Erik Wienhold
> On 17/08/2023 09:31 CEST [Quipsy] Markus Karg  wrote:
>
> I am giving up. While even `SELECT current_schemas(true)` correctly prints
> `xxx, pg_catalog`, it still uses the original bit type. This is completely
> NOT as described in the documentation, where it is clearly stated that
> pg_catalog is only searched first implicitly if it is NOT named in the search path.
> It seems it is simply impossible to run this application on PostgreSQL, and
> we have to stick with a different RDBMS. Very sad.
>
> -Original Message-
> From: Erik Wienhold  
> Sent: Tuesday, 15 August 2023 16:28
> To: [Quipsy] Markus Karg ; pgsql-general@lists.postgresql.org
> Subject: Re: AW: Cast INTEGER to BIT confusion
>
> > On 15/08/2023 14:02 CEST [Quipsy] Markus Karg  wrote:
> >
> > I just tried out your proposal on PostgreSQL 15.3 and this is the result:
> >
> > ERROR:  column "c" is of type bit but expression is of type integer
> > LINE 5:   INSERT INTO t VALUES (1);
> >                                 ^
> > HINT:  You will need to rewrite or cast the expression.
> >
> > Apparently the search path is ignored?!
> >
> > -Original Message-
> > From: Erik Wienhold 
> > Sent: Tuesday, 15 August 2023 13:48
> > To: [Quipsy] Markus Karg ; 
> > pgsql-general@lists.postgresql.org
> > Subject: Re: Cast INTEGER to BIT confusion
> >
> > You could create a custom domain if you're only interested in values 0 
> > and 1 and don't use bit string functions.  The search path must be 
> > changed so that domain bit overrides pg_catalog.bit:
> >
> > =# CREATE SCHEMA xxx;
> > =# CREATE DOMAIN xxx.bit AS int;
> > =# SET search_path = xxx, pg_catalog;
> > =# CREATE TABLE t (c bit);
> > =# INSERT INTO t VALUES (1);
> > INSERT 0 1
> >
> > But I would do that only if the third-party code cannot be tweaked 
> > because the custom domain could be confusing.  It's also prone to 
> > errors as it relies on a specific search path order.  Also make sure 
> > that regular users cannot create objects in schema xxx that would override 
> > objects in pg_catalog.
>
> Hmm, I thought that Postgres resolves all types through the search path, but 
> apparently that is not the case for built-in types.  I never used this to 
> override built-in types so this is a surprise to me.  (And obviously I 
> haven't tested the search path feature before posting.)
>
> Neither [1] nor [2] mentions that special (?) case or whether there's a 
> distinction between built-in types and user-defined types.  The USAGE 
> privilege is required according to [2] but I was testing as superuser anyway.
>
> [1] https://www.postgresql.org/docs/15/ddl-schemas.html
> [2] 
> https://www.postgresql.org/docs/15/runtime-config-client.html#GUC-SEARCH-PATH

It looks like Postgres resolves standard SQL types without considering the
search path.  This could be a feature (perhaps mandated by the SQL standard?)
and not a bug if the rationale here is to avoid shadowing of standard types
with custom types so that type bit, for example, always behaves as the standard
bit type.

In the following script I create several domains in schema xxx: bit and float
match standard SQL type names, domain foo does not conflict with any built-in
type name, and inet is a built-in type but not a standard type.

BEGIN;

CREATE SCHEMA xxx;
GRANT USAGE ON SCHEMA xxx TO current_user;

CREATE DOMAIN xxx.bit AS int;
CREATE DOMAIN xxx.float AS int;
CREATE DOMAIN xxx.foo AS int;
CREATE DOMAIN xxx.inet AS int;
CREATE DOMAIN pg_catalog.foo AS int;
\dD *.(bit|float|foo|inet)

SET LOCAL search_path = xxx, pg_catalog;
SELECT current_schemas(true);

CREATE TABLE public.t (f1 bit, f2 float, f3 inet, f4 foo);

SET LOCAL search_path = '';
\d public.t

ROLLBACK;

We see that table t is created with the standard SQL types instead of our
custom domains.  Only xxx.inet and xxx.foo are resolved according to the search
path.

BEGIN
CREATE SCHEMA
GRANT
CREATE DOMAIN
CREATE DOMAIN
CREATE DOMAIN
CREATE DOMAIN
CREATE DOMAIN
                       List of domains
   Schema   | Name  |  Type   | Collation | Nullable | Default | Check
------------+-------+---------+-----------+----------+---------+-------
 pg_catalog | foo   | integer |           |          |         |
 xxx        | bit   | integer |           |          |         |
 xxx        | float | integer |           |          |         |
 xxx        | foo   | integer |           |          |         |
 xxx        | inet  | integer |           |          |         |
(5 rows)

SET
 current_schemas
--
 {xxx,pg_catalog}
(1 row)

CREATE TABLE
SET
 

Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-17 Thread Sai Teja
Hi Team,

I have also tried PostgreSQL large objects, following this link, to store and
retrieve large files (as bytea is not working):
https://www.postgresql.org/docs/current/largeobjects.html

But even now I am unable to fetch the data all at once from a large object:

select lo_get(oid);

Here I'm getting the same error message.

But if I use select data from pg_largeobject where loid = 49374
then I can fetch the data, but only page-wise (the data is split into rows of
2 KB each).

So how can I fetch the data in a single step, rather than page by page,
without any error?

And I'm just wondering how so many applications store huge amounts of data
(gigabytes)? I know that there is a 1 GB limit for each field, set by
PostgreSQL. If so, how does one deal with these kinds of situations? I would
like to know, so I can handle real-world scenarios.

We need to store and retrieve large content (a huge volume of data).
Currently that is not possible due to the field-size limit set by PostgreSQL.

I would appreciate your insights and suggestions to help me resolve this
issue.


Thanks & Regards,
Sai Teja

On Tue, 15 Aug, 2023, 8:53 am Tom Lane,  wrote:

> Sai Teja  writes:
> > I got to know the field size limit for the bytea datatype column is
> limited
> > to 1 GB in postgreSQL. Then how can we increase this?
>
> You can't.  That limit is wired-in in many ways.  Think about how to
> split your data across multiple table rows.
>
> regards, tom lane
>


Re: AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread Erik Wienhold
> On 17/08/2023 15:18 CEST Tom Lane  wrote:
>
> That's probably because the grammar uses SystemTypeName (effectively
> prepending "pg_catalog.") for any type that has special syntax called
> out in the SQL standard.  You could get around that in various ways,
> but they all involve changing the way the CREATE TABLE command is
> written, because just plain "BIT" is a SQL-mandated special case.

The answer I was looking for.  Thanks Tom.

--
Erik




Aw: Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-17 Thread Karsten Hilbert
 
> I have also tried PostgreSQL large objects, following this link, to store and
> retrieve large files (as bytea is not working):
> https://www.postgresql.org/docs/current/largeobjects.html
>
> But even now I am unable to fetch the data all at once from a large object:
>
> select lo_get(oid);
>
> Here I'm getting the same error message.
>
> But if I use select data from pg_largeobject where loid = 49374
> then I can fetch the data, but only page-wise (the data is split into rows of
> 2 KB each).
>
> So how can I fetch the data in a single step, rather than page by page,
> without any error?
>
> And I'm just wondering how so many applications store huge amounts of data
> (gigabytes)? I know that there is a 1 GB limit for each field, set by
> PostgreSQL. If so, how does one deal with these kinds of situations? I would
> like to know, so I can handle real-world scenarios.



https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md
might be of help

Karsten




Re: Base files compatibility between PG11 and PG15

2023-08-17 Thread Adrian Klaver

On 8/17/23 02:32, Abraham, Danny wrote:

Hi,

I have a database on Aurora@RDS.
It used V11.9 and was then upgraded automatically by AWS to V15.2. AWS states 
that it relies on compatibility provided by the PG community.
We now experience problems with the data.


This needs more information:

1) Describe the upgrade process

2) Describe the data problems.


Has anyone met similar issues? Compatibility of data files?

Thanks

Danny




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Dynamically accessing columns from a row type in a trigger

2023-08-17 Thread Lorusso Domenico
Well, some weeks ago I read an article about that.

The most efficient approach (in PL/pgSQL) is to use hstore.

On a similar topic: in the end I created a group of functions that store, in
an internal table, the data structure of each table that is under the same
generic trigger. I also store the primary key.
The goal of this approach is to compare old and new values easily, discarding
automatic and special (for my project) fields, so I can manage historical
versions, audit information, etc.
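
A minimal sketch of the hstore idea (it assumes the hstore extension is
installed; the table, trigger, and primary-key names are made up):

CREATE EXTENSION IF NOT EXISTS hstore;

CREATE FUNCTION audit_diff() RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
  changed hstore;
  pk_val  text;
BEGIN
  changed := hstore(NEW) - hstore(OLD);  -- keeps only columns whose values differ
  pk_val  := hstore(OLD) -> 'id';        -- dynamic column access, cf. the question below
  RAISE NOTICE 'pk %: changed columns %', pk_val, changed;
  RETURN NEW;
END;
$$;

CREATE TRIGGER audit_diff BEFORE UPDATE ON some_table
  FOR EACH ROW EXECUTE FUNCTION audit_diff();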



On Sat, 12 Aug 2023 at 22:16, Rhys A.D. Stewart <
rhys.stew...@gmail.com> wrote:

> Greetings all,
>
> I am writing a trigger and would like to know how to dynamically
> access a column from the "OLD" variable. pk_col is the column name
> from the table.
>
> I've come up with either doing this:
> EXECUTE format('SELECT $1.%1$I', pk_col) INTO pk_val USING OLD;
>
> which looks a bit excessive, or this
>
> pk_val = to_jsonb(OLD.*)->pk_col
>
> which looks cleaner, but then I am having to incur a little overhead
> by using the to_jsonb function. Ideally, something like this would be
> great:
>
> pk_val = OLD[pk_col]
>
> but evidently we can't subscript ROW types.
>
> Am I missing out on a simpler or more elegant solution?
>
> Rhys
> Peace & Love | Live Long & Prosper
>
>
>

-- 
Domenico L.

to amaze for half an hour, a history book is enough;
I tried to learn the Treccani by heart... [F.d.A.]


Schema renaming cascade

2023-08-17 Thread Lorusso Domenico
Hello guys,
I need to rename a schema, including each reference to it (also in
functions).

I mean:
I have 2 schemas called "schema1" and "schema2".
In schema1 there are tables, composite types, and functions.
Functions call other functions in the same schema.

Also in schema2 there are tables, composite types, and functions.
Plus, many functions and tables refer to objects of schema1.


I need to rename schema1 to schema3, substituting any reference to schema1
with schema3.
Is there a way to do that?

I found this answer that clones a schema (which partially solves the
problem), but does it still work on v15?

In the end, I need to replace schema1 with schema3 in each function
(because tables and types are updated automatically).

Any suggestions?


shared buffers

2023-08-17 Thread Marc Millas
Hi,

to my understanding, it used to be "common sense" to limit shared_buffers to
maybe around 32 GB, due to the resource consumption of managing said cache.

I would like to know if, on a v15 with 256 GB RAM, setting shared_buffers to,
say, 96 GB would benefit large BI queries.
Or, to say it differently, what could be the advantages/disadvantages of such
a setup?

thanks

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: Schema renaming cascade

2023-08-17 Thread Alvaro Herrera
On 2023-Aug-17, Lorusso Domenico wrote:

> Hello guys,
> I need to rename a schema, including each reference to it (also for
> functions)

Maybe you should consider removing schema name references in function
source code, and instead refer to the tables (and other functions, etc)
by their unqualified names, and at the same time attach a "SET
search_path" clause to the function.  That way, when you need to rename
the schema you don't need to change the function source code, just
update its "SET search_path" clause.
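
A minimal sketch of that (the function body and names are made up):

  CREATE FUNCTION row_count() RETURNS bigint
    LANGUAGE plpgsql
    SET search_path = schema1   -- pinned on the function, not in its source
  AS $$ BEGIN RETURN (SELECT count(*) FROM t); END $$;

  -- Renaming then touches no function source code:
  ALTER SCHEMA schema1 RENAME TO schema3;
  ALTER FUNCTION row_count() SET search_path = schema3;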

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/




Re: Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-17 Thread Pavel Stehule
Hi

On Thu, 17 Aug 2023 at 16:48, Karsten Hilbert 
wrote:

>
> Even I used postgreSQL Large Objects by referring this link to store and
> retrieve large files (As bytea not working)
> https://www.postgresql.org/docs/current/largeobjects.html
>
> But even now I am unable to fetch the data at once from large objects
>
> select lo_get(oid);
>
> Here I'm getting the same error message.
>
> But if I use select data from pg_large_object where loid = 49374
> Then I can fetch the data but in page wise (data splitting into rows of
> each size 2KB)
>
> So, here how can I fetch the data at single step rather than page by page
> without any error.
>

SQL functionality is limited to 1 GB.

You should use the \lo_import or \lo_export commands

or special API https://www.postgresql.org/docs/current/lo-interfaces.html
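
A minimal sketch of both routes (the loid 49374 is from upthread, the file
path is made up):

  -- client side in psql: streams to a local file, so the 1 GB limit on
  -- building a single bytea value server-side does not apply
  \lo_export 49374 '/tmp/blob.bin'

  -- or server side, reading in chunks instead of all at once:
  SELECT lo_get(49374, n * 1048576, 1048576)  -- (loid, offset, length), 1 MB per call
  FROM generate_series(0, 99) AS n;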

regards

Pavel


> And I'm just wondering how so many applications store huge amounts of
> data (gigabytes)? I know that there is a 1 GB limit for each field, set by
> PostgreSQL. If so, how does one deal with these kinds of situations? I would
> like to know, so I can handle real-world scenarios.
>
>
>
> https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md
> might be of help
>
> Karsten
>
>
>


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-17 Thread Rob Sargent

On 8/17/23 07:35, Sai Teja wrote:

Hi Team,

I have also tried PostgreSQL large objects, following this link, to store 
and retrieve large files (as bytea is not working):

https://www.postgresql.org/docs/current/largeobjects.html

But even now I am unable to fetch the data all at once from a large object:

select lo_get(oid);

Here I'm getting the same error message.

But if I use select data from pg_largeobject where loid = 49374
then I can fetch the data, but only page-wise (the data is split into rows 
of 2 KB each).


So how can I fetch the data in a single step, rather than page by page, 
without any error?


And I'm just wondering how so many applications store huge amounts of data 
(gigabytes)? I know that there is a 1 GB limit for each field, set by 
PostgreSQL. If so, how does one deal with these kinds of situations? I 
would like to know, so I can handle real-world scenarios.


We need to store and retrieve large content (a huge volume of data). 
Currently that is not possible due to the field-size limit set by 
PostgreSQL.


I would appreciate your insights and suggestions to help me resolve this 
issue.




My first attempt at handling large payloads was to use a Java Selector 
directly in my app.  This worked, but manually chunking the data was 
tricky.  I switched to using Tomcat, which handles large http(s) 
payloads seamlessly.







Looking for PostgreSQL Tuning Workshop Presenter

2023-08-17 Thread Rumpi Gravenstein
Hi,

I am the coordinator for a Cleveland, Ohio user group interested in bringing
in a presenter to do a deep dive on PostgreSQL plans and tuning.  I'm
relatively new to the community, so I don't know whom to ask or who might be
interested. If you know of someone who has deep experience and would be
willing to come to Cleveland, I'd be interested in discussing this
opportunity with them.

Best Regards,

-- 
Rumpi Gravenstein


Re: AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread GF
Erik,
This could be a possible solution, if you can play with search_path...
You could mirror the whole original schema onto another one, with views that
have the original table names.
For all tables that need some special treatment you can then write
instead-of triggers, while all other views remain updatable.

E.g., let's say they gave you a schema like:
create schema original;
create table original.t(f1 bit);
create table original.u(f2 int);

You can mirror it in this way:
create schema mirror;
create view mirror.t as select f1::int from original.t;
create view mirror.u as select * from original.u;
create function mirror.ins_t() returns trigger as $$ begin insert into
original.t(f1) values((new.f1)::bit); return null; end; $$ language plpgsql;
create trigger ins_t instead of insert on mirror.t for each row execute
function mirror.ins_t();
-- And something like that for upd and del...

The user that you use to connect must be granted the rights on the mirror
views and on the original tables that need conversion. Its default search_path
must be altered to the mirror schema (if you cannot already do so on the
connection string):
create user foo;
alter user foo set search_path to mirror;
grant usage on schema original, mirror to foo;
grant insert, update, delete on original.t, mirror.t, mirror.u to foo;  --
you need original.t but not original.u...

And now you can do:
psql -U foo -c "insert into t values(1)" -c "insert into u values(42)"

Best,
Giovanni


Re: AW: AW: Cast INTEGER to BIT confusion

2023-08-17 Thread GF
On Thu, 17 Aug 2023 at 19:09, GF  wrote:

>
> Erik,
> The could be a possible solution, if you could play with search_path...
>
>
  (sorry, Erik, obviously my message was intended for Markus)


Re: Base files compatibility between PG11 and PG15

2023-08-17 Thread Bruce Momjian
On Thu, Aug 17, 2023 at 07:36:20AM -0700, Adrian Klaver wrote:
> On 8/17/23 02:32, Abraham, Danny wrote:
> > Hi,
> > 
> > I have a database on Aurora@RDS.
> > It Used  V11.9, and then upgraded automatically by AWS to V15.2 AWS states 
> > that it relies on compatibility provided by the PG community.
> > We now experience problems with the data.
> 
> This needs more information:
> 
> 1) Describe the upgrade process
> 
> 2) Describe the data problems.

Plus it is Aurora, which is not community Postgres.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




To create or not to create that index

2023-08-17 Thread Stefan Knecht
Hello

Why does this happen?

profile_aggregates=> create index concurrently foo_idx on agg (status,
foots, created_ts);


^CCancel request sent
ERROR:  canceling statement due to user request
profile_aggregates=>
profile_aggregates=>  create index concurrently foo_idx on agg (status,
foots, created_ts);
ERROR:  relation " foo_idx" already exists

Are these operations not atomic ?

Cheers

Stefan


Re: To create or not to create that index

2023-08-17 Thread David G. Johnston
On Thu, Aug 17, 2023 at 8:37 PM Stefan Knecht 
wrote:

> Hello
>
> Why does this happen?
>
> profile_aggregates=> create index concurrently foo_idx on agg (status,
> foots, created_ts);
>
>
> ^CCancel request sent
> ERROR:  canceling statement due to user request
> profile_aggregates=>
> profile_aggregates=>  create index concurrently foo_idx on agg (status,
> foots, created_ts);
> ERROR:  relation " foo_idx" already exists
>
> Are these operations not atomic ?
>
>
No, being atomic would interfere with doing things concurrently.  Per the
docs:

In a concurrent index build, the index is actually entered as an “invalid”
index into the system catalogs in one transaction, then two table scans
occur in two more transactions.
...
If a problem arises while scanning the table, such as a deadlock or a
uniqueness violation in a unique index, the CREATE INDEX command will fail
but leave behind an “invalid” index.

"Problem" includes you forcibly killing it while it is running.

https://www.postgresql.org/docs/current/sql-createindex.html
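
For what it's worth, a sketch of how to spot and clean up such leftovers
(the index name is from your session):

  -- indexes left invalid, e.g. by a cancelled CREATE INDEX CONCURRENTLY
  -- (an index build currently in progress will show up here too)
  SELECT indexrelid::regclass
  FROM pg_index
  WHERE NOT indisvalid;

  DROP INDEX CONCURRENTLY foo_idx;  -- then re-run the CREATE INDEX CONCURRENTLY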

David J.


Re: To create or not to create that index

2023-08-17 Thread Stefan Knecht
But that "invalid" index is being used by queries

On Fri, Aug 18, 2023 at 10:41 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Aug 17, 2023 at 8:37 PM Stefan Knecht 
> wrote:
>
>> Hello
>>
>> Why does this happen?
>>
>> profile_aggregates=> create index concurrently foo_idx on agg (status,
>> foots, created_ts);
>>
>>
>> ^CCancel request sent
>> ERROR:  canceling statement due to user request
>> profile_aggregates=>
>> profile_aggregates=>  create index concurrently foo_idx on agg (status,
>> foots, created_ts);
>> ERROR:  relation " foo_idx" already exists
>>
>> Are these operations not atomic ?
>>
>>
> No, being atomic would interfere with doing things concurrently.  Per the
> docs:
>
> In a concurrent index build, the index is actually entered as an “invalid”
> index into the system catalogs in one transaction, then two table scans
> occur in two more transactions.
> ...
> If a problem arises while scanning the table, such as a deadlock or a
> uniqueness violation in a unique index, the CREATE INDEX command will fail
> but leave behind an “invalid” index.
>
> "Problem" includes you forcibly killing it while it is running.
>
> https://www.postgresql.org/docs/current/sql-createindex.html
>
> David J.
>
>


Re: To create or not to create that index

2023-08-17 Thread David G. Johnston
On Thu, Aug 17, 2023 at 8:43 PM Stefan Knecht 
wrote:

> But that "invalid" index is being used by queries
>
>>
>>
Please don't top-post.

If it is used by queries it isn't invalid and thus its existence shouldn't
be surprising. So I'm not sure what you are saying.

David J.


Re: To create or not to create that index

2023-08-17 Thread Stefan Knecht
Ah, no, it is not. Something else was changed at the same time. Sigh.

Thanks for clarifying David

On Fri, Aug 18, 2023 at 10:42 AM Stefan Knecht 
wrote:

> But that "invalid" index is being used by queries
>
> On Fri, Aug 18, 2023 at 10:41 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Thu, Aug 17, 2023 at 8:37 PM Stefan Knecht 
>> wrote:
>>
>>> Hello
>>>
>>> Why does this happen?
>>>
>>> profile_aggregates=> create index concurrently foo_idx on agg (status,
>>> foots, created_ts);
>>>
>>>
>>> ^CCancel request sent
>>> ERROR:  canceling statement due to user request
>>> profile_aggregates=>
>>> profile_aggregates=>  create index concurrently foo_idx on agg (status,
>>> foots, created_ts);
>>> ERROR:  relation " foo_idx" already exists
>>>
>>> Are these operations not atomic ?
>>>
>>>
>> No, being atomic would interfere with doing things concurrently.  Per the
>> docs:
>>
>> In a concurrent index build, the index is actually entered as an
>> “invalid” index into the system catalogs in one transaction, then two table
>> scans occur in two more transactions.
>> ...
>> If a problem arises while scanning the table, such as a deadlock or a
>> uniqueness violation in a unique index, the CREATE INDEX command will fail
>> but leave behind an “invalid” index.
>>
>> "Problem" includes you forcibly killing it while it is running.
>>
>> https://www.postgresql.org/docs/current/sql-createindex.html
>>
>> David J.
>>
>>


Bryn is retiring. Last day at work Friday 18-Aug-2023

2023-08-17 Thread Bryn Llewellyn
I’m about to hang up my working boots. My last day of employment at Yugabyte, 
Inc (or anywhere at all) is imminent. My bryn-at-yugabyte-dot-com email address 
will stop working at 17:00 US/Pacific, Friday 18-Aug-2023. I’ll unsubscribe 
from this and all other postgresql-dot-org lists a few hours before that 
cut-off time.

Thanks very much to everybody who’s answered my questions. I know that I’ve 
annoyed many of you. But, from my side, the exchanges have been enormously 
helpful. It would have been so nice to have been able to ask my questions 
face-to-face, so that whoever I talked to could see at a glance why I just 
didn’t get something that they said and thought was obvious. But those days are 
long gone now. The Brave New World has totally and irrevocably replaced that 
old regime—albeit somewhat later than Huxley forecast way back in 1931.

Who knows… I might re-subscribe presently using my private email address.

Regards, Bryn Llewellyn