Postgres error

2020-02-17 Thread Loai Abdallatif
Hello Guys,

I have a setup of two PostgreSQL 11 servers (replication using repmgr),
and it works fine, as shown below:
-bash-4.2$ sudo su - postgres -c " /usr/pgsql-11/bin/repmgr -f
/var/lib/pgsql/repmgr/repmgr.conf cluster show"
 ID | Name  | Role    | Status    | Upstream | Location | Priority | Connection string
----+-------+---------+-----------+----------+----------+----------+--------------------------------------
 1  | pgdb1 | standby |   running | pgdb2    | default  | 100      | host=pgdb1 user=repmgr dbname=repmgr
 2  | pgdb2 | primary | * running |          | default  | 100      | host=pgdb2 user=repmgr dbname=repmgr



But when I installed pgpool 4.1 on a third server to load-balance the
traffic, from the command line everything looks OK, as below:

-bash-4.2$ psql -U pgpool  --dbname=postgres --host pgpool -c "show
pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | pgdb1    | 5432 | up     | 0.50      | standby | 0          | false             | 0                 |                   |                        | 2020-02-17 11:03:31
 1       | pgdb2    | 5432 | up     | 0.50      | primary | 3          | true              | 0                 |                   |                        | 2020-02-17 11:03:31





The problem appears when I installed pgpoolAdmin and tried to stop one
node (master or slave): I got the error below on the Postgres server
(not on pgpool):

2020-02-17 10:49:39.118 IST [2051] ERROR:  function pgpool_pgctl(unknown,
unknown) does not exist at character 8
2020-02-17 10:49:39.118 IST [2051] HINT:  No function matches the given
name and argument types. You might need to add explicit type casts.
2020-02-17 10:49:39.118 IST [2051] STATEMENT:  SELECT pgpool_pgctl('stop',
's')

What can I do, please?


Compiling via LLVM and active LTO

2020-02-17 Thread Sapd
Hello,

I want to do some performance measurements and am having problems
compiling Postgres with LTO enabled. I tried it with postgresql 12.0 and
12.2, both yielding the same result. The errors seem to relate to the
language and encoding-conversion functions: the linker simply seems
unable to find them. It also does not matter whether I use thin LTO or
full LLVM LTO.

Normal compilation (without any additional CFLAGS) works.

 System data
Mac OS X Catalina
llvm 8
Apple clang version 11.0.0 (clang-1100.0.33.17)
Target: x86_64-apple-darwin19.3.0

 Configuration
./configure --prefix=../dist CFLAGS="-flto=thin -O3"

 Error message (created by linker)
undef: _pg_mic2ascii
undef: _check_encoding_conversion_args
undef: _pg_ascii2mic
Undefined symbols for architecture x86_64:
  "_pg_mic2ascii", referenced from:
  _mic_to_ascii in lto.o
  "_check_encoding_conversion_args", referenced from:
  _ascii_to_mic in lto.o
  _mic_to_ascii in lto.o
  "_pg_ascii2mic", referenced from:
  _ascii_to_mic in lto.o

undef: _LocalToUtf
undef: _check_encoding_conversion_args
undef: _UtfToLocal
Undefined symbols for architecture x86_64:
  "_LocalToUtf", referenced from:
  _koi8r_to_utf8 in lto.o
  _koi8u_to_utf8 in lto.o
  "_check_encoding_conversion_args", referenced from:
  _utf8_to_koi8r in lto.o
  _koi8r_to_utf8 in lto.o
  _utf8_to_koi8u in lto.o
  _koi8u_to_utf8 in lto.o
  "_UtfToLocal", referenced from:
  _utf8_to_koi8r in lto.o
  _utf8_to_koi8u in lto.o




--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Nick Renders
The problem is that I don't know which column is corrupt. But I found a 
solution: by simply copying the record into another variable, the values 
are parsed and the TOAST errors are thrown.


In case anyone's interested, here's my code, based on an example from 
http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html



DO $f$
DECLARE
    rContent1 record;
    rContent2 record;
    iCounter integer DEFAULT 1;
    iValue integer;
    pTableName varchar := 'f_gsxws_transaction';
    pFieldName varchar := 'gwta_number';
BEGIN
    FOR iValue IN EXECUTE 'SELECT ' || pFieldName || ' FROM '
        || pTableName::regclass || ' ORDER BY ' || pFieldName LOOP
        BEGIN
            EXECUTE 'SELECT * FROM ' || pTableName::regclass
                || ' WHERE ' || pFieldName || ' = $1'
                INTO rContent1
                USING iValue;
            rContent2 := rContent1;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'data for %.% % is corrupt',
                pTableName, pFieldName, iValue;
        END;
        IF iCounter % 10 = 0 THEN
            RAISE NOTICE '% % records checked', iCounter, pTableName;
        END IF;
        iCounter := iCounter + 1;
    END LOOP;
END;
$f$;
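A variant along the lines of Tom's md5() suggestion quoted below would avoid the record copy: hashing the whole-row text forces every TOASTed column to be detoasted, so a broken TOAST pointer raises its error inside that statement. A sketch only, reusing the variable names above and meant to sit inside the same BEGIN/EXCEPTION block:

```sql
-- md5() of the whole-row text detoasts every column of the row,
-- so any corrupt TOAST pointer errors out right here.
EXECUTE 'SELECT md5(t::text) FROM ' || pTableName::regclass
     || ' t WHERE ' || pFieldName || ' = $1'
    USING iValue;
```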


Cheers,

Nick


On 14 Feb 2020, at 16:14, Tom Lane wrote:


"Nick Renders"  writes:
I thought I would write a little PL script that would loop through all
the data and report any inconsistencies. However, I can't get it to work
properly.
...
1) The function has no problem executing the SELECT statement. It is
only when "rcontents" is returned, that the function fails. This is a
problem, because the ultimate goal is to loop through all records and
only return/alert something in case of an error.
2) The function never enters the EXCEPTION clause. Instead, when it hits
the RETURN command, it breaks and shows the same message as in pgAdmin:

missing chunk number 0 for toast value 8289525 in pg_toast_5572299.


I think what's happening there is that the function doesn't try to
dereference the value's TOAST pointer during SELECT INTO.  It just stores
that pointer into a variable, and only sometime later, when the actual
content of the value is demanded, do you see the error raised.

The solution to that is to do something that uses the contents of the
busted column right away while still inside the EXCEPTION block, perhaps
along the lines of "select md5(mycolumn) into local_variable from..."

A close reading of

https://www.postgresql.org/docs/current/storage-toast.html

would probably help you understand what's happening here.

regards, tom lane





Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Nick Renders

Hi Jeremy,

This happened on PostgreSQL v9.6, which crashed 2 weeks ago.
Since then we have upgraded and restored our server, but my example is 
from the older, corrupt database.


Nick


On 15 Feb 2020, at 5:30, Jeremy Schneider wrote:


On Feb 14, 2020, at 04:39, Nick Renders  wrote:

I get the following message:

  ERROR:  missing chunk number 0 for toast value 8289525 in 
pg_toast_5572299


What version of PostgreSQL are you running? I’ve seen this a number 
of times the past couple years; curious if the lurking bug is still 
observed in latest versions.


-Jeremy

Sent from my TI-83





Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Jeremy Finzel
Good morning!

We are a little bit puzzled because running the following command on a 9.6
cluster is apparently requiring a table rewrite, or at least a very long
operation of some kind, even though the docs say that as of 9.2:


   - Increasing the length limit for a varchar or varbit column, or removing
     the limit altogether, no longer requires a table rewrite. Similarly,
     increasing the allowable precision of a numeric column, or changing a
     column from constrained numeric to unconstrained numeric, no longer
     requires a table rewrite. Table rewrites are also avoided in similar
     cases involving the interval, timestamp, and timestamptz types.

I have a table foo with 100 million rows, and a column:

   - id character varying(20)

The following command is the one that we expect to execute very quickly (we
are not seeing any locking), but it is instead taking a very long time:

   - ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);

I attempted the same operation casting to text instead; same problem.

Why do the docs seem wrong in our case?  I have a guess: if the table was
created prior to version 9.2, perhaps its columns are not treated as
binary coercible to the new type after 9.2?  In any case, I would be very
grateful for an explanation!


Thank you!
Jeremy


Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Tom Lane
Jeremy Finzel  writes:
> I have a table foo with 100 million rows, and a column:
>- id character varying(20)
> The following command is the one that we expect to execute very quickly (we
> are not seeing any locking), but it is instead taking a very long time:
>- ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);

Hm, the code is supposed to avoid a table rewrite, but I wonder if
there's something else that's not being avoided, such as an index
rebuild or foreign-key verification.  Could we see the whole table
definition, eg from psql \d+ ?

regards, tom lane




Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Jeremy Finzel
On Mon, Feb 17, 2020 at 8:21 AM Tom Lane  wrote:

> Jeremy Finzel  writes:
> > I have a table foo with 100 million rows, and a column:
> >- id character varying(20)
> > The following command is the one that we expect to execute very quickly
> (we
> > are not seeing any locking), but it is instead taking a very long time:
> >- ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);
>
> Hm, the code is supposed to avoid a table rewrite, but I wonder if
> there's something else that's not being avoided, such as an index
> rebuild or foreign-key verification.  Could we see the whole table
> definition, eg from psql \d+ ?
>
> regards, tom lane
>

Based on your feedback, I quickly identified that indeed, the following
index is causing the re-type to be slow:

"id_idx" btree ("substring"(id::text, 4, 7))

I'm still not sure why a rebuild of this index would be required,
technically speaking.  But perhaps in any case the docs should have
something to the effect that expression indexes may require rebuild under
specific circumstances?
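For what it's worth, one way to confirm whether such an ALTER rewrote the heap or only rebuilt the index is to compare relfilenodes before and after (a sketch, using the table and index names from this thread):

```sql
-- pg_class.relfilenode changes only when the underlying storage file
-- is rewritten.
SELECT relname, relfilenode FROM pg_class WHERE relname IN ('foo', 'id_idx');
ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);
SELECT relname, relfilenode FROM pg_class WHERE relname IN ('foo', 'id_idx');
-- If only the index rebuild is at fault, foo's relfilenode stays the
-- same while id_idx's changes.
```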

Thanks!
Jeremy


Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Ron

On 2/17/20 9:01 AM, Jeremy Finzel wrote:
On Mon, Feb 17, 2020 at 8:21 AM Tom Lane wrote:


Jeremy Finzel  writes:
> I have a table foo with 100 million rows, and a column:
>    - id character varying(20)
> The following command is the one that we expect to execute very
quickly (we
> are not seeing any locking), but it is instead taking a very long time:
>    - ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);

Hm, the code is supposed to avoid a table rewrite, but I wonder if
there's something else that's not being avoided, such as an index
rebuild or foreign-key verification.  Could we see the whole table
definition, eg from psql \d+ ?

                        regards, tom lane


Based on your feedback, I quickly identified that indeed, the following 
index is causing the re-type to be slow:


"id_idx" btree ("substring"(id::text, 4, 7))

I'm still not sure why a rebuild of this index would be required, 
technically speaking.  But perhaps in any case the docs should have 
something to the effect that expression indexes may require rebuild under 
specific circumstances?


How much faster would "it" be if you dropped the index, ran ALTER and 
rebuilt the index?  Or is it too late?


--
Angular momentum makes the world go 'round.


Re: Cannot connect to postgresql-11 from another machine after boot

2020-02-17 Thread Jason Swails
On Sun, Feb 16, 2020 at 8:51 AM Peter J. Holzer  wrote:

> On 2020-02-13 21:03:48 -0800, Adrian Klaver wrote:
> > On 2/13/20 9:02 PM, Adrian Klaver wrote:
> > > On 2/13/20 7:54 PM, Jason Swails wrote:
> > > > The problem is that after my machine boots, I'm unable to connect to
> > > > the server from anywhere except localhost.  Running a simple
> > > > "systemctl restart postgresql" fixes the problem and allows me to
> > > > connect from anywhere on my LAN.  Here is an example of this
> > > > behavior:
> [...]
> > > >
> > > > So the first connection attempt fails.  But when I restart the
> > > > service and try again (doing nothing else in between), the
> > > > connection attempt succeeds.  My workaround has been to simply
> > > > restart the service every time my machine reboots, but I'd really
> > > > like to have a more reliable startup.
> > > >
> > > > Any ideas how to start hunting down the root cause?  I think this
> > > > started happening after I moved the data directory to another drive.
> > >
> > > I would start by looking in the system log to see what it records when
> > > the service tries to start on reboot.
> >
> > Hit send too soon. At a guess the Postgres service is starting before the
> > drive is mounted.
>
> I don't think this has anything to do with the drive. If the drive
> wasn't mounted he couldn't connect from localhost either.
>
> What is probably happening is that postgresql is configured to listen on
> localhost and the IP address of the ethernet interface and is starting
> before the ethernet interface is ready. So it is listening only on
> localhost (there should be an error message regarding the other address
> in the log). When he restarts postgresql some time later, the interface
> is ready.
>
> It should be possible to solve this by adding the right dependencies to
> systemd.
>

I actually think the problem was both of these.  The postgresql.conf file
was on the non-root drive that probably wasn't mounted before postgresql
started up -- I think the "default" listen_addresses when no conf file is
available is just "localhost".  To fix this, I added "After=home.mount" to
the postgresql systemd service.  Once I did that, I started seeing the
error message regarding the other address in the log, so I suspected
exactly what you mentioned here.

I then added "network.target", "networking.service", and
"network-online.target" to the After line of the postgresql.service systemd
file, but it still didn't fix the problem.  I ultimately had to change
listen_addresses from "localhost,192.168.1.3" to "*".  It's certainly not
my favorite approach as the former is stricter and therefore more secure.
But I don't have port forwarding set up for the postgres port, so my router
should serve as a suitable firewall for my small-scale home database setup.
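A systemd drop-in of roughly this shape expresses the dependencies discussed above (the paths and unit names are illustrative, not taken from the thread; note also that systemd only actually delays network-online.target when a *-wait-online.service is enabled, which may be why adding the targets to After= alone did not help):

```ini
# /etc/systemd/system/postgresql.service.d/override.conf (illustrative path)
[Unit]
# wait for the mount that holds postgresql.conf ...
After=home.mount network-online.target
# ... and actively pull in the network-online target, since After= alone
# does not start it
Wants=network-online.target
```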

Thanks,
Jason

-- 
Jason M. Swails


DBI && INSERT

2020-02-17 Thread Matthias Apitz


Hello,

I spent some hours today nailing down an insert problem in our
database with DBI, like:

   my $rc = $my_dbh->do($my_sqlstatement);

which returns 1 in $rc (which the following flow in our script took as an
error). If one fired the same string from $my_sqlstatement with psql,
the result was always

   INSERT 0 1

At the end of the day I watched with strace the communication on the
network between the DBI client and the PG server:

...
13576 sendto(3, 
"6f7374656e20676573636872696562656e3a303a0a726561645f656469666163745f66696c65733a3a6d61696e20456e646520616d2031372e30322e3230323020756d2031363a30303a3234205568720a6c6f67206265656e6465742c206a65747a74207769726420696e7365727420646573206c6f677320696e206163715f7661726461746120766572737563687420286e757220696d207472616365207369636874626172290a',
 \n  1)\0", 359, MSG_NOSIGNAL, NULL, 0) = 359
13576 poll([{fd=3, events=POLLIN|POLLERR}], 1, 4294967295) = 1 ([{fd=3, 
revents=POLLIN}])
13576 recvfrom(3, "C\0\0\0\17INSERT 0 1\0Z\0\0\0\5I", 32768, 0, NULL, NULL) = 22
13576 write(4, "<- do= ( 1 ) [1 items] at read_edifact_files.pl line 
8831\n", 62) = 62
13576 write(4, "-> $DBI::errstr (&) FETCH from lasth=HASH\n", 46) = 46
13576 write(4, "<- $DBI::errstr= undef\n", 27) = 27
...

i.e. the response on the network was (always) also 'INSERT 0 1', and DBI
seemed to misinterpret this as an error condition, with an undef
$DBI::errstr string.

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: DBI && INSERT

2020-02-17 Thread Gianni Ceccarelli
On Mon, 17 Feb 2020 16:49:27 +0100
Matthias Apitz  wrote:
> I spend today some hours to nail down and insert problem into our
> database with DBI like:
> 
>my $rc = $my_dbh->do($my_sqlstatement);
> 
> which returns 1 in $rc (which the following flow in our script took
> as an error).

The DBI docs say::

  $rows = $dbh->do($statement) or die $dbh->errstr;

so when ``do`` returns 1, it means it worked, not that it failed.

In particular:

Returns the number of rows affected or "undef" on error. A return
value of "-1" means the number of rows is not known, not
applicable, or not available.

-- 
Dakkar - 
GPG public key fingerprint = A071 E618 DD2C 5901 9574
 6FE2 40EA 9883 7519 3F88
key id = 0x75193F88





Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Jeremy Schneider
FWIW, Bertrand blogged an even faster way to do this about a month ago - using 
pageinspect and processing blocks instead of rows

https://bdrouvot.wordpress.com/2020/01/18/retrieve-postgresql-variable-length-storage-information-thanks-to-pageinspect/

-J

Sent from my TI-83

> On Feb 17, 2020, at 03:32, Nick Renders  wrote:
> 
> The problem is that I don't know which column is corrupt. But I found a 
> solution: by simply copying the record into another variable, the values are 
> parsed and the TOAST errors are thrown.
> 
> In case anyone's interested, here's my code, based on an example from 
> http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html
> 
> 
> DO $f$
> DECLARE
>rContent1 record;
>rContent2 record;
>iCounter integer DEFAULT 1;
>iValue integer;
>pTableName varchar := 'f_gsxws_transaction';
>pFieldName varchar := 'gwta_number';
> BEGIN
>FOR iValue IN EXECUTE 'SELECT ' || pFieldName || ' FROM ' || 
> pTableName::regclass || ' ORDER BY ' || pFieldName LOOP
>BEGIN
>EXECUTE 'SELECT * FROM ' || pTableName::regclass || ' WHERE ' || 
> pFieldName || ' = $1'
>INTO rContent1
>USING iValue;
>rContent2 := rContent1;
>EXCEPTION WHEN OTHERS THEN
>RAISE NOTICE 'data for %.% % is corrupt', pTableName, pFieldName, 
> iValue;
>END;
>IF iCounter % 10 = 0 THEN
>RAISE NOTICE '% % records checked', iCounter, pTableName;
>END IF;
>iCounter := iCounter+1;
>END LOOP;
> END;
> $f$;
> 
> 
> Cheers,
> 
> Nick
> 
> 
>> On 14 Feb 2020, at 16:14, Tom Lane wrote:
>> 
>> "Nick Renders"  writes:
>>> I thought I would write a little PL script that would loop through all
>>> the data and report any inconsistencies. However, I can't get it to work
>>> properly.
>>> ...
>>> 1) The function has no problem executing the SELECT statement. It is
>>> only when "rcontents" is returned, that the function fails. This is a
>>> problem, because the ultimate goal is to loop through all records and
>>> only return/alert something in case of an error.
>>> 2) The function never enters the EXCEPTION clause. Instead, when it hits
>>> the RETURN command, it breaks and shows the same message as in pgAdmin:
>>> missing chunk number 0 for toast value 8289525 in pg_toast_5572299.
>> 
>> I think what's happening there is that the function doesn't try to
>> dereference the value's TOAST pointer during SELECT INTO.  It just stores
>> that pointer into a variable, and only sometime later when the actual
>> content of the value is demanded, do you see the error raised.
>> 
>> The solution to that is to do something that uses the contents of the
>> busted column right away while still inside the EXCEPTION block, perhaps
>> along the lines of "select md5(mycolumn) into local_variable from..."
>> 
>> A close reading of
>> 
>> https://www.postgresql.org/docs/current/storage-toast.html
>> 
>> would probably help you understand what's happening here.
>> 
>>regards, tom lane
> 
> 


parsing xml with PG 9.2.4

2020-02-17 Thread Mario Vlahovic
Hello Developers,
I hope you can help me. I'm having trouble parsing some data from my psql
table, which I need for further manipulation.
So my query:

select program_information.description FROM program_information WHERE id =
8768787;

Gives me:



  
[The XML output is mangled by the list archive; the recoverable content
is a ProgramInformation document with a Title ("Zla smrt"), a
Slovenian-language Synopsis, a "PG" parental rating, a CreditsList naming
Bruce Campbell, Ellen Sandweiss, Betsy Baker, and Sam Raimi, and the
production year 1981.]

What I need is parsed data from , GivenName + FamilyName for
all entries. I know it should be doable with xpath but I just can't get it
to work :/.

Please help.

Thanks,


Re: Cannot connect to postgresql-11 from another machine after boot

2020-02-17 Thread Adrian Klaver

On 2/17/20 7:17 AM, Jason Swails wrote:



On Sun, Feb 16, 2020 at 8:51 AM Peter J. Holzer wrote:


On 2020-02-13 21:03:48 -0800, Adrian Klaver wrote:
 > On 2/13/20 9:02 PM, Adrian Klaver wrote:
 > > On 2/13/20 7:54 PM, Jason Swails wrote:
 > > > The problem is that after my machine boots, I'm unable to
connect to
 > > > the server from anywhere except localhost.  Running a simple
 > > > "systemctl restart postgresql" fixes the problem and allows me to
 > > > connect from anywhere on my LAN.  Here is an example of this
 > > > behavior:
[...]
 > > >
 > > > So the first connection attempt fails.  But when I restart the
 > > > service and try again (doing nothing else in between), the
 > > > connection attempt succeeds.  My workaround has been to simply
 > > > restart the service every time my machine reboots, but I'd really
 > > > like to have a more reliable startup.
 > > >
 > > > Any ideas how to start hunting down the root cause?  I think this
 > > > started happening after I moved the data directory to another
drive.
 > >
 > > I would start by looking in the system log to see what it
records when
 > > the service tries to start on reboot.
 >
 > Hit send too soon. At a guess the Postgres service is starting
before the
 > drive is mounted.

I don't think this has anything to do with the drive. If the drive
wasn't mounted he couldn't connect from localhost either.

What is probably happening is that postgresql is configured to listen on
localhost and the IP address of the ethernet interface and is starting
before the ethernet interface is ready. So it is listening only on
localhost (there should be an error message regarding the other address
in the log). When he restarts postgresql some time later, the interface
is ready.

It should be possible to solve this by adding the right dependencies
to systemd.


I actually think the problem was both of these.  The postgresql.conf 
file was on the non-root drive that probably wasn't mounted before 
postgresql started up -- I think the "default" listen_addresses when no 
conf file is available is just "localhost".  To fix this, I added 


Without a conf file the server will not start (the "ok" notwithstanding):

sudo /etc/init.d/postgresql12 start
Starting PostgreSQL: ok

postmaster: could not access the server configuration file 
"/usr/local/pgsql12/data/postgresql.conf": No such file or directory


ps ax | grep postmaster

Returns nothing

Whereas with conf file:

sudo /etc/init.d/postgresql12 start
Starting PostgreSQL: ok

--2020-02-17 08:35:05.026 PST-0LOG:  starting PostgreSQL 12.1 on 
x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 7.4.1 20190905 
[gcc-7-branch revision 275407], 64-bit
--2020-02-17 08:35:05.026 PST-0LOG:  listening on IPv4 address 
"0.0.0.0", port 5432
--2020-02-17 08:35:05.026 PST-0LOG:  listening on IPv6 address "::", 
port 5432
--2020-02-17 08:35:05.075 PST-0LOG:  listening on Unix socket 
"/tmp/.s.PGSQL.5432"
--2020-02-17 08:35:05.131 PST-0LOG:  redirecting log output to logging 
collector process
--2020-02-17 08:35:05.131 PST-0HINT:  Future log output will appear in 
directory "pg_log".



ps ax | grep postmaster

/usr/local/pgsql12/bin/postmaster -D /usr/local/pgsql12/data


"After=home.mount" to the postgresql systemd service.  Once I did that, 
I started seeing the error message regarding the other address in the 
log, so I suspected exactly what you mentioned here.



What is the actual error message?



I then added "network.target", "networking.service", and 
"network-online.target" to the After line of the postgresql.service 
systemd file, but it still didn't fix the problem.  I ultimately had to 
change listen_addresses from "localhost,192.168.1.3" to "*".  It's 
certainly not my favorite approach as the former is stricter and 
therefore more secure. But I don't have port forwarding set up for the 
postgres port, so my router should serve as a suitable firewall for my 
small-scale home database setup.


You can also use pg_hba.conf to restrict access:

https://www.postgresql.org/docs/11/auth-pg-hba-conf.html
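A sketch of such a pg_hba.conf entry, restricting access to the LAN subnet mentioned earlier (the subnet and auth method here are examples, not taken from the thread):

```
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   192.168.1.0/24   md5
```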



Thanks,
Jason

--
Jason M. Swails



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Adrian Klaver

On 2/17/20 7:01 AM, Jeremy Finzel wrote:
On Mon, Feb 17, 2020 at 8:21 AM Tom Lane wrote:


Jeremy Finzel  writes:
 > I have a table foo with 100 million rows, and a column:
 >    - id character varying(20)
 > The following command is the one that we expect to execute very
quickly (we
 > are not seeing any locking), but it is instead taking a very long
time:
 >    - ALTER TABLE foo ALTER COLUMN id TYPE varchar(100);

Hm, the code is supposed to avoid a table rewrite, but I wonder if
there's something else that's not being avoided, such as an index
rebuild or foreign-key verification.  Could we see the whole table
definition, eg from psql \d+ ?

                         regards, tom lane


Based on your feedback, I quickly identified that indeed, the following 
index is causing the re-type to be slow:


"id_idx" btree ("substring"(id::text, 4, 7))

I'm still not sure why a rebuild of this index would be required, 
technically speaking.  But perhaps in any case the docs should have 
something to the effect that expression indexes may require rebuild 
under specific circumstances?


How about?:

https://www.postgresql.org/docs/9.6/sql-altertable.html

"Adding a column with a DEFAULT clause or changing the type of an 
existing column will require the entire table and its indexes to be 
rewritten. As an exception when changing the type of an existing column, 
if the USING clause does not change the column contents and the old type 
is either binary coercible to the new type or an unconstrained domain 
over the new type, a table rewrite is not needed; but any indexes on the 
affected columns must still be rebuilt. Adding or removing a system oid 
column also requires rewriting the entire table. Table and/or index 
rebuilds may take a significant amount of time for a large table; and 
will temporarily require as much as double the disk space."




Thanks!
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgres error

2020-02-17 Thread Adrian Klaver

On 2/17/20 1:50 AM, Loai Abdallatif wrote:

Hello Guys,

I have a setup of two PostgreSQL 11 servers (replication using repmgr),
and it works fine, as shown below:
-bash-4.2$ sudo su - postgres -c " /usr/pgsql-11/bin/repmgr -f
/var/lib/pgsql/repmgr/repmgr.conf cluster show"
 ID | Name  | Role    | Status    | Upstream | Location | Priority | Connection string
----+-------+---------+-----------+----------+----------+----------+--------------------------------------
 1  | pgdb1 | standby |   running | pgdb2    | default  | 100      | host=pgdb1 user=repmgr dbname=repmgr
 2  | pgdb2 | primary | * running |          | default  | 100      | host=pgdb2 user=repmgr dbname=repmgr




But when I installed pgpool 4.1 on a third server to load-balance the
traffic, from the command line everything looks OK, as below:

-bash-4.2$ psql -U pgpool  --dbname=postgres --host pgpool -c "show
pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | pgdb1    | 5432 | up     | 0.50      | standby | 0          | false             | 0                 |                   |                        | 2020-02-17 11:03:31
 1       | pgdb2    | 5432 | up     | 0.50      | primary | 3          | true              | 0                 |                   |                        | 2020-02-17 11:03:31





The problem appears when I installed pgpoolAdmin and tried to stop one
node (master or slave): I got the error below on the Postgres server
(not on pgpool):

2020-02-17 10:49:39.118 IST [2051] ERROR:  function
pgpool_pgctl(unknown, unknown) does not exist at character 8
2020-02-17 10:49:39.118 IST [2051] HINT:  No function matches the given
name and argument types. You might need to add explicit type casts.
2020-02-17 10:49:39.118 IST [2051] STATEMENT:  SELECT
pgpool_pgctl('stop', 's')


What can I do, please?


Probably best to ask here:

https://www.pgpool.net/mailman/listinfo/pgpool-general









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: parsing xml with PG 9.2.4

2020-02-17 Thread Jimmy Angelakos
Hi Mario,

First off, as you will be aware, 9.2 is quite an old Postgres version
and is currently unsupported - for security reasons alone, you should
upgrade ASAP.

Regardless, this query should work for you:

SELECT xpath('/ProgramInformation/BasicDescription/CreditsList',
program_information.description, NAMESPACE_ARRAY) FROM
program_information WHERE id = 8768787;

where NAMESPACE_ARRAY needs to contain your definition for prefix
mpeg7, otherwise you'll get "Namespace prefix is not defined" errors
when parsing.
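To make that concrete, a sketch of the call (the namespace URI below is a guess; use whatever xmlns:mpeg7 declaration your documents actually carry, and prefix every path step that lives in a namespace):

```sql
SELECT xpath(
         '/ProgramInformation/BasicDescription/CreditsList//mpeg7:GivenName/text()',
         description,
         -- {prefix, uri} pairs; this URI is an assumption, not from the thread
         ARRAY[ARRAY['mpeg7', 'urn:mpeg:mpeg7:schema:2001']]
       )
FROM program_information
WHERE id = 8768787;
```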

You will find more Xpath guidance here:
https://www.postgresql.org/docs/9.2/functions-xml.html#FUNCTIONS-XML-PROCESSING

Best regards,
Jimmy


On Mon, 17 Feb 2020 at 16:32, Mario Vlahovic  wrote:
>
> Hello Developers,
> I hope you can help me. I'm having troubles parsing some data from my psql 
> table, which I need for further manipulation.
> So my query:
>
> select program_information.description FROM program_information WHERE id = 
> 8768787;
>
> GIves me:
>
> 
> 
>   
> [XML output mangled by the list archive; same content as in the
> original message: a ProgramInformation document with Title "Zla smrt",
> a Slovenian Synopsis, a "PG" rating, a CreditsList naming Bruce
> Campbell, Ellen Sandweiss, Betsy Baker, and Sam Raimi, and the year
> 1981.]
>
> What I need is parsed data from , GivenName + FamilyName for all 
> entries. I know it should be doable with xpath but I just can't get it to 
> work :/.
>
> Please help.
>
> Thanks,




Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Jeremy Schneider
On 2/17/20 08:23, Jeremy Schneider wrote:
> FWIW, Bertrand blogged an even faster way to do this about a month ago -
> using pageinspect and processing blocks instead of rows
> 
> https://bdrouvot.wordpress.com/2020/01/18/retrieve-postgresql-variable-length-storage-information-thanks-to-pageinspect/
> 
> 
>> On Feb 17, 2020, at 03:32, Nick Renders  wrote:
>>
>> The problem is that I don't know which column is corrupt. But I found
>> a solution: by simply copying the record into another variable, the
>> values are parsed and the TOAST errors are thrown.
>>
>> In case anyone's interested, here's my code, based on an example from
>> http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html

Apologies - sent that slightly rushed email while I was taking the bus
to the office this morning and linked the wrong blog post :D

Here's the right one, showing a much faster way to identify which tuple
links to a bad toast row:

https://bdrouvot.wordpress.com/2020/01/04/get-toast-chunk_id-from-the-user-table-tuples-or-from-the-toast-index-thanks-to-pageinspect/
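The gist of the approach, for the archives: pageinspect can surface the raw attribute bytes (and hence the TOAST chunk_ids) block by block, without ever detoasting anything. A rough sketch, not the post's exact code, using the table name from earlier in the thread:

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Raw attributes of every tuple on block 0; TOAST pointers appear in
-- t_attrs and can be decoded to chunk_ids, block by block.
SELECT lp, t_ctid, t_attrs
FROM heap_page_item_attrs(get_raw_page('f_gsxws_transaction', 0),
                          'f_gsxws_transaction'::regclass);
```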

-Jeremy

-- 
http://about.me/jeremy_schneider




Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Tom Lane
Adrian Klaver  writes:
> On 2/17/20 7:01 AM, Jeremy Finzel wrote:
>> I'm still not sure why a rebuild of this index would be required, 
>> technically speaking.  But perhaps in any case the docs should have 
>> something to the effect that expression indexes may require rebuild 
>> under specific circumstances?

> [ it is already ]

Yeah.  In principle, we wouldn't need to rebuild the indexes in this
case, since there's no semantic difference between a value sourced
from a varchar(N) column and a varchar(some-other-N) column.  In general,
though, ALTER COLUMN TYPE doesn't know whether that's true; and there
are definitely binary-compatible cases where it *does* matter.
(An example is that coercing integer to OID is allowed without a
table rewrite, but an index rebuild is needed because the sort
order is different.)  So right now, any index mentioning the altered
column has to be rebuilt.
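A concrete illustration (hypothetical table):

```sql
-- Widening a varchar changes only the typmod, so no table rewrite is needed,
-- but as of today any index mentioning the column is still rebuilt.
CREATE TABLE t (v varchar(10));
CREATE INDEX t_v_idx ON t (v);
ALTER TABLE t ALTER COLUMN v TYPE varchar(100);  -- no rewrite; index rebuilt anyway
```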

Maybe someday we'll figure out how to do better.  I'm kind of wondering
whether it wouldn't be safe to assume that changes that only change the
typmod and not the type OID don't require index rebuilds.

regards, tom lane




Re: Cases where alter table set type varchar(longer length) still needs table rewrite

2020-02-17 Thread Jeremy Finzel
On Mon, Feb 17, 2020 at 10:46 AM Adrian Klaver 
wrote:

>
> How about?:
>
> https://www.postgresql.org/docs/9.6/sql-altertable.html
>
> "Adding a column with a DEFAULT clause or changing the type of an
> existing column will require the entire table and its indexes to be
> rewritten. As an exception when changing the type of an existing column,
> if the USING clause does not change the column contents and the old type
> is either binary coercible to the new type or an unconstrained domain
> over the new type, a table rewrite is not needed; but any indexes on the
> affected columns must still be rebuilt. Adding or removing a system oid
> column also requires rewriting the entire table. Table and/or index
> rebuilds may take a significant amount of time for a large table; and
> will temporarily require as much as double the disk space."
>
> >
> > Thanks!
> > Jeremy
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


You mean the part "any indexes on the affected columns must still be
rebuilt"?  Yes, I guess that is pretty clear.  Thanks,

Jeremy


Re: Postgres error

2020-02-17 Thread Loai Abdallatif
Thanks Adrian, I will

On Mon, Feb 17, 2020 at 6:53 PM Adrian Klaver 
wrote:

> On 2/17/20 1:50 AM, Loai Abdallatif wrote:
> > Hello Guys,
> >
> > I have case of two Postgres version 11 ( replication using repmgr) and
> > and its work fine as below:
> > -bash-4.2$ sudo su - postgres -c " /usr/pgsql-11/bin/repmgr -f
> > /var/lib/pgsql/repmgr/repmgr.conf cluster show"
> >   ID | Name  | Role| Status| Upstream | Location | Priority |
> > Connection string
> >
> +---+-+---+--+--+--+--
> >   1  | pgdb1 | standby |   running | pgdb2| default  | 100  |
> > host=pgdb1 user=repmgr dbname=repmgr
> >   2  | pgdb2 | primary | * running |  | default  | 100  |
> > host=pgdb2 user=repmgr dbname=repmgr
> >
> >
> >
> > but when I installed pgpool 4.1 on third  server to loadbalace the
> > traffic ., from command line I see everything ok like below:
> >
> > -bash-4.2$ psql -U pgpool  --dbname=postgres --host pgpool -c "show
> > pool_nodes"
> >   node_id | hostname | port | status | lb_weight |  role   | select_cnt
> > | load_balance_node | replication_delay | replication_state |
> > replication_sync_state | last_statu
> > s_change
> >
> -+--+--++---+-++---+---+---++---
> > --
> >   0   | pgdb1| 5432 | up | 0.50  | standby | 0
> >   | false | 0 |   |
> > | 2020-02-17
> >   11:03:31
> >   1   | pgdb2| 5432 | up | 0.50  | primary | 3
> >   | true  | 0 |   |
> > | 2020-02-17
> >   11:03:31
> >
> >
> >
> >
> >
> > the problem is appears on when i installed pgpooladmin and tried to stop
> > one node ( master os slave ) I got the below error on pgserver ( not
> > pgpool)
> >
> > 2020-02-17 10:49:39.118 IST [2051] ERROR:  function
> > pgpool_pgctl(unknown, unknown) does not exist at character 8
> > 2020-02-17 10:49:39.118 IST [2051] HINT:  No function matches the given
> > name and argument types. You might need to add explicit type casts.
> > 2020-02-17 10:49:39.118 IST [2051] STATEMENT:  SELECT
> > pgpool_pgctl('stop', 's')
> >
> > what I can do please
>
> Probably best to ask here:
>
> https://www.pgpool.net/mailman/listinfo/pgpool-general
>
> >
> >
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


pgstattuple extension - Can pgstattuple_approx support toast tables?

2020-02-17 Thread Jean-Marc Lessard
Hello,

The pgstattuple extension is pretty useful for monitoring table bloat.

I have a few large tables with bytea columns, representing terabytes of data, that I 
need to monitor.
I ran pgstattuple() on the associated toast tables, but it takes hours to run.

pgstattuple_approx() would be a nice alternative, but it does not accept toast 
tables.
It can be run on pg_largeobject, which is very similar in structure to a toast 
table.
Why can't pgstattuple_approx() be run on toast tables?
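
For illustration, what I am seeing (hypothetical toast table name):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Exact scan: accepts toast tables, but reads every block (hours on TB of data).
SELECT * FROM pgstattuple('pg_toast.pg_toast_16384');

-- Approximate scan: fast, but only accepts ordinary tables and materialized
-- views, so a toast table (relkind 't') is rejected with an error.
SELECT * FROM pgstattuple_approx('pg_toast.pg_toast_16384');
```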

Regards,

Jean-Marc Lessard
Administrateur de base de données / Database Administrator


Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com



policies and extensions

2020-02-17 Thread Marc Munro
I tried to define a policy within an extension but the policy does not
seem to belong to the extension.  Is this the way it is supposed to be?

This is postgres 9.5.21

Here is the relevant code from the extension:

create table rls2 (
  username text not null,
  details  text not null
);

create policy only_owner on rls2
  for all
  to session_user
  using (username = user);

The table, as expected, depends on the extension but the policy does
not (determined by querying pg_depend).

Am I missing something special about policies or is this an oversight?

__
Marc




Re: policies and extensions

2020-02-17 Thread Adrian Klaver

On 2/17/20 2:46 PM, Marc Munro wrote:

I tried to define a policy within an extension but the policy does not
seem to belong to the extension.  Is this the way it is supposed to be?

This is postgres 9.5.21

Here is the relevant code from the extension:

 create table rls2 (
   username text not null,
   details  text not null
 );

 create policy only_owner on rls2
   for all
   to session_user
   using (username = user);

The table, as expected, depends on the extension but the policy does
not (determined by querying pg_depend).

Am I missing something special about policies or is this an oversight?


?:
https://www.postgresql.org/docs/9.5/sql-createpolicy.html

"The CREATE POLICY command defines a new row-level security policy for a 
table. Note that row-level security must be enabled on the table (using 
ALTER TABLE ... ENABLE ROW LEVEL SECURITY) in order for created policies 
to be applied."




__
Marc





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: policies and extensions

2020-02-17 Thread Tom Lane
Marc Munro  writes:
> I tried to define a policy within an extension but the policy does not
> seem to belong to the extension.  Is this the way it is supposed to be?

Yeah, I would expect that.

https://www.postgresql.org/docs/current/extend-extensions.html

says:

The kinds of SQL objects that can be members of an extension are shown
in the description of ALTER EXTENSION.  ...
Also notice that while a table can be a member of
an extension, its subsidiary objects such as indexes are not directly
considered members of the extension.  ...

An RLS policy is a table "subsidiary object" so it only depends indirectly
on the extension that owns the table.
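
You can see this directly in pg_depend (hypothetical extension name "myext"):

```sql
-- Direct extension members have a deptype 'e' dependency on the extension.
-- The table appears here; its policies, indexes, etc. do not.
SELECT classid::regclass AS catalog, objid, deptype
FROM pg_depend
WHERE refclassid = 'pg_extension'::regclass
  AND refobjid = (SELECT oid FROM pg_extension WHERE extname = 'myext')
  AND deptype = 'e';
```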

regards, tom lane