Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Laurenz Albe
Ron wrote:
> > > However, one or more of our big (and schema-identical) prod databases 
> > > (which are each on a different server)
> > > it is finicky and tends to just "sit" at a random one of the CREATE OR 
> > > REPLACE FUNCTION statements.
> > > 
> > > The "list all blocking queries" I run doesn't show that anything is 
> > > blocking it (though it blocks
> > > everything else), and neither top(1) nor iotop(1) show any activity.
> > > 
> > > If it matters, this script is fed to the databases via the JDBC driver, 
> > > and it works fine when I run it via psql.
> > > (I'd gladly run the scripts manually, but these are child databases, and 
> > > a parent db must be updated
> > > at the same time by a canned application.)
> > > 
> > > Where in Postgres can I look to see why it's just sitting there?
> >
> > select * from pg_stat_activity;
> > might shed some light?
>  
> That (plus pg_locks)  is the heart of the "list all blocking queries" 
> statement I copied
> from https://wiki.postgresql.org/wiki/Lock_Monitoring.

If there is nothing with "granted" set to FALSE in "pg_locks", you are not 
blocked by
a database lock.

What is the "state" of the hanging database session in "pg_stat_activity"?

If it is "idle" or "idle in transaction", then the lock must be in your Java 
process.
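
For example, something along these lines should show it (the wait_event
columns are there in 9.6 and later):

  SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity;

  SELECT * FROM pg_locks WHERE NOT granted;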

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-05, Ron wrote:

> That (plus pg_locks)  is the heart of the "list all blocking queries"
> statement I copied from https://wiki.postgresql.org/wiki/Lock_Monitoring.

On that page there's a note about 9.6.  Did you see the referenced
commit
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=52f5d578d6c29bf254e93c69043b817d4047ca67
?  Maybe see about using the "pg_blocking_pids(int) returns int[]"
function instead.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Trouble Upgrading Postgres

2018-11-06 Thread Daniel Verite
Charles Martin wrote:

>  but the second one returned this:
> 
> 0 "623140"
> 1 "53"
> 2 "12"
> 3 "10"
> 4 "1"
> 5 "1"
> 7 "1"
> [null] "162"
> 
> Not quite sure what that means, but if there is just a small number of
> overly-large records, I might be able to delete them. If I can find them.

The query was:

  SELECT octet_length(docfilecontents)/(1024*1024*100),
   count(*)
   FROM docfile
   GROUP BY octet_length(docfilecontents)/(1024*1024*100);

The results above show that there is one document weighing over 700 MB
(the first column being the multiple of 100MB), one between 500 and
600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
so on.

The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size .
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic.

A quick and dirty way of getting rid of these contents would be to
nullify them. For instance, nullify anything over 400MB:

UPDATE docfile SET docfilecontents=NULL
  WHERE octet_length(docfilecontents) > 1024*1024*400;

Or a cleaner solution would be to delete them with the application if
that's possible. You may turn the above query into a SELECT that
retrieves the fields of interest (avoid SELECT * because of the huge
column).
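
For example, something like this, using the system column ctid so as not to
guess at your key columns:

  SELECT ctid, octet_length(docfilecontents) AS octets
    FROM docfile
   WHERE octet_length(docfilecontents) > 1024*1024*400
   ORDER BY octets DESC;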


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Ron

On 11/06/2018 05:05 AM, Laurenz Albe wrote:

Ron wrote:

However, one or more of our big (and schema-identical) prod databases (which 
are each on a different server)
it is finicky and tends to just "sit" at a random one of the CREATE OR REPLACE 
FUNCTION statements.

The "list all blocking queries" I run doesn't show that anything is blocking it 
(though it blocks
everything else), and neither top(1) nor iotop(1) show any activity.

If it matters, this script is fed to the databases via the JDBC driver, and it 
works fine when I run it via psql.
(I'd gladly run the scripts manually, but these are child databases, and a 
parent db must be updated
at the same time by a canned application.)

Where in Postgres can I look to see why it's just sitting there?

select * from pg_stat_activity;
might shed some light?
  
That (plus pg_locks)  is the heart of the "list all blocking queries" statement I copied

from https://wiki.postgresql.org/wiki/Lock_Monitoring.

If there is nothing with "granted" set to FALSE in "pg_locks", you are not 
blocked by
a database lock.

What is the "state" of the hanging database session in "pg_stat_activity"?

If it is "idle" or "idle in transaction", then the lock must be in your Java 
process.


Good question.  I'll look at that the next time we try it.

--
Angular momentum makes the world go 'round.



Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Ron

On 11/06/2018 05:34 AM, Alvaro Herrera wrote:

On 2018-Nov-05, Ron wrote:


That (plus pg_locks)  is the heart of the "list all blocking queries"
statement I copied from https://wiki.postgresql.org/wiki/Lock_Monitoring.

On that page there's a note about 9.6.  Did you see the referenced
commit
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=52f5d578d6c29bf254e93c69043b817d4047ca67
?  Maybe see about using the "pg_blocking_pids(int) returns int[]"
function instead.


I did see it, but the https://wiki.postgresql.org/wiki/Lock_Monitoring query 
seems to work (seeing that it regularly shows locks).


Is this query from https://stackoverflow.com/a/43363536/1543618 adequate to 
the task?


select pid, usename, pg_blocking_pids(pid) as blocked_by, query as blocked_query
from pg_stat_activity where cardinality(pg_blocking_pids(pid)) > 0;




--
Angular momentum makes the world go 'round.


Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Ron wrote:

> On 11/06/2018 05:34 AM, Alvaro Herrera wrote:

> I did see it, but the https://wiki.postgresql.org/wiki/Lock_Monitoring query
> seems to work (seeing that it regularly shows locks).
> 
> Is this query from https://stackoverflow.com/a/43363536/1543618 adequate to
> the task?
> 
> select pid, usename, pg_blocking_pids(pid) as blocked_by, query as blocked_query
> from pg_stat_activity where cardinality(pg_blocking_pids(pid)) > 0;

Seems a bit short on details ... I would add the queries being run by
those other PIDs, just to understand what might be going on.  Now, if
that query returns empty when the CREATE is blocked, then this may be
a red herring.
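
Untested, but something along these lines should show both sides:

  SELECT w.pid   AS waiting_pid,
         w.query AS waiting_query,
         b.pid   AS blocking_pid,
         b.query AS blocking_query
    FROM pg_stat_activity w
    JOIN LATERAL unnest(pg_blocking_pids(w.pid)) AS blocker(pid) ON true
    JOIN pg_stat_activity b ON b.pid = blocker.pid;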

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



PgAgent on Windows

2018-11-06 Thread Marcio Meneguzzi
Hello,

I'm trying to use pgAgent on Windows 10 and Windows Server 2012 R2.
My version of PostgreSQL is 9.5.1.14.
I installed and configured pgAgent with success, but when I try to run a Job,
the status is Failed with the message below:

"Couldn't create the primary connection (attempt 1): fe_sendauth: no
password supplied"

My %APPDATA%\postgresql\pg_pass.conf is:
localhost:5490:*:postgres:my_pass

Please, any help?

Thanks,
Márcio




Re: PgAgent on Windows

2018-11-06 Thread Ron

On 11/06/2018 07:36 AM, Marcio Meneguzzi wrote:

Hello,

I'm trying to use pgAgent on Windows 10 and Windows Server 2012 R2.
My version of PostgreSQL is 9.5.1.14.
I installed and configured pgAgent with success, but when I try to run a Job,
the status is Failed with the message below:

"Couldn't create the primary connection (attempt 1): fe_sendauth: no
password supplied"

My %APPDATA%\postgresql\pg_pass.conf is:
localhost:5490:*:postgres:my_pass



What do you mean by "run a job"?  Can you interactively log in?


--
Angular momentum makes the world go 'round.


Re: Running pg_upgrade Version 11

2018-11-06 Thread Adrian Klaver

On 11/5/18 9:27 PM, rob stone wrote:

Hello,

Trying to run pg_upgrade from version 10 to version 11 on the test
server and pulling the following error:-

could not open version file: /home/postgres/testing/data_v10/PG_VERSION
Failure, exiting

O/S is:- Debian 4.18.10-2 (2018-11-02) x86_64 GNU/Linux

Running pg_upgrade as user postgres.

These are the permissions on the version 10 file:-

-rw------- 1 postgres postgres 3 Mar 23  2018 PG_VERSION

These are the permissions on the version 11 file:-

-rw------- 1 postgres postgres 3 Nov  6 14:50 PG_VERSION

This is the command being run:-

/usr/lib/postgresql/11/bin/pg_upgrade -b /usr/lib/postgresql/10/bin -B
/usr/lib/postgresql/11/bin -d /home/postgres/testing/data_v10 -D
/home/postgres/testing/data_v11


Logged in as user postgres and postgres owns the files created by
initdb, so is this a permissions problem or am I having a brain fade?


Is:

/home/postgres/testing/data_v10/

the correct path to the data directory?

What happens if you add?:

-U postgres

Maybe there is a PGUSER env variable set that is not postgres.




TIA,
Rob








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Trouble Upgrading Postgres

2018-11-06 Thread Charles Martin
Thanks, Daniel.

Using your idea, I found the records over 400MB, and deleted them in the
application. The largest two were in inactive matters, and the third is
still available elsewhere if needed. I'll try pg_dump again after work
hours and see if it works now. Hopefully it will, now that I've deleted the
largest records.

Adrian, I'll try changing shared_buffers the next time I can restart
postgres, at least if deleting the largest records and adding VM hasn't
worked.


On Tue, Nov 6, 2018 at 6:47 AM Daniel Verite 
wrote:

> Charles Martin wrote:
>
> >  but the second one returned this:
> >
> > 0 "623140"
> > 1 "53"
> > 2 "12"
> > 3 "10"
> > 4 "1"
> > 5 "1"
> > 7 "1"
> > [null] "162"
> >
> > Not quite sure what that means, but if there is just a small number of
> > overly-large records, I might be able to delete them. If I can find them.
>
> The query was:
>
>   SELECT octet_length(docfilecontents)/(1024*1024*100),
>count(*)
>FROM docfile
>GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> The results above show that there is one document weighing over 700 MB
> (the first column being the multiple of 100MB), one between 500 and
> 600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
> so on.
>
> The hex expansion performed by COPY must allocate twice that size,
> plus the rest of the row, and if that resulting size is above 1GB, it
> will error out with the message you mentioned upthread:
> ERROR: invalid memory alloc request size .
> So there's no way it can deal with the contents over 500MB, and the
> ones just under that limit may also be problematic.
>
> A quick and dirty way of getting rid of these contents would be to
> nullify them. For instance, nullify anything over 400MB:
>
> UPDATE docfile SET docfilecontents=NULL
>   WHERE octet_length(docfilecontents) > 1024*1024*400;
>
> Or a cleaner solution would be to delete them with the application if
> that's possible. You may turn the above query into a SELECT that
> retrieve the fields of interest (avoid SELECT * because of the huge
> column).
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: Trouble Upgrading Postgres

2018-11-06 Thread Adrian Klaver

On 11/6/18 3:47 AM, Daniel Verite wrote:

Charles Martin wrote:


  but the second one returned this:

0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"

Not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them. If I can find them.


The query was:

   SELECT octet_length(docfilecontents)/(1024*1024*100),
   count(*)
FROM docfile
GROUP BY octet_length(docfilecontents)/(1024*1024*100);

The results above show that there is one document weighing over 700 MB
(the first column being the multiple of 100MB), one between 500 and
600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
so on.

The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size .
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic.


To me that looks like a bug, putting data into a record you cannot get out.



A quick and dirty way of getting rid of these contents would be to
nullify them. For instance, nullify anything over 400MB:

UPDATE docfile SET docfilecontents=NULL
   WHERE octet_length(docfilecontents) > 1024*1024*400;

Or a cleaner solution would be to delete them with the application if
that's possible. You may turn the above query into a SELECT that
retrieve the fields of interest (avoid SELECT * because of the huge
column).


Best regards,




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PgAgent on Windows

2018-11-06 Thread Marcio Meneguzzi
"Run a Job" means: in pgAdmin, right-click the name of the job and select the
option "Run Now".
Afterwards, when I open the Windows application log, a message like
"Couldn't create the primary connection (attempt 1): fe_sendauth: no password
supplied" is generated.
Thanks.




On Tue, Nov 6, 2018 at 11:54, Ron  wrote:

> On 11/06/2018 07:36 AM, Marcio Meneguzzi wrote:
>
> Hello,
>
> I´m trying use pgAgent on windows 10 and Windows Server 2012 R2.
> My version of PostgreSQL is 9.5.1.14
> Install and configure pgAgent with sucess, but, when I try run a Job,
> status is Failed with a message bellow:
>
> *"Couldn't create the primary connection (attempt 1): fe_sendauth: no
> password supplied" *
>
> My %APPDATA%\postgresql\pg_pass.conf is:
> localhost:5490:*:postgres:my_pass
>
>
>
> What do you mean by "run a job"?  Can you interactively log in?
>
>
> --
> Angular momentum makes the world go 'round.
>


Re: PgAgent on Windows

2018-11-06 Thread Stéphane Dunand

On 06/11/2018 at 14:36, Marcio Meneguzzi wrote:

Hello,

I´m trying use pgAgent on windows 10 and Windows Server 2012 R2.
My version of PostgreSQL is 9.5.1.14
Install and configure pgAgent with sucess, but, when I try run a Job, 
status is Failed with a message bellow:
*"Couldn't create the primary connection (attempt 1): fe_sendauth: no 
password supplied"

*

My %APPDATA%\postgresql\pg_pass.conf is:
localhost:5490:*:postgres:my_pass

Please, any help?

Thank´s,
Márcio





Hello,

check:
1) the user account that launches the pgAgent service; %APPDATA% is relative to
this user

2) pg_pass.conf must be in the folder %appdata%\roaming\postgresql
3) the host parameter on the pgAgent service command line must match the hostname in
pg_pass.conf


Stéphane


Re: Trouble Upgrading Postgres

2018-11-06 Thread Daniel Verite
Adrian Klaver wrote:

> > So there's no way it can deal with the contents over 500MB, and the
> > ones just under that limit may also be problematic.
> 
> To me that looks like a bug, putting data into a record you cannot get out.

Strictly speaking, it could probably get out with COPY in binary format,
but pg_dump doesn't use that.

It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve.
Some time ago, work was done to extend the 1GB limit
but eventually it got scratched. The thread in [1] discusses
many details of the problem and why the proposed solution
were mostly a band aid. Basically, the specs of COPY
and other internal aspects of Postgres are from the 32-bit era when 
putting the size of an entire CDROM in a single row/column was not
anticipated as a valid use case.
It's still a narrow use case today and applications that need to store
big pieces of data like that should slice them in chunks, a bit like in
pg_largeobject, except in much larger chunks, like 1MB.
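
Just as a sketch (the table and column names here are made up), such a
chunked layout could look like:

  CREATE TABLE docfile_chunk (
      docfile_id bigint  NOT NULL,
      chunk_no   integer NOT NULL,
      data       bytea   NOT NULL,   -- keep each chunk around 1MB
      PRIMARY KEY (docfile_id, chunk_no)
  );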

[1] pg_dump / copy bugs with "big lines" ?
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr

Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: PgAgent on Windows

2018-11-06 Thread Marcio Meneguzzi
Thanks, Stéphane.
The database_name parameter in my pg_pass.conf was postgres, but when I changed
it to *, the job ran successfully.
Thanks again,

Márcio.




On Tue, Nov 6, 2018 at 14:15, Stéphane Dunand 
wrote:

> On 06/11/2018 at 14:36, Marcio Meneguzzi wrote:
>
> Hello,
>
> I´m trying use pgAgent on windows 10 and Windows Server 2012 R2.
> My version of PostgreSQL is 9.5.1.14
> Install and configure pgAgent with sucess, but, when I try run a Job,
> status is Failed with a message bellow:
>
> *"Couldn't create the primary connection (attempt 1): fe_sendauth: no
> password supplied" *
>
> My %APPDATA%\postgresql\pg_pass.conf is:
> localhost:5490:*:postgres:my_pass
>
> Please, any help?
>
> Thank´s,
> Márcio
>
>
>
> Hello,
>
> check
> 1) the user account who launch pgagent service. %appdata% is relative to
> this user
> 2) pg_pass.conf must be in folder %appdata%\roaming\postgresql
> 3) host parameter in pgagent service command line must match hostname in
> pg_pass.conf
>
> Stéphane
>


Re: Trouble Upgrading Postgres

2018-11-06 Thread Adrian Klaver

On 11/6/18 8:27 AM, Daniel Verite wrote:

Adrian Klaver wrote:


So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic.


To me that looks like a bug, putting data into a record you cannot get out.


Strictly speaking, it could probably get out with COPY in binary format,
but pg_dump doesn't use that.

It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve.
Some time ago, work was done to extend the 1GB limit
but eventually it got scratched. The thread in [1] discusses
many details of the problem and why the proposed solution
were mostly a band aid. Basically, the specs of COPY
and other internal aspects of Postgres are from the 32-bit era when
putting the size of an entire CDROM in a single row/column was not
anticipated as a valid use case.
It's still a narrow use case today and applications that need to store
big pieces of data like that should slice them in chunks, a bit like in
pg_largeobject, except in much larger chunks, like 1MB.



Should there not be some indication of this in the docs here?:

https://www.postgresql.org/docs/11/datatype-binary.html



[1] pg_dump / copy bugs with "big lines" ?
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr

Best regards,




--
Adrian Klaver
adrian.kla...@aklaver.com



why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
PG 10.5

I loaded 133 million rows to a wide table (more than 100 cols) via COPY. The 
table has
no index at this time. Since I am the only user I don't see any other activity.
Now when I run select count(*) on the table where I just loaded data, it runs
forever, more than 10 min and still running. Intrigued, I checked locks and saw
nothing.  Then I noticed something strange.  When select count(*) runs, PG is
writing to WAL logs, and that too a large amount. Why?
I suspect vacuum is getting triggered, but this is a brand new table with no
updates, so it should not.

Is there a SQL to peek into what PG is doing to write so much to WAL logs ?




Re: why select count(*) consumes wal logs

2018-11-06 Thread Michael Nolan
On Tue, Nov 6, 2018 at 11:08 AM Ravi Krishna  wrote:

> PG 10.5
>
> I loaded 133 million rows to a wide table (more than 100 cols) via COPY.
>

It's always a good idea after doing a large scale data load to do a vacuum
analyze on the table (or the entire database.)
--
Mike Nolan


Re: why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
Must be something to do with vacuum, as the second time I ran the SQL it did
not consume WAL logs.



Re: why select count(*) consumes wal logs

2018-11-06 Thread Ron

On 11/06/2018 11:12 AM, Michael Nolan wrote:
On Tue, Nov 6, 2018 at 11:08 AM Ravi Krishna > wrote:


PG 10.5

I loaded 133 million rows to a wide table (more than 100 cols) via COPY.


It's always a good idea after doing a large scale data load to do a vacuum 
analyze on the table (or the entire database.)




I understand the need to ANALYZE (populate the histograms needed by the 
dynamic optimizer), but why VACUUM (which is recommended after updates and 
deletes)?


Thanks

--
Angular momentum makes the world go 'round.


RE: why select count(*) consumes wal logs

2018-11-06 Thread Kumar, Virendra
I concur.
Why VACUUM when there is no update or deletes.

Regards,
Virendra

From: Ron [mailto:ronljohnso...@gmail.com]
Sent: Tuesday, November 06, 2018 12:20 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: why select count(*) consumes wal logs

On 11/06/2018 11:12 AM, Michael Nolan wrote:

On Tue, Nov 6, 2018 at 11:08 AM Ravi Krishna 
mailto:srkrish...@aol.com>> wrote:
PG 10.5

I loaded 133 million rows to a wide table (more than 100 cols) via COPY.

It's always a good idea after doing a large scale data load to do a vacuum 
analyze on the table (or the entire database.)


I understand the need to ANALYZE (populate the histograms needed by the dynamic 
optimizer), but why VACUUM (which is recommended after updates and deletes).

Thanks
--
Angular momentum makes the world go 'round.





Re: why select count(*) consumes wal logs

2018-11-06 Thread Tom Lane
Ravi Krishna  writes:
> I loaded 133 million rows to a wide table (more than 100 cols) via COPY. The 
> table has
> no index at this time. Since I am the only user I don't see any other 
> activity.
> Now when I run select count(*) on the table where I just loaded data, it runs 
> for ever, 
> more than 10min and still running. Intrigued, I checked locks and saw 
> nothing.  Then I noticed something
> strange.  When select count(*) runs, PG is writing to wal_logs, and that too 
> a large amount. Why?  

That represents setting the yes-this-row-is-committed hint bits on the
newly loaded rows.  The first access to any such row will set that bit,
whether it's a select or a VACUUM or whatever.
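
If you want to quantify it, a rough way on v10 is to note the WAL position
before and after the statement (the LSN values below are only placeholders
for what you would paste back in):

  SELECT pg_current_wal_lsn();
  -- run the count(*) here, then:
  SELECT pg_current_wal_lsn();
  -- bytes of WAL written between the two positions:
  SELECT pg_wal_lsn_diff('1/6E000000', '1/6D000000');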

regards, tom lane



Re: Trouble Upgrading Postgres

2018-11-06 Thread Tom Lane
Adrian Klaver  writes:
> On 11/6/18 8:27 AM, Daniel Verite wrote:
>> Adrian Klaver wrote:
>>> To me that looks like a bug, putting data into a record you cannot get out.

>> Strictly speaking, it could probably get out with COPY in binary format,
>> but pg_dump doesn't use that.

Another possibility, seeing that the problematic data is bytea, is that
it might depend on whether you use hex or escape bytea_output format.
Hex format is reliably twice the size of the stored data, but escape
format could be anywhere from the same size as the stored data to four
times the size, depending on the contents.  pg_dump is agnostic about this
and will just dump using the prevailing bytea_output setting, so you might
be able to get it to work by changing that setting.
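
For instance (the database name here is just a placeholder), something like

  ALTER DATABASE mydb SET bytea_output = 'escape';

would make new sessions, including pg_dump's, use escape format.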

regards, tom lane



Re: why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
>That represents setting the yes-this-row-is-committed hint bits on the
>newly loaded rows.  The first access to any such row will set that bit,
>whether it's a select or a VACUUM or whatever.

yes now I recollect reading this in a blog.  Thanks Tom.



Re: Fwd: Log file

2018-11-06 Thread Igor Korot
Hi, guys,
For some reason this reply was in my "Spam" folder (gmail service).
I guess moving to the new mailer service is not completely safe for "Spam".

So replying now - apologies for being late.

On Wed, Oct 31, 2018 at 2:00 AM Laurenz Albe  wrote:
>
> Igor Korot wrote:
> > Now is there a command to flush the log - delete the content of it?
>
> No, managing the logs is outside of PostgreSQL's responsibility.
>
> But it shouldn't be a problem to do this outside the database.
> Of course you could write a function in PostgreSQL that uses one
> of the "untrusted" procedural languages to do it for you.

Ok.
I guess I will have to write such a function.

>
> > All I'm looking for in the log are DDL commands - CREATE/ALTER/DELETE ones.
>
> You mean DROP, right?

Yes, sorry.

>
> You can set "log_statement = 'ddl'" for that.

That's what I did.
But now I need to find a way to read the log file by the regular user
and not the "postgres" one.

Thank you.

>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>



Re: why select count(*) consumes wal logs

2018-11-06 Thread Michael Nolan
On Tue, Nov 6, 2018 at 11:40 AM Tom Lane  wrote:

>
> That represents setting the yes-this-row-is-committed hint bits on the
> newly loaded rows.  The first access to any such row will set that bit,
> whether it's a select or a VACUUM or whatever.
>
Tom, does that include ANALYZE?
--
Mike Nolan


Re: why select count(*) consumes wal logs

2018-11-06 Thread Tom Lane
Michael Nolan  writes:
> On Tue, Nov 6, 2018 at 11:40 AM Tom Lane  wrote:
>> That represents setting the yes-this-row-is-committed hint bits on the
>> newly loaded rows.  The first access to any such row will set that bit,
>> whether it's a select or a VACUUM or whatever.

> Tom, does that include ANALYZE?

Yes, but remember that ANALYZE doesn't scan the whole table; it'll only
set the bit on rows it visits.

(I forget at the moment if it's guaranteed to set the bit on all rows
in each page it examines, or only on the rows it selects to sample.
But in any case it will not examine every page in the table.)

regards, tom lane



RE: Trouble Upgrading Postgres

2018-11-06 Thread bend
As someone pointed out, there is a limit with bytea (blobs).
To test if it is bytea, use a COPY with a select statement:

COPY ( select A, B, C, D ...etc FROM table ) TO 'outfile' ;

leaving out the bytea column. If this works, then one of the bytea columns is way too big.

Ben Duncan - Business Network Solutions, Inc. 336 Elton Road  Jackson MS, 39212
"Never attribute to malice, that which can be adequately explained by stupidity" - Hanlon's Razor


 Original Message 
Subject: Re: Trouble Upgrading Postgres
From: Tom Lane 
Date: Tue, November 06, 2018 11:53 am
To: Adrian Klaver 
Cc: Daniel Verite ,Charles Martin
,pgsql-general


Adrian Klaver  writes:
> On 11/6/18 8:27 AM, Daniel Verite wrote:
>> Adrian Klaver wrote:
>>> To me that looks like a bug, putting data into a record you cannot get out.

>> Strictly speaking, it could probably get out with COPY in binary format,
>> but pg_dump doesn't use that.

Another possibility, seeing that the problematic data is bytea, is that
it might depend on whether you use hex or escape bytea_output format.
Hex format is reliably twice the size of the stored data, but escape
format could be anywhere from the same size as the stored data to four
times the size, depending on the contents.  pg_dump is agnostic about this
and will just dump using the prevailing bytea_output setting, so you might
be able to get it to work by changing that setting.

			regards, tom lane







Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Ondřej Bouda wrote:

> So we dumped and restored all our databases. After that, the crash on DELETE
> never occurred (before, it was several times a day). However, the crash on
> UPDATE still occurs on specific rows. We are quite certain no ALTER TABLE
> statement was executed on the table after the restore.
> There are two AFTER INSERT OR UPDATE constraint triggers and one BEFORE
> INSERT OR UPDATE trigger on the table, all of which are implemented in
> plpgsql. Multiple physical servers, on separate databases with identical
> schema, crash on the same type of UPDATE query (different just in concrete
> values to be updated). The same code worked perfectly on 10.x.
> 
> See the attached backtrace below. Can we do something else to catch the bug?
> Or can we hope for this bug to be already fixed and released in the upcoming
> version?

Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
that one.  Can you share more details on this?  I think the failing
update is on table with oid=557732818, but I might be wrong.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Trouble Upgrading Postgres

2018-11-06 Thread Charles Martin
The column is a bytea.

I'm working out a way to limit the size on the front end.

Chuck


On Tue, Nov 6, 2018 at 1:44 PM  wrote:

> As someone pointed out, there is a limit with bytea (Blob's).
> To test if it is bytea, use a COPY with a select statement :
>
> COPY ( select A, B,C ,D ...etc FROM table ) TO 'outfile' ;
> Leaveing out the bytea column.
> If this works, then then one of the bytea columns is way to big.
>
> Ben Duncan - Business Network Solutions, Inc. 336 Elton Road Jackson MS,
> 39212
> "Never attribute to malice, that which can be adequately explained by
> stupidity"
> - Hanlon's Razor
>
>
>  Original Message 
> Subject: Re: Trouble Upgrading Postgres
> From: Tom Lane 
> Date: Tue, November 06, 2018 11:53 am
> To: Adrian Klaver 
> Cc: Daniel Verite , Charles Martin
> , pgsql-general
> 
>
> Adrian Klaver  writes:
> > On 11/6/18 8:27 AM, Daniel Verite wrote:
> >> Adrian Klaver wrote:
> >>> To me that looks like a bug, putting data into a record you cannot get
> out.
>
> >> Strictly speaking, it could probably get out with COPY in binary format,
> >> but pg_dump doesn't use that.
>
> Another possibility, seeing that the problematic data is bytea, is that
> it might depend on whether you use hex or escape bytea_output format.
> Hex format is reliably twice the size of the stored data, but escape
> format could be anywhere from the same size as the stored data to four
> times the size, depending on the contents. pg_dump is agnostic about this
> and will just dump using the prevailing bytea_output setting, so you might
> be able to get it to work by changing that setting.
>
> regards, tom lane
>
>


Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
> that one.  Can you share more details on this?  I think the failing
> update is on table with oid=557732818, but I might be wrong.

That's exactly the table, public.schedulecard.
We issue an UPDATE changing some of its columns. E.g.,

UPDATE public.schedulecard SET ext_ident=null, 
rotates=false,period_num=1,period_day=2 WHERE id=3817


crashes the server. See the main log:

2018-11-06 17:29:40.031 CET [30202] LOG:  server process (PID 29879) was 
terminated by signal 11: Segmentation fault
2018-11-06 17:29:40.031 CET [30202] DETAIL:  Failed process was running: 
UPDATE public.schedulecard SET ext_ident=null, 
rotates=false,period_num=1,period_day=2 WHERE id=3817;

select * from schedulecard where id = 3817
2018-11-06 17:29:40.031 CET [30202] LOG:  terminating any other active 
server processes


The query is reproducible - it always makes the server segfault. It 
crashes on multiple rows in that table -- actually, I haven't found any 
non-failing row yet.


I thought triggers should be suspected. However, even when all the three 
triggers have been disabled (ALTER TABLE DISABLE TRIGGER), the UPDATE 
crashed the server.


What else could I try?

Regards,
Ondřej Bouda



On 6. 11. 2018 at 19:52, Alvaro Herrera wrote:

On 2018-Nov-06, Ondřej Bouda wrote:


So we dumped and restored all our databases. After that, the crash on DELETE
never occurred (before, it was several times a day). However, the crash on
UPDATE still occurs on specific rows. We are quite certain no ALTER TABLE
statement was executed on the table after the restore.
There are two AFTER INSERT OR UPDATE constraint triggers and one BEFORE
INSERT OR UPDATE trigger on the table, all of which are implemented in
plpgsql. Multiple physical servers, on separate databases with identical
schema, crash on the same type of UPDATE query (different just in concrete
values to be updated). The same code worked perfectly on 10.x.

See the attached backtrace below. Can we do something else to catch the bug?
Or can we hope for this bug to be already fixed and released in the upcoming
version?


Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
that one.  Can you share more details on this?  I think the failing
update is on table with oid=557732818, but I might be wrong.





Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Ondřej Bouda wrote:

> > Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
> > that one.  Can you share more details on this?  I think the failing
> > update is on table with oid=557732818, but I might be wrong.
> 
> That's exactly the table, public.schedulecard.
> We issue an UPDATE changing some of its columns. E.g.,
> 
> UPDATE public.schedulecard SET ext_ident=null,
> rotates=false,period_num=1,period_day=2 WHERE id=3817

What indexes are there in this table?  Indexes on expressions are
particularly suspect.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Question about index on different tablespace and rebuild it

2018-11-06 Thread Condor

On 05-11-2018 10:56, Condor wrote:

Hello,

I have a database that uses an index on a different tablespace (nvme). I
read the documentation about tablespaces and understand that a tablespace cannot
be treated as an autonomous collection of data files.
My question is: is this always true? I mean, if I have a table .. okay,
here is an example:

create table t1 ( i integer);
create index t1_i_idx on t1 using btree (i) tablespace nvme;

In this case only the index is on the different tablespace nvme; it is
not part of the data stored in the table. I mean, if I drop the index, the data
stored in the table will still be untouched and not damaged.
So in this case, if I lose the nvme tablespace that is stored on a
different nvme drive, can I insert a new one and rebuild all index files
with reindexdb, for example, or some other tool (like myisamchk for
mysql/mariadb, or something like that) that can
rebuild index files when the DB server is offline?


Regards,
HS



Can someone answer?



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

On 6. 11. 2018 at 20:45, Alvaro Herrera wrote:

What indexes are there in this table?  Indexes on expressions are
particularly suspect.


There are some simple btree indexes due to foreign keys, and this one, 
which seems to be the cause of the crashes:


CREATE INDEX schedulecard_overlap_idx
ON public.schedulecard USING gist
(scheduletemplate_id, (period_day::integer % 7), timerange)
TABLESPACE pg_default;

When I drop it, the UPDATE statement gets executed without any problems.
When I create the index again, the same UPDATE crashes the server.


Regards,
Ondřej Bouda



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda  writes:
> I thought triggers should be suspected. However, even when all the three 
> triggers have been disabled (ALTER TABLE DISABLE TRIGGER), the UPDATE 
> crashed the server.

Foreign-key triggers too?

> What else could I try?

A stack trace would be really helpful.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Alvaro Herrera  writes:
> What indexes are there in this table?  Indexes on expressions are
> particularly suspect.

I had not looked at the "projection index" code before, and now that
I have, I am desperately unhappy with it.  It's seriously buggy,
badly underdocumented, unbelievably inefficient, and it looks like
it creates a whole new set of reasons for unwanted recursion inside
the relcache.  The relcache has got NO business calling cost_qual_eval,
for example.

Having said that, I'm not real sure how it ends up with this crash.
It looks like ProjIndexIsUnchanged is getting the wrong value for
the index column's typbyval, but how could that be?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda  writes:
> There are some simple btree indexes due to foreign keys, and this one, 
> which seems as the cause for the crashes:

> CREATE INDEX schedulecard_overlap_idx
>  ON public.schedulecard USING gist
>  (scheduletemplate_id, (period_day::integer % 7), timerange)
>  TABLESPACE pg_default;

> When I drop it, the UPDATE statement gets executed without any problems.
> When I create the index again, the same UPDATE crashes the server.

Hm, what are the data types of those columns?

And I assume you've got btree_gist installed?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Foreign-key triggers too?

There are four trivial foreign keys from public.schedulecard like this:

ALTER TABLE public.schedulecard
ADD CONSTRAINT fk_schedulecard_schedulecard FOREIGN KEY 
(schedulecard_id)

REFERENCES public.schedulecard (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE RESTRICT;

(two of them having ON UPDATE CASCADE, but all other characteristics are 
the same)


Also, several tables have foreign keys to public.schedulecard.id, some 
of which are ON UPDATE CASCADE, others NO ACTION.



> A stack trace would be really helpful.

Already sent earlier, maybe it got lost. Resending it:

[New LWP 8307]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: 11/main: usr mydb 127.0.0.1(56829) 
UPDATE  '.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
No locals.
#1  0x7f6a6b993296 in datumIsEqual (value1=12, value2=12, 
typByVal=, typLen=16) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/utils/adt/datum.c:249

size1 = 16
size2 = 
s1 = 0xc 
s2 = 0xc 
res = 
#2  0x7f6a6b6b89de in ProjIndexIsUnchanged (newtup=0x7f6a6d5ccdd0, 
oldtup=0x7ffc9c1c1270, relation=0x7f6a6b4c3680) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4539

att = 
indexOid = 
indexDesc = 0x7f6a6b4c7cf0
indexInfo = 0x7f6a6d5cdbf8
i = 0
indexoidlist = 
econtext = 0x7f6a6d4c2c10
new_isnull = {false, false, false, 156, 252, 127, false, false, 
44, 29, 128, 107, 106, 127, false, false, 203, 89, 76, 107, 106, 127, 
false, false, 123, false, false, false, false, false, false, false}

slot = 0x7f6a6d5cf8b8
equals = true
old_isnull = {false, false, false, 109, 106, 127, false, false, 
69, 46, 80, 109, 106, 127, false, false, 64, 46, 80, 109, 106, 127, 
false, false, 72, 46, 80, 109, 106, 127, false, false}
new_values = {12, 2, 140095078059546, 16, 140722927572064, 
140095077232192, 140095077233456, 140722927572079, 0, 0, 
140722927572048, 140095054460096, 140095077050704, 140095078061816, 5, 
4, 140722927572096, 140095046814123, 140722927572096, 140095046825162, 
140095043418528, 140095077051552, 140722927572224, 140095046865098, 
60129542543, 23274993280, 140095043418528, 140095077050704, 
140095043417728, 140095077050976, 140095078056288, 140095043422512}

indexno = 2
l = 0x7f6a6d503840
estate = 0x7f6a6d4c2a00
old_values = {12, 1, 140078631611970, 1, 140095077050704, 1024, 
140095077232192, 0, 140722927571760, 140095049517261, 140722927571808, 
140722927571776, 140722927571792, 140095046814329, 18, 140095077232200, 
140095076976896, 140095078056288, 22, 100, 140722927571856, 
140722927571920, 140095046831813, 140722927571856, 140095046842624, 0, 
140095077226392, 3, 140095077232192, 140095077232528, 140095077232192, 
140095078061816}
#3  heap_update (relation=relation@entry=0x7f6a6b4c3680, 
otid=otid@entry=0x7ffc9c1c1690, newtup=newtup@entry=0x7f6a6d5ccdd0, 
cid=0, crosscheck=, wait=wait@entry=true, 
hufd=hufd@entry=0x7ffc9c1c15b0, lockmode=lockmode@entry=0x7ffc9c1c15a4) 
at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4230

result = 
xid = 245762845
hot_attrs = 0x7f6a6d503740
proj_idx_attrs = 0x7f6a6d503758
key_attrs = 0x7f6a6d503770
id_attrs = 0x7f6a6d503788
interesting_attrs = 0x7f6a6d5037a0
lp = 
oldtup = {t_len = 85, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 
45}, ip_posid = 10}, t_tableOid = 557732818, t_data = 0x7f669913ee10}

heaptup = 0x7f6a6d5ccdd0
old_key_tuple = 0x0
old_key_copied = false
page = 0x7f669913d180 "]="
block = 
mxact_status = 
buffer = 
newbuf = 14683
vmbuffer = 0
vmbuffer_new = 0
need_toast = 
newtupsize = 
pagefree = 
have_tuple_lock = false
iscombo = false
use_hot_update = false
hot_attrs_checked = 
key_intact = 
all_visible_cleared = false
all_visible_cleared_new = false
checked_lockers = 
locker_remains = 
xmax_new_tuple = 
xmax_old_tuple = 245762845
infomask_old_tuple = 0
infomask2_old_tuple = 8192
infomask_new_tuple = 144
infomask2_new_tuple = 0
__func__ = "heap_update"
#4  0x7f6a6b82f2c9 in ExecUpdate 
(mtstate=mtstate@entry=0x7f6a6d4d81c0, tupleid=0x7ffc9c1c1690, 
oldtuple=0x0, slot=0x7f6a6d5cc160, 
planSlot=planSlot@entry=0x7f6a6d567578, 
epqstate=epqstate@entry=0x7f6a6d4d8280, 
estate=estate@entry=0x7f6a6d4d6a60, canSetTag=true) a

Re: Fwd: Log file

2018-11-06 Thread Ron

On 11/06/2018 12:06 PM, Igor Korot wrote:
[snip]

Ok.
I guess I will have to write such function.


Cron and the relevant log_* config variables should solve your problems.

--
Angular momentum makes the world go 'round.



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Hm, what are the data types of those columns?

scheduletemplate_id bigint NOT NULL,
period_day smallint NOT NULL,
timerange timerange NOT NULL,

where timerange is defined as follows:
CREATE TYPE public.timerange AS RANGE
(
SUBTYPE=time,
SUBTYPE_OPCLASS = time_ops
);


> And I assume you've got btree_gist installed?

Yes, version 1.5.


Regards,
Ondřej Bouda



Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron

Hi,

v9.6.6

Why do these idle queries (pids 8357, 11260 and 11355) "remember" the 
queries they ran instead of having the text "<IDLE>"?


postgres=# select pid,
   xact_start as txn_start,
   to_char(EXTRACT(epoch FROM now() - query_start), '999,999.') 
as query_age_secs,

   state,
   cast (query as char(20))
from pg_stat_activity
where datname <> 'postgres'
order by query_start;
;

 pid  |   txn_start   | query_age_secs | state  |    query
--+---+++--
26538 | 2018-11-06 14:40:55.053842-05 |    3,451.9853  | active | SELECT  
to_char(b.pr
27497 | 2018-11-06 14:59:26.946854-05 |    2,340.5871  | active | SELECT  
to_char(b.pr
29110 | 2018-11-06 14:59:50.479934-05 |    2,317.1725  | active | SELECT  
to_char(b.pr
* 8357 |   |    1,324.1356  | idle   | SELECT  
CAST(c.ecid*
 9016 | 2018-11-06 15:34:51.535476-05 |  215.8391  | active | SELECT  
to_char(b.pr
 9810 | 2018-11-06 15:35:00.483292-05 |  206.7676  | active | SELECT  
to_char(b.pr
*11260 |   |  190.0814  | idle   | WITH 
configs AS (SEL*
12800 | 2018-11-06 15:35:49.540631-05 |  157.9880  | active | SELECT  
to_char(b.pr
*11355 |   |   42.9772  | idle   | SELECT 
CASE WHEN typ*
22618 | 2018-11-06 15:38:02.317146-05 |   25.3219  | active | SELECT  
to_char(b.pr
23176 | 2018-11-06 15:38:12.90985-05  |   14.7325  | active | SELECT  
to_char(b.pr
23566 | 2018-11-06 15:38:28.802919-05 | .6116  | active | select 
tms.TRIGGER.T
23588 | 2018-11-06 15:38:29.207373-05 | .2089  | active | select 
cds.IMAGE_RPS
23590 | 2018-11-06 15:38:29.233724-05 | .1814  | active | select 
tms.TRIGGER.T
23584 | 2018-11-06 15:38:29.046393-05 | .0442  | active | select 
tms.MARK_SENS
23595 | 2018-11-06 15:38:29.403969-05 | .0001  | active | select 
JOB_STEP.JOB_

(16 rows)


--
Angular momentum makes the world go 'round.


Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread Hellmuth Vargas
Hi
In the documentation describes the data in this field:

https://www.postgresql.org/docs/10/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

"Text of this backend's most recent query. If state is active this field
shows the currently executing query. In all other states, it shows the last
query that was executed. "


On Tue, Nov 6, 2018 at 15:46, Ron (ronljohnso...@gmail.com)
wrote:

> Hi,
>
> v9.6.6
>
> Why do these idle queries (pids 8357, 11260 and 11355) "remember" the
> queries they ran instead of having the text "<IDLE>"?
>
> postgres=# select pid,
>xact_start as txn_start,
>to_char(EXTRACT(epoch FROM now() - query_start),
> '999,999.') as query_age_secs,
>state,
>cast (query as char(20))
> from pg_stat_activity
> where datname <> 'postgres'
> order by query_start;
> ;
>
>  pid  |   txn_start   | query_age_secs | state  |
> query
>
> --+---+++--
> 26538 | 2018-11-06 14:40:55.053842-05 |3,451.9853  | active | SELECT
> to_char(b.pr
> 27497 | 2018-11-06 14:59:26.946854-05 |2,340.5871  | active | SELECT
> to_char(b.pr
> 29110 | 2018-11-06 14:59:50.479934-05 |2,317.1725  | active | SELECT
> to_char(b.pr
> * 8357 |   |1,324.1356  | idle   | SELECT
> CAST(c.ecid*
>  9016 | 2018-11-06 15:34:51.535476-05 |  215.8391  | active | SELECT
> to_char(b.pr
>  9810 | 2018-11-06 15:35:00.483292-05 |  206.7676  | active | SELECT
> to_char(b.pr
> *11260 |   |  190.0814  | idle   | WITH
> configs AS (SEL*
> 12800 | 2018-11-06 15:35:49.540631-05 |  157.9880  | active | SELECT
> to_char(b.pr
> *11355 |   |   42.9772  | idle   | SELECT
> CASE WHEN typ*
> 22618 | 2018-11-06 15:38:02.317146-05 |   25.3219  | active | SELECT
> to_char(b.pr
> 23176 | 2018-11-06 15:38:12.90985-05  |   14.7325  | active | SELECT
> to_char(b.pr
> 23566 | 2018-11-06 15:38:28.802919-05 | .6116  | active | select
> tms.TRIGGER.T
> 23588 | 2018-11-06 15:38:29.207373-05 | .2089  | active | select
> cds.IMAGE_RPS
> 23590 | 2018-11-06 15:38:29.233724-05 | .1814  | active | select
> tms.TRIGGER.T
> 23584 | 2018-11-06 15:38:29.046393-05 | .0442  | active | select
> tms.MARK_SENS
> 23595 | 2018-11-06 15:38:29.403969-05 | .0001  | active | select
> JOB_STEP.JOB_
> (16 rows)
>
>
> --
> Angular momentum makes the world go 'round.
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread David G. Johnston
On Tue, Nov 6, 2018 at 1:46 PM Ron  wrote:
> Why do these idle queries (pids 8357, 11260 and 11355) "remember" the queries 
> they ran instead of having the text "<IDLE>"?

Because having the text "<IDLE>" is redundant with the field status=idle.

David J.



Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron
Right.  But when does the query text become "<IDLE>"?  Or has that become 
obsolete? (We recently migrated from 8.4.)


On 11/06/2018 02:53 PM, Hellmuth Vargas wrote:

Hi
In the documentation describes the data in this field:

https://www.postgresql.org/docs/10/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW 

"Text of this backend's most recent query. If |state| is |active| this 
field shows the currently executing query. In all other states, it shows 
the last query that was executed. "



On Tue, Nov 6, 2018 at 15:46, Ron (ronljohnso...@gmail.com 
) wrote:


Hi,

v9.6.6

Why do these idle queries (pids 8357, 11260 and 11355) "remember" the
queries they ran instead of having the text "<IDLE>"?

postgres=# select pid,
   xact_start as txn_start,
   to_char(EXTRACT(epoch FROM now() - query_start),
'999,999.') as query_age_secs,
   state,
   cast (query as char(20))
from pg_stat_activity
where datname <> 'postgres'
order by query_start;
;

 pid  |   txn_start   | query_age_secs | state 
|    query

--+---+++--
26538 | 2018-11-06 14:40:55.053842-05 | 3,451.9853  | active | SELECT 
to_char(b.pr 
27497 | 2018-11-06 14:59:26.946854-05 | 2,340.5871  | active | SELECT 
to_char(b.pr 
29110 | 2018-11-06 14:59:50.479934-05 | 2,317.1725  | active | SELECT 
to_char(b.pr 
* 8357 |   | 1,324.1356  | idle   |
SELECT  CAST(c.ecid*
 9016 | 2018-11-06 15:34:51.535476-05 | 215.8391  | active | SELECT 
to_char(b.pr 
 9810 | 2018-11-06 15:35:00.483292-05 | 206.7676  | active | SELECT 
to_char(b.pr 
*11260 |   | 190.0814  | idle   | WITH
configs AS (SEL*
12800 | 2018-11-06 15:35:49.540631-05 | 157.9880  | active | SELECT 
to_char(b.pr 
*11355 |   | 42.9772  | idle   | SELECT
CASE WHEN typ*
22618 | 2018-11-06 15:38:02.317146-05 | 25.3219  | active | SELECT 
to_char(b.pr 
23176 | 2018-11-06 15:38:12.90985-05  | 14.7325  | active | SELECT 
to_char(b.pr 
23566 | 2018-11-06 15:38:28.802919-05 | .6116  | active | select
tms.TRIGGER.T
23588 | 2018-11-06 15:38:29.207373-05 | .2089  | active | select
cds.IMAGE_RPS
23590 | 2018-11-06 15:38:29.233724-05 | .1814  | active | select
tms.TRIGGER.T
23584 | 2018-11-06 15:38:29.046393-05 | .0442  | active | select
tms.MARK_SENS
23595 | 2018-11-06 15:38:29.403969-05 | .0001  | active | select
JOB_STEP.JOB_
(16 rows)


-- 
Angular momentum makes the world go 'round.




--
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate



--
Angular momentum makes the world go 'round.


Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda  writes:
>>> Hm, what are the data types of those columns?

> scheduletemplate_id bigint NOT NULL,
> period_day smallint NOT NULL,
> timerange timerange NOT NULL,

OK, so here's a minimal reproducer:

drop table schedulecard;

create table schedulecard (
  scheduletemplate_id bigint NOT NULL,
  period_day smallint NOT NULL
);

CREATE INDEX schedulecard_overlap_idx
 ON schedulecard USING gist
 (scheduletemplate_id, (period_day::integer % 7));

insert into schedulecard values(12, 1);

update schedulecard set period_day = period_day + 7;

Interestingly, it doesn't crash if I change the index type to btree,
which I was not expecting because the crashing code seems pretty
independent of the index type.

Haven't traced further than that yet.

regards, tom lane



Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread David G. Johnston
On Tue, Nov 6, 2018 at 1:59 PM Ron  wrote:
>
> Right.  But when does the query text become "<IDLE>"?  Or has that become 
> obsolete? (We recently migrated from 8.4.)

That behavior changed sometime around 9.0; since it always shows the
last query executed it logically follows that it will never show the
placeholder "" (I suppose it might do so upon initial connect if
no queries have been sent yet...not sure what it says then or even if
it is possible)

David J.



Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron

On 11/06/2018 03:04 PM, David G. Johnston wrote:

On Tue, Nov 6, 2018 at 1:59 PM Ron  wrote:

Right.  But when does the query text become "<IDLE>"?  Or has that become 
obsolete? (We recently migrated from 8.4.)

That behavior changed sometime around 9.0; since it always shows the
last query executed it logically follows that it will never show the
placeholder "" (I suppose it might do so upon initial connect if
no queries have been sent yet...not sure what it says then or even if
it is possible)


So... obsolete.  Thanks.

--
Angular momentum makes the world go 'round.



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
I wrote:
> Interestingly, it doesn't crash if I change the index type to btree,
> which I was not expecting because the crashing code seems pretty
> independent of the index type.

Oh ... duh.  The problem here is that ProjIndexIsUnchanged thinks that
the type of the index column is identical to the type of the source
datum for it, which is not true for any opclass making use of the
opckeytype property.

Ondřej, as a short-term workaround you could prevent the crash
by setting that index's recheck_on_update property to false.

regards, tom lane



Re: why select count(*) consumes wal logs

2018-11-06 Thread Bruno Lavoie
On Tue, Nov 6, 2018, 12:40 PM, Tom Lane  wrote:

> Ravi Krishna  writes:
> > I loaded 133 million rows to a wide table (more than 100 cols) via COPY.
> The table has
> > no index at this time. Since I am the only user I don't see any other
> activity.
> > Now when I run select count(*) on the table where I just loaded data, it
> runs for ever,
> > more than 10min and still running. Intrigued, I checked locks and saw
> nothing.  Then I noticed something
> > strange.  When select count(*) runs, PG is writing to wal_logs, and that
> too a large amount. Why?
>
> That represents setting the yes-this-row-is-committed hint bits on the
> newly loaded rows.  The first access to any such row will set that bit,
> whether it's a select or a VACUUM or whatever.
>
> regards, tom lane
>


And IIRC, it can generate high WAL traffic, since the first page change
after a checkpoint is done with a full-page write. And you said that it's
happening on a big table with wide rows.

>


Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

Ondřej, as a short-term workaround you could prevent the crash
by setting that index's recheck_on_update property to false.


Thanks for the tip. I am unsuccessful using it, though:

# ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
FALSE);


ERROR:  unrecognized parameter "recheck_on_update"


Creating a new index is wrong, too:

# CREATE INDEX schedulecard_overlap_idx2
ON public.schedulecard USING gist
(scheduletemplate_id, (period_day::integer % 7), timerange)
WITH (recheck_on_update = FALSE);

ERROR:  unrecognized parameter "recheck_on_update"


It only succeeds if not USING gist:

# CREATE INDEX schedulecard_overlap_idx2
ON public.schedulecard
(scheduletemplate_id, (period_day::integer % 7), timerange)
WITH (recheck_on_update = FALSE);

CREATE INDEX


Is there any other workaround for a gist index, please?
Maybe we will just drop the index until the bug gets fixed - better slow 
queries than crashing servers...
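
(i.e. something like

  DROP INDEX public.schedulecard_overlap_idx;

and recreate it later with the same CREATE INDEX ... USING gist statement.)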


Thanks,
Ondřej Bouda





Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda  writes:
>> Ondřej, as a short-term workaround you could prevent the crash
>> by setting that index's recheck_on_update property to false.

> Thanks for the tip. I am unsuccessful using it, though:
> # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
> FALSE);
> ERROR:  unrecognized parameter "recheck_on_update"

Oh, for crying out loud.  That's yet a different bug.
I'm not sure that it's the fault of the recheck_on_update
feature proper though; it might be a pre-existing bug in
the reloptions code.  Looks like somebody forgot to list
RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
fault of commit c203d6cf8 or was it busted before?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Tom Lane wrote:

> Ondřej Bouda  writes:
> >> Ondřej, as a short-term workaround you could prevent the crash
> >> by setting that index's recheck_on_update property to false.
> 
> > Thanks for the tip. I am unsuccessful using it, though:
> > # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
> > FALSE);
> > ERROR:  unrecognized parameter "recheck_on_update"
> 
> Oh, for crying out loud.  That's yet a different bug.
> I'm not sure that it's the fault of the recheck_on_update
> feature proper though; it might be a pre-existing bug in
> the reloptions code.  Looks like somebody forgot to list
> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> fault of commit c203d6cf8 or was it busted before?

RELOPT_KIND_INDEX was invented by that commit, looks like :-(

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Andres Freund
On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
> Ondřej Bouda writes:
> >> Ondřej, as a short-term workaround you could prevent the crash
> >> by setting that index's recheck_on_update property to false.
> 
> > Thanks for the tip. I am unsuccessful using it, though:
> > # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
> > FALSE);
> > ERROR:  unrecognized parameter "recheck_on_update"
> 
> Oh, for crying out loud.  That's yet a different bug.
> I'm not sure that it's the fault of the recheck_on_update
> feature proper though; it might be a pre-existing bug in
> the reloptions code.  Looks like somebody forgot to list
> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> fault of commit c203d6cf8 or was it busted before?

Looks new:
+   RELOPT_KIND_INDEX = 
RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,

there aren't any other "for all indexes" type options, so the whole
category didn't exist before.

It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
wouldn't have been omitted: It breaks index am extensibility.

Greetings,

Andres Freund



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Andres Freund  writes:
> On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
>> Looks like somebody forgot to list
>> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
>> fault of commit c203d6cf8 or was it busted before?

> Looks new:
> +   RELOPT_KIND_INDEX = 
> RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> there aren't any other "for all indexes" type options, so the whole
> category didn't exist before.

> It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> wouldn't have been omitted: It breaks index am extensibility.

Hm, well, that enum already knows about all index types, doesn't it?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tomas Vondra
On 11/6/18 10:54 PM, Andres Freund wrote:
> On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
>> Ondřej Bouda writes:
 Ondřej, as a short-term workaround you could prevent the crash
 by setting that index's recheck_on_update property to false.
>>
>>> Thanks for the tip. I am unsuccessful using it, though:
>>> # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
>>> FALSE);
>>> ERROR:  unrecognized parameter "recheck_on_update"
>>
>> Oh, for crying out loud.  That's yet a different bug.
>> I'm not sure that it's the fault of the recheck_on_update
>> feature proper though; it might be a pre-existing bug in
>> the reloptions code.  Looks like somebody forgot to list
>> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
>> fault of commit c203d6cf8 or was it busted before?
> 
> Looks new:
> +   RELOPT_KIND_INDEX = 
> RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> 
> there aren't any other "for all indexes" type options, so the whole
> category didn't exist before.
> 
> It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> wouldn't have been omitted: It breaks index am extensibility.
> 

Does it? The RELOPT_KIND_* stuff is hard-coded in reloptions.h anyway,
so I'm not sure how this particular thing makes it less extensible?

That being said, we also have RELOPT_KIND_BRIN, and that seems to be
missing from RELOPT_KIND_INDEX too (and AFAICS the optimization works
for all index types).

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Andres Freund
Hi,

On 2018-11-06 23:11:29 +0100, Tomas Vondra wrote:
> On 11/6/18 10:54 PM, Andres Freund wrote:
> > Looks new:
> > +   RELOPT_KIND_INDEX = 
> > RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> > 
> > there aren't any other "for all indexes" type options, so the whole
> > category didn't exist before.
> > 
> > It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> > wouldn't have been omitted: It breaks index am extensibility.
> > 
> 
> Does it? The RELOPT_KIND_* stuff is hard-coded in reloptions.h anyway,
> so I'm not sure how this particular thing makes it less extensible?

Well, you can create new index AMs in extensions these days, but given
the relopt design above, the feature cannot be disabled for them. Yes,
there's *currently* probably no great way to have reloptions across all
potential index types, but that's not an excuse for adding something
broken.

Greetings,

Andres Freund



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Andres Freund
Hi,

On 2018-11-06 17:11:40 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
> >> Looks like somebody forgot to list
> >> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> >> fault of commit c203d6cf8 or was it busted before?
> 
> > Looks new:
> > +   RELOPT_KIND_INDEX = 
> > RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> > there aren't any other "for all indexes" type options, so the whole
> > category didn't exist before.
> 
> > It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> > wouldn't have been omitted: It breaks index am extensibility.
> 
> Hm, well, that enum already knows about all index types, doesn't it?

Not quite sure what you mean.

Before c203d6cf8 there weren't reloptions across index types. But after
it, if one adds a new index AM via an extension, one can't set
recheck_on_update = false for indexes using it, even though the feature
affects such indexes. We support adding indexes AMs at runtime these
days, including WAL logging (even though that's a bit
voluminous). There's even a contrib index AM...

The list of AMs is supposed to be extensible at runtime, cf
add_reloption_kind().

Greetings,

Andres Freund
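
For context, the index AMs installed in a given server are visible from SQL;
a quick sanity check (PostgreSQL 9.6 or later), not part of any fix:

-- list index access methods; contrib AMs such as bloom show up here
-- once the extension is installed
SELECT amname FROM pg_am WHERE amtype = 'i';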



recursion in plpgsql

2018-11-06 Thread David Gauthier
Hi:

I'm trying/failing to write a recursive plpgsql function where the function
tries to operate on a hierarchy of records in a reflexive table
(parent-child-grandchild type of recursion).

I tried with a cursor, but got a "cursor already in use" error.  So that
looks like scoping.

I know I did this sort of thing in the past, but I can't remember if I used
cursors or some other construct to traverse the hierarchy.

Here's the code that's failing...



create or replace function spk_fix_areas(parent_id int)
 returns text as $$

  declare
par_area text;
child int;
child_node_curr cursor for select id from spk_ver_node where parent =
parent_id;
area_id int;
area_area text;
  begin

  select area into par_area from spk_ver_task_area where id = parent_id;

  open child_node_curr;

  loop

fetch child_node_curr into child;
exit when not found;

raise notice 'child: %',child;

select id,area into area_id,area_area from spk_ver_task_area where id =
child and area = par_area;
continue when found;

raise notice 'attempting insert child = %, area = %',child,par_area;
insert into spk_ver_task_area (id,area) values (child,par_area);

select spk_fix_areas(child);

  end loop;

  return('done');


  end;
$$ language plpgsql;

===

Thanks for any help !


Re: recursion in plpgsql

2018-11-06 Thread Steve Crawford
On Tue, Nov 6, 2018 at 2:54 PM David Gauthier 
wrote:

> Hi:
>
> I'm trying/failing to write a recursive plpgsql function where the
> function tries to operate on a hierarchy of records in a reflexive table.
> parent-child-grandchild type of recursion.
>
> I tried with a cursor, but got a "cursor already in use" error.  So that
> looks like scoping.
>
> I know I did this sort of thing in the past, but I can't remember if I
> used cursors or some other construct to traverse the hierarchy.
>

Recursive common-table-expression queries would be the typical way. Perhaps
that's what you used before:

 https://www.postgresql.org/docs/current/queries-with.html

Cheers,
Steve
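
A minimal sketch of that approach, reusing the spk_ver_node table from the
function above (the starting id is hypothetical):

WITH RECURSIVE subtree AS (
    SELECT id, parent
    FROM spk_ver_node
    WHERE id = 42                       -- hypothetical root id
  UNION ALL
    SELECT n.id, n.parent
    FROM spk_ver_node n
    JOIN subtree s ON n.parent = s.id   -- parent -> child -> grandchild ...
)
SELECT * FROM subtree;

Each iteration adds the children of the rows found at the previous level, so
the whole hierarchy comes back in one statement.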


Re: recursion in plpgsql

2018-11-06 Thread Tom Lane
David Gauthier  writes:
> I'm trying/failing to write a recursive plpgsql function where the function
> tries to operate on a hierarchy of records in a reflexive table.
> parent-child-grandchild type of recursion.
> I tried with a cursor, but got a "cursor already in use" error.  So that
> looks like scoping.

IIRC, the "portal" underlying a plpgsql cursor just gets the same name
as the cursor variable by default, so you'll get portal-name conflicts
with the coding style you show here.

It's possible to avoid that by ensuring that each cursor gets a different
portal name.  I'm too lazy to check the details right now, but at the
very least there's a way to do it by declaring the variable as "refcursor"
and assigning it a different name at each nesting depth.  There might be
some more elegant solution, too.

regards, tom lane
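
As an illustration of that approach, a sketch adapted from the function in the
original post (only the cursor handling and the recursive call change; the
portal-name prefix is arbitrary):

create or replace function spk_fix_areas(parent_id int)
 returns text as $$
  declare
    par_area text;
    child int;
    child_node_curr refcursor;   -- unbound cursor variable instead of a bound cursor
    area_id int;
    area_area text;
  begin
    select area into par_area from spk_ver_task_area where id = parent_id;

    -- give each nesting depth its own portal name, so recursive calls
    -- do not collide on the default name "child_node_curr"
    child_node_curr := 'spk_fix_areas_curs_' || parent_id;
    open child_node_curr for
      select id from spk_ver_node where parent = parent_id;

    loop
      fetch child_node_curr into child;
      exit when not found;

      raise notice 'child: %', child;

      select id, area into area_id, area_area
        from spk_ver_task_area where id = child and area = par_area;
      continue when found;

      raise notice 'attempting insert child = %, area = %', child, par_area;
      insert into spk_ver_task_area (id, area) values (child, par_area);

      -- PERFORM rather than SELECT, since the return value is discarded
      perform spk_fix_areas(child);

    end loop;

    close child_node_curr;
    return 'done';
  end;
$$ language plpgsql;

With the portal named per nesting depth, the nested OPENs no longer conflict.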



Re: Running pg_upgrade Version 11

2018-11-06 Thread rob stone



On Tue, 2018-11-06 at 15:17 +0900, Michael Paquier wrote:
> On Tue, Nov 06, 2018 at 04:27:35PM +1100, rob stone wrote:
> > Logged in as user postgres and postgres owns the files created by
> > initdb, so is this a permissions problem or am I having a brain
> > fade?
> 
> Having 0600 as umask for those files is normal.  Don't you have more
> logs about the error?  You should not see this error, except if
> data_v10
> is not a data folder initialized correctly, so perhaps you messed up
> something in your environment?
> --
> Michael



Problem caused by my eyesight.
A colleague pointed out the typo in the argument to the -d parameter.
Working as intended.

Sorry for the noise. My apologies.

Thanks,
Robert






Re: Question about index on different tablespace and rebuild it

2018-11-06 Thread Guillaume Lelarge
On Tue, Nov 6, 2018 at 20:49, Condor wrote:

> On 05-11-2018 10:56, Condor wrote:
> > Hello,
> >
> > I have a database that uses an index on a different tablespace (nvme). I
> > read the documentation about tablespaces and understand that a tablespace
> > cannot be treated as an autonomous collection of data files.
> > My question is: is this always true? For example, with this table and
> > index:
> >
> > create table t1 ( i integer);
> > create index t1_i_idx on t1 using btree (i) tablespace nvme;
> >
> > In this case only the index is on the nvme tablespace; it is not part of
> > the data stored in the table, so if I drop the index, the data stored in
> > the table stays untouched and undamaged.
> > So if I lose the nvme tablespace, which sits on a separate nvme drive,
> > can I install a new drive and rebuild all the index files, with reindexdb
> > for example, or with some other tool (the way myisamchk does for
> > mysql/mariadb) that can rebuild index files while the DB server is
> > offline?
> >
> >
> > Regards,
> > HS
>
>
> Some one can answer ?
>

Not sure I really understand your question but to do a reindex, you need
the database server to be online.

>


Re: Running pg_upgrade Version 11

2018-11-06 Thread Ron

On 11/06/2018 06:30 PM, rob stone wrote:


On Tue, 2018-11-06 at 15:17 +0900, Michael Paquier wrote:

On Tue, Nov 06, 2018 at 04:27:35PM +1100, rob stone wrote:

Logged in as user postgres and postgres owns the files created by
initdb, so is this a permissions problem or am I having a brain
fade?

Having 0600 as umask for those files is normal.  Don't you have more
logs about the error?  You should not see this error, except if
data_v10
is not a data folder initialized correctly, so perhaps you messed up
something in your environment?
--
Michael



Problem caused by my eyesight.
A colleague pointed out the typo in the argument to the -d parameter.
Working as intended.


That's why I like to line up my statements:

/usr/lib/postgresql/11/bin/pg_upgrade \
-b /usr/lib/postgresql/10/bin \
-B /usr/lib/postgresql/11/bin \
-d /home/postgres/testing/data_v10 \
-D /home/postgres/testing/data_v11


--
Angular momentum makes the world go 'round.



Recovery data base!!

2018-11-06 Thread Elson Vaz
Hello people,


I need a lot of help,

Accidentally I deleted all the files from the database; the most recent
copy I have is a day out of date.
I tried to recover the files with R-Studio, but some of the recovered files
are corrupted.
I also have a backup of the complete database directory from a month ago.
The Postgres version is 9.4, and the OS is RedHat.

Can someone help me?


Re: Recovery data base!!

2018-11-06 Thread pavan95
Hi Elson,

Do you have the latest basebackup/filesystem backup of the instance?

If so, you can recover to the point in time when that backup was taken.



Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Recovery data base!!

2018-11-06 Thread Andreas Kretschmer
On 7 November 2018 06:13:20 CET, Elson Vaz  wrote:
>Hello people,
>
>
>I need a lot of help,
>
>Accidentally I deleted all the files from the database, 
> ...
>I have also back up the complete database directory for a month.

Restore the latest backup. Consider a better backup procedure for the future.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Question about index on different tablespace and rebuild it

2018-11-06 Thread Tony Shelver
Did you check the documentation for alter index?
https://www.postgresql.org/docs/10/sql-alterindex.html

You could create a script file (plenty of examples on the internet on
generating these) and then run through psql or whatever.

Also, if you just have a few indexes to move, you could use (for example)
pgAdmin 4 to generate and run the drop/create scripts.

Index data is stored separately from the table data; dropping an index in
any SQL-standard database doesn't affect the data in the tables.
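
As an illustration, a minimal sketch of that drop-and-recreate route, reusing
the example from the original post (the replacement tablespace name and path
are hypothetical):

-- create a tablespace on the replacement drive
CREATE TABLESPACE nvme2 LOCATION '/mnt/nvme2/pgdata';

-- drop the lost index and rebuild it on the new tablespace
DROP INDEX IF EXISTS t1_i_idx;
CREATE INDEX t1_i_idx ON t1 USING btree (i) TABLESPACE nvme2;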

On Tue, 6 Nov 2018 at 21:49, Condor  wrote:

> On 05-11-2018 10:56, Condor wrote:
> > Hello,
> >
> > I have a database that uses an index on a different tablespace (nvme). I
> > read the documentation about tablespaces and understand that a tablespace
> > cannot be treated as an autonomous collection of data files.
> > My question is: is this always true? For example, with this table and
> > index:
> >
> > create table t1 ( i integer);
> > create index t1_i_idx on t1 using btree (i) tablespace nvme;
> >
> > In this case only the index is on the nvme tablespace; it is not part of
> > the data stored in the table, so if I drop the index, the data stored in
> > the table stays untouched and undamaged.
> > So if I lose the nvme tablespace, which sits on a separate nvme drive,
> > can I install a new drive and rebuild all the index files, with reindexdb
> > for example, or with some other tool (the way myisamchk does for
> > mysql/mariadb) that can rebuild index files while the DB server is
> > offline?
> >
> >
> > Regards,
> > HS
>
>
> Can someone answer?
>
>