Help with large delete

2022-04-16 Thread Perry Smith
Currently I have one table that mimics a file system.  Each entry has a
parent_id and a base name, where parent_id is either null or the id of an
existing row in the same table, with ON DELETE CASCADE on that foreign key.
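
The relevant part of that schema boils down to something like this (a sketch
showing only the columns that matter for the question):

    CREATE TABLE dateien (
        id        bigserial PRIMARY KEY,
        parent_id bigint REFERENCES dateien(id) ON DELETE CASCADE,
        basename  varchar NOT NULL
    );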

I’ve started a delete of a root entry with about 300,000 descendants.  The 
table currently has about 22M entries and I’m adding about 1600 entries per 
minute still.  Eventually there will not be massive amounts of entries being 
added and the table will be mostly static.

I originally started the delete from a terminal that got detached, so I killed
that process and started it again from a terminal less likely to get detached.

My question is basically: how can I make life easier for Postgres?  I believe
(hope) the deletes will be few and far between, but they will happen from time
to time.  In this case, Dropbox — it’s a long story that isn’t really pertinent.
The point is that @#$% happens.

“What can I do” includes starting completely over if necessary.  I’ve only got
about a week invested in this and it’s just machine time at zero cost.  I could
stop the other processes that are adding entries and let the delete finish if
that would help, etc.

Thank you for your time,
Perry





Re: Help with large delete

2022-04-16 Thread Rob Sargent

On 4/16/22 07:25, Perry Smith wrote:

Currently I have one table that mimics a file system.  Each entry has a 
parent_id and a base name where parent_id is an id in the table that must exist 
in the table or be null with cascade on delete.

I’ve started a delete of a root entry with about 300,000 descendants.  The 
table currently has about 22M entries and I’m adding about 1600 entries per 
minute still.  Eventually there will not be massive amounts of entries being 
added and the table will be mostly static.

I started the delete before from a terminal that got detached.  So I killed 
that process and started it up again from a terminal less likely to get 
detached.

My question is basically how can I make life easier for Postgres?  I believe 
(hope) the deletes will be few and far between but they will happen from time 
to time.  In this case, Dropbox — its a long story that isn’t really pertinent. 
 The point is that @#$% happens.

“What can I do” includes starting completely over if necessary.  I’ve only got 
about a week invested in this and its just machine time at zero cost.  I could 
stop the other processes that are adding entries and let the delete finish if 
that would help.  etc.

Thank you for your time,
Perry

I would try: 1) Find any nodes with disproportionately many descendants and
deal with them separately; you may have a gut feel for where those nodes are.
2) Start at least one step down: run a separate transaction for each entry
under the root node.  Maybe go two levels down.
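
For illustration, one way to find those heavy subtrees is a recursive query
that counts descendants under each direct child of the entry being deleted
(a sketch; 12345 stands in for the id of the root entry):

    WITH RECURSIVE subtree AS (
        SELECT id, id AS top_child
        FROM dateien
        WHERE parent_id = 12345      -- direct children of the entry to delete
        UNION ALL
        SELECT d.id, s.top_child
        FROM dateien d
        JOIN subtree s ON d.parent_id = s.id
    )
    SELECT top_child, count(*) AS descendants
    FROM subtree
    GROUP BY top_child
    ORDER BY descendants DESC;

Each child could then be deleted in its own transaction (DELETE FROM dateien
WHERE id = ...), letting ON DELETE CASCADE remove its subtree, before finally
deleting the now-childless root entry.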


Re: Help with large delete

2022-04-16 Thread Peter J. Holzer
On 2022-04-16 08:25:56 -0500, Perry Smith wrote:
> Currently I have one table that mimics a file system.  Each entry has
> a parent_id and a base name where parent_id is an id in the table that
> must exist in the table or be null with cascade on delete.
> 
> I’ve started a delete of a root entry with about 300,000 descendants.
> The table currently has about 22M entries and I’m adding about 1600
> entries per minute still.  Eventually there will not be massive
> amounts of entries being added and the table will be mostly static.
> 
> I started the delete before from a terminal that got detached.  So I
> killed that process and started it up again from a terminal less
> likely to get detached.
> 
> My question is basically how can I make life easier for Postgres?

Deleting 300k rows doesn't sound that bad. Neither does recursively
finding those 300k rows, although if you have a very biased distribution
(many nodes with only a few children, but some with hundreds of
thousands or even millions of children), PostgreSQL may not find a good
plan.

So as almost always when performance is an issue:

* What exactly are you doing?
* What is the execution plan?
* How long does it take?

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Help with large delete

2022-04-16 Thread Tom Lane
Perry Smith  writes:
> Currently I have one table that mimics a file system.  Each entry has a 
> parent_id and a base name where parent_id is an id in the table that must 
> exist in the table or be null with cascade on delete.
> I’ve started a delete of a root entry with about 300,000 descendants.  The 
> table currently has about 22M entries and I’m adding about 1600 entries per 
> minute still.  Eventually there will not be massive amounts of entries being 
> added and the table will be mostly static.

The most obvious question is do you have an index on the referencing
column.  PG doesn't require one to exist to create an FK; but if you
don't, deletes of referenced rows had better be uninteresting to you
performance-wise, because each one will cause a seqscan.

regards, tom lane




Re: Require details that can we see the password history to a User account in PostgreSQL Database.

2022-04-16 Thread Adrian Klaver

On 4/16/22 00:31, Sonai muthu raja M wrote:

Dear Adrian,

Yes, exactly.  My query is regarding an application user: when were passwords
changed, and what were the previous values?


1) Postgres has no built-in process to audit changes to its own roles.

2) That also means it does not audit whatever you are doing in the 
application above it.


3) Assuming the application user information is stored in a Postgres table,
you could create a trigger on that table that stores the changes in a separate
audit table (a rough sketch follows below).
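
As a sketch of that idea (the app_users table and its column names are
assumptions here, since the application's schema isn't shown):

    CREATE TABLE app_user_password_audit (
        audit_id   bigserial PRIMARY KEY,
        user_id    bigint      NOT NULL,
        old_hash   text,
        new_hash   text,
        changed_at timestamptz NOT NULL DEFAULT now(),
        changed_by text        NOT NULL DEFAULT current_user
    );

    CREATE OR REPLACE FUNCTION log_password_change() RETURNS trigger AS $$
    BEGIN
        -- record a row only when the stored password actually changed
        IF NEW.password_hash IS DISTINCT FROM OLD.password_hash THEN
            INSERT INTO app_user_password_audit (user_id, old_hash, new_hash)
            VALUES (OLD.id, OLD.password_hash, NEW.password_hash);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER app_users_password_audit
        AFTER UPDATE ON app_users
        FOR EACH ROW
        EXECUTE FUNCTION log_password_change();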




Kindly let us know the above information, since this detail is required for
internal auditing purposes.


Thanks.

Warm regards,

M Sonai Muthu Raja



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ***SPAM*** Re: Help with large delete

2022-04-16 Thread Perry Smith


> On Apr 16, 2022, at 10:33, Tom Lane  wrote:
> 
> Perry Smith  writes:
>> Currently I have one table that mimics a file system.  Each entry has a 
>> parent_id and a base name where parent_id is an id in the table that must 
>> exist in the table or be null with cascade on delete.
>> I’ve started a delete of a root entry with about 300,000 descendants.  The 
>> table currently has about 22M entries and I’m adding about 1600 entries per 
>> minute still.  Eventually there will not be massive amounts of entries being 
>> added and the table will be mostly static.
> 
> The most obvious question is do you have an index on the referencing
> column.  PG doesn't require one to exist to create an FK; but if you
> don't, deletes of referenced rows had better be uninteresting to you
> performance-wise, because each one will cause a seqscan.

To try to reply to Peter’s question, I just now started:

psql -c "explain analyze delete from dateien where basename = 
'/mnt/pedz/Visual_Media'” find_dups

And it hasn’t replied yet.  I hope you are not slapping your head muttering 
“this guy is an idiot!!” — in that this would not give you the plan you are 
asking for...

This is inside a BSD “jail” on a NAS.  I’m wondering if the jail has a limited 
time and the other processes have consumed it all.  In any case, if / when it 
replies, I will post the results.
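
One caveat worth noting: EXPLAIN ANALYZE actually executes the DELETE it is
measuring.  A way to capture the plan without keeping the deletion is to wrap
it in a transaction and roll back, something like:

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
        DELETE FROM dateien WHERE basename = '/mnt/pedz/Visual_Media';
    ROLLBACK;   -- discards the deletion, keeps only the plan output

The delete still does all the work before being undone, so it takes just as
long, but the ANALYZE output will include time spent in the foreign-key
cascade triggers, which is where a missing parent_id index shows up.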

For Tom’s question, here is the description of the table:

psql -c '\d dateien' find_dups
  Table "public.dateien"
   Column   |  Type  | Collation | Nullable |   
Default
++---+--+-
 id | bigint |   | not null | 
nextval('dateien_id_seq'::regclass)
 basename   | character varying  |   | not null |
 parent_id  | bigint |   |  |
 dev| bigint |   | not null |
 ftype  | character varying  |   | not null |
 uid| bigint |   | not null |
 gid| bigint |   | not null |
 ino| bigint |   | not null |
 mode   | bigint |   | not null |
 mtime  | timestamp without time zone|   | not null |
 nlink  | bigint |   | not null |
 size   | bigint |   | not null |
 sha1   | character varying  |   |  |
 created_at | timestamp(6) without time zone |   | not null |
 updated_at | timestamp(6) without time zone |   | not null |
Indexes:
"dateien_pkey" PRIMARY KEY, btree (id)
"unique_dev_ino_for_dirs" UNIQUE, btree (dev, ino) WHERE ftype::text = 
'directory'::text
"unique_parent_basename" UNIQUE, btree (COALESCE(parent_id, 
'-1'::integer::bigint), basename)
Foreign-key constraints:
"fk_rails_c01ebbd0bf" FOREIGN KEY (parent_id) REFERENCES dateien(id) ON 
DELETE CASCADE
Referenced by:
TABLE "dateien" CONSTRAINT "fk_rails_c01ebbd0bf" FOREIGN KEY (parent_id) 
REFERENCES dateien(id) ON DELETE CASCADE






Re: ***SPAM*** Re: Help with large delete

2022-04-16 Thread Tom Lane
Perry Smith  writes:
> On Apr 16, 2022, at 10:33, Tom Lane  wrote:
>> The most obvious question is do you have an index on the referencing
>> column.  PG doesn't require one to exist to create an FK; but if you
>> don't, deletes of referenced rows had better be uninteresting to you
>> performance-wise, because each one will cause a seqscan.

> For Tom’s question, here is the description of the table:

> psql -c '\d dateien' find_dups
>                                        Table "public.dateien"
>    Column   |              Type              | Collation | Nullable |               Default
> ------------+--------------------------------+-----------+----------+-------------------------------------
>  id         | bigint                         |           | not null | nextval('dateien_id_seq'::regclass)
>  basename   | character varying              |           | not null |
>  parent_id  | bigint                         |           |          |
>  dev        | bigint                         |           | not null |
>  ftype      | character varying              |           | not null |
>  uid        | bigint                         |           | not null |
>  gid        | bigint                         |           | not null |
>  ino        | bigint                         |           | not null |
>  mode       | bigint                         |           | not null |
>  mtime      | timestamp without time zone    |           | not null |
>  nlink      | bigint                         |           | not null |
>  size       | bigint                         |           | not null |
>  sha1       | character varying              |           |          |
>  created_at | timestamp(6) without time zone |           | not null |
>  updated_at | timestamp(6) without time zone |           | not null |
> Indexes:
>     "dateien_pkey" PRIMARY KEY, btree (id)
>     "unique_dev_ino_for_dirs" UNIQUE, btree (dev, ino) WHERE ftype::text = 'directory'::text
>     "unique_parent_basename" UNIQUE, btree (COALESCE(parent_id, '-1'::integer::bigint), basename)
> Foreign-key constraints:
>     "fk_rails_c01ebbd0bf" FOREIGN KEY (parent_id) REFERENCES dateien(id) ON DELETE CASCADE
> Referenced by:
>     TABLE "dateien" CONSTRAINT "fk_rails_c01ebbd0bf" FOREIGN KEY (parent_id) REFERENCES dateien(id) ON DELETE CASCADE

Yeah.  So if you want to make deletes on this table not be unpleasantly
slow, you need an index on the parent_id column, and you don't have one.

(unique_parent_basename doesn't help, because with that definition it's
useless for looking up rows by parent_id.)
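
For concreteness, the missing index would be something along these lines (the
index name is arbitrary; CONCURRENTLY is optional, but it avoids blocking the
inserts that are still arriving, and it must be run outside a transaction
block):

    CREATE INDEX CONCURRENTLY dateien_parent_id_idx
        ON dateien (parent_id);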

regards, tom lane




Re: Help with large delete

2022-04-16 Thread Jan Wieck
Make your connection immune to disconnects by using something like the
screen utility.


Regards, Jan
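
For example, a typical screen session for a long-running delete might look
like this (the session name is arbitrary; tmux works the same way):

    screen -S bigdelete        # start a named session
    psql find_dups             # run the DELETE from inside it
    # detach with Ctrl-a d; psql keeps running in the background
    screen -r bigdelete        # re-attach later, from any terminal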

On Sat, Apr 16, 2022, 09:26 Perry Smith  wrote:

> Currently I have one table that mimics a file system.  Each entry has a
> parent_id and a base name where parent_id is an id in the table that must
> exist in the table or be null with cascade on delete.
>
> I’ve started a delete of a root entry with about 300,000 descendants.  The
> table currently has about 22M entries and I’m adding about 1600 entries per
> minute still.  Eventually there will not be massive amounts of entries
> being added and the table will be mostly static.
>
> I started the delete before from a terminal that got detached.  So I
> killed that process and started it up again from a terminal less likely to
> get detached.
>
> My question is basically how can I make life easier for Postgres?  I
> believe (hope) the deletes will be few and far between but they will happen
> from time to time.  In this case, Dropbox — its a long story that isn’t
> really pertinent.  The point is that @#$% happens.
>
> “What can I do” includes starting completely over if necessary.  I’ve only
> got about a week invested in this and its just machine time at zero cost.
> I could stop the other processes that are adding entries and let the delete
> finish if that would help.  etc.
>
> Thank you for your time,
> Perry
>
>


Re: Help with large delete

2022-04-16 Thread Perry Smith


> On Apr 16, 2022, at 12:57, Jan Wieck  wrote:
> 
> Make your connection immune to disconnects by using something like the screen 
> utility.

Exactly… I’m using emacs in a server (daemon) mode so it stays alive.  Then I 
do “shell” within it.


> On Sat, Apr 16, 2022, 09:26 Perry Smith wrote:
> Currently I have one table that mimics a file system.  Each entry has a 
> parent_id and a base name where parent_id is an id in the table that must 
> exist in the table or be null with cascade on delete.
> 
> I’ve started a delete of a root entry with about 300,000 descendants.  The 
> table currently has about 22M entries and I’m adding about 1600 entries per 
> minute still.  Eventually there will not be massive amounts of entries being 
> added and the table will be mostly static.
> 
> I started the delete before from a terminal that got detached.  So I killed 
> that process and started it up again from a terminal less likely to get 
> detached.
> 
> My question is basically how can I make life easier for Postgres?  I believe 
> (hope) the deletes will be few and far between but they will happen from time 
> to time.  In this case, Dropbox — its a long story that isn’t really 
> pertinent.  The point is that @#$% happens.
> 
> “What can I do” includes starting completely over if necessary.  I’ve only 
> got about a week invested in this and its just machine time at zero cost.  I 
> could stop the other processes that are adding entries and let the delete 
> finish if that would help.  etc.
> 
> Thank you for your time,
> Perry
> 





Re: Help with large delete

2022-04-16 Thread Rob Sargent
> On Apr 16, 2022, at 12:24 PM, Perry Smith wrote:
>> On Apr 16, 2022, at 12:57, Jan Wieck wrote:
>> Make your connection immune to disconnects by using something like the
>> screen utility.
> Exactly… I’m using emacs in a server (daemon) mode so it stays alive.  Then
> I do “shell” within it.

I use emacs a lot.  It doesn’t keep the terminal alive in my experience.
Perhaps nohup?





Re: Help with large delete

2022-04-16 Thread Perry Smith


> On Apr 16, 2022, at 13:56, Rob Sargent  wrote:
> 
> 
> 
>> On Apr 16, 2022, at 12:24 PM, Perry Smith  wrote:
>> 
>> 
>> 
>>> On Apr 16, 2022, at 12:57, Jan Wieck wrote:
>>> 
>>> Make your connection immune to disconnects by using something like the 
>>> screen utility.
>> 
>> Exactly… I’m using emacs in a server (daemon) mode so it stays alive.  Then 
>> I do “shell” within it.
> I use emacs a lot.  It doesn’t keep the terminal alive in my experience. 
> Perhaps nohup?

https://www.emacswiki.org/emacs/EmacsAsDaemon

Doing: emacs --daemon

You will see a couple of messages about loading your customization file, and
then it detaches and you get back to the prompt.
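
As a rough sketch of that workflow (assuming stock keybindings):

    emacs --daemon             # start the background Emacs server
    emacsclient -t             # attach a terminal frame to it
    # inside Emacs: M-x shell, then run psql there
    # C-x C-c closes just this client; the daemon and the shell keep running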


