Re: Rearchitecting for storage

2019-07-19 Thread Luca Ferrari
On Thu, Jul 18, 2019 at 10:09 PM Matthew Pounsett  wrote:
> That would likely keep the extra storage requirements small, but still 
> non-zero.  Presumably the upgrade would be unnecessary if it could be done 
> without rewriting files.  Is there any rule of thumb for making sure one has 
> enough space available for the upgrade?   I suppose that would come down to 
> what exactly needs to get rewritten, in what order, etc., but the pg_upgrade 
> docs don't seem to have that detail.  For example, since we've got an ~18TB 
> table (including its indices), if that needs to be rewritten then we're still 
> looking at requiring significant extra storage.  Recent experience suggests 
> postgres won't necessarily do things in the most storage-efficient way.. we 
> just had a reindex on that database fail (in --single-user) because 17TB was 
> insufficient free storage for the db to grow into.

This could be trivial, but is there any chance you can partition the table
and/or archive unused records (at least temporarily)? An 18 TB table,
quite frankly, sounds like a good candidate to contain records no one will
be interested in for the near future.
In any case, if you can partition the table, chances are you can at
least do per-table backups, which could simplify maintenance of the
database.
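
To make that concrete, here is a minimal sketch of declarative range
partitioning (PostgreSQL 10 or later), using a hypothetical "results" table
keyed by a "run_date" column; the real table and column names will of course
differ:

CREATE TABLE results (
    id       bigint NOT NULL,
    run_date date   NOT NULL,
    payload  jsonb
) PARTITION BY RANGE (run_date);

-- one partition per year; old years can later be detached and archived
CREATE TABLE results_2018 PARTITION OF results
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');
CREATE TABLE results_2019 PARTITION OF results
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- a detached partition becomes a plain table that can be dumped with
-- pg_dump -t and then dropped to reclaim space
ALTER TABLE results DETACH PARTITION results_2018;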

If things get desperate, I would also check the log files (I mean the
textual logs, not the WALs), because I have occasionally found them taking
up a few GB on my disks, and they can easily be archived to gain some extra
space.
Then I would go for some commodity NAS to attach as extra storage, at
least for the upgrade process.

If all of the above fails, I would probably drop all the indexes
to gain extra space, perform the upgrade, and then reindex (removing
the old cluster first, in case it has not been upgraded with the link
option).
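
If you go the drop-and-reindex route, a rough sketch (with hypothetical index
and table names) would be to save the definitions first, drop the indexes to
free space, and re-create them after the upgrade:

-- save the definitions of the indexes you are about to drop
SELECT indexdef || ';' FROM pg_indexes WHERE tablename = 'results';

-- drop them to reclaim space before running pg_upgrade
DROP INDEX results_run_date_idx;   -- repeat for each index

-- after the upgrade, re-create them from the saved definitions
CREATE INDEX results_run_date_idx ON results (run_date);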

Luca




very high replay_lag on 3-node cluster

2019-07-19 Thread Tiemen Ruiten
Hello,

In my previous post[1] on this list I brought up an issue with long-running
checkpoints. I reduced checkpoint_timeout to a more reasonable value (15m,
down from 60m) and forced/immediate checkpoints now mostly complete in
under a minute. This thread and another one[2] on the Clusterlabs
mailing list also helped me understand more about how PostgreSQL internals
work. Thanks, everyone!

Now to my current issue: I took the advice to add more monitoring of replay
lag (using pg_last_xact_replay_timestamp), and things are not looking good.
Last night replication lagged by almost 6 hours on one of the nodes[3], but
eventually caught up. As you can see in that screenshot, ph-sql-03 is
consistently slower to replay than ph-sql-05 (ph-sql-04 is the current
master), and ph-sql-03 happens to have different SSDs (Crucial MX300
vs Crucial MX500 in the other two), which makes me think this is IO-related.

When I check the replay_lag column of pg_stat_replication, the numbers are
consistent with the data from pg_last_xact_replay_timestamp:

postgres=# SELECT application_name, replay_lag FROM pg_stat_replication;
 application_name |   replay_lag
------------------+-----------------
 ph-sql-03        | 00:15:16.179952
 ph-sql-05        | 00:10:01.078163
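
For reference, the standby-side check based on pg_last_xact_replay_timestamp
looks roughly like this (a sketch, assuming PostgreSQL 10 or later; it returns
NULL when run on a primary):

SELECT CASE
         WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
         ELSE EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())
       END AS replay_lag_seconds;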

Currently this doesn't present an operational issue, as the slaves aren't
used by applications (still waiting for development to make the necessary
changes). So there are no queries running at all on the slaves apart from
the occasional monitoring.

Cluster specifications:
all nodes:
- filesystem: ZFS stripe of mirrors
- 2* CPU: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (20 cores total)
- 128 GB RAM
ph-sql-03: 8* Crucial MX300 1050GB, underprovisioned to 1TB
ph-sql-0{4,5}: 8* Crucial MX500 1TB

postgresql.conf with GUCs changed from default:

shared_buffers = 8GB
work_mem = 64MB
maintenance_work_mem = 2GB
autovacuum_work_mem = 1GB
effective_io_concurrency = 200
max_worker_processes = 50
max_parallel_maintenance_workers = 8
max_parallel_workers_per_gather = 8
max_parallel_workers = 40
wal_level = replica
synchronous_commit = off
full_page_writes = off
wal_log_hints = on
wal_buffers = 128MB
checkpoint_timeout = 15min
max_wal_size = 8GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
archive_mode = on
archive_command = 'pgbackrest --stanza=pgdb2 archive-push %p'
max_wal_senders = 10
wal_keep_segments = 20
hot_standby = on
hot_standby_feedback = on
random_page_cost = 1.5
effective_cache_size = 48GB
default_statistics_target = 500
shared_preload_libraries = 'timescaledb, pg_cron, pg_prewarm'  # (change requires restart)
max_locks_per_transaction = 512

What are possible reasons for the high replay_lag? Is my storage just too
slow? Are there any tunables available?

[1]
https://www.postgresql.org/message-id/flat/CAEkBuzeno6ztiM1g4WdzKRJFgL8b2nfePNU%3Dq3sBiEZUm-D-sQ%40mail.gmail.com

[2] https://lists.clusterlabs.org/pipermail/users/2019-July/025967.html
[3] https://ibb.co/0sncjBZ

Thank you,

Tiemen Ruiten


Re: Rearchitecting for storage

2019-07-19 Thread Matthew Pounsett
On Thu, 18 Jul 2019 at 19:53, Rob Sargent  wrote:

>
> >
> > That would likely keep the extra storage requirements small, but still
> non-zero.  Presumably the upgrade would be unnecessary if it could be done
> without rewriting files.  Is there any rule of thumb for making sure one
> has enough space available for the upgrade?   I suppose that would come
> down to what exactly needs to get rewritten, in what order, etc., but the
> pg_upgrade docs don't seem to have that detail.  For example, since we've
> got an ~18TB table (including its indices), if that needs to be rewritten
> then we're still looking at requiring significant extra storage.  Recent
> experience suggests postgres won't necessarily do things in the most
> storage-efficient way.. we just had a reindex on that database fail (in
> --single-user) because 17TB was insufficient free storage for the db to
> grow into.
> >
> Can you afford to drop and re-create those 6 indices?


Technically, yes.  I don't see any reason we'd be prevented from doing
that.  But, rebuilding them will take a long time.  That's a lot of
downtime to incur any time we update the DB.  I'd prefer to avoid it if I
can.  For scale, the recent 'reindex database' that failed ran for nine
days before it ran out of room, and that was in single-user.  Trying to do
that concurrently would take a lot longer, I imagine.


Re: Rearchitecting for storage

2019-07-19 Thread Matthew Pounsett
On Thu, 18 Jul 2019 at 21:59, Andy Colson  wrote:

> >
>
> Now might be a good time to consider splitting the database onto multiple
> computers.  Might be simpler with a mid-range database, then your plan for
> the future is "add more computers".
>

Hmm... yes.  Range partitioning seems like a possible way forward.  I
hadn't considered that yet.  We might hold that back for when a dual-head
approach (server + jbod) can't scale anymore, but I think that's a long way
in the future.



Re: Rearchitecting for storage

2019-07-19 Thread Matthew Pounsett
On Fri, 19 Jul 2019 at 04:21, Luca Ferrari  wrote:

>
> This could be trivial, but is there any chance you can partition the table
> and/or archive unused records (at least temporarily)? An 18 TB table,
> quite frankly, sounds like a good candidate to contain records no one will
> be interested in for the near future.
>

Partitioning is a possibility.  The whole database is historical test
results, stored specifically for doing comparisons over time, so I'm not
sure we can actually archive anything.  Expiring old test data is a
discussion we might have to have, eventually.


> In any case, if you can partition the table, chances are you can at
> least do per-table backups, which could simplify maintenance of the
> database.
>

My current backup plan for this database is on-site replication, and a
monthly pg_dump from the standby to be copied off-site.  Doing per-table
backups sounds like a great way to end up with an inconsistent backup, but
perhaps I misunderstand what you mean.

Another possibility is putting the server into backup mode and taking a
snapshot of the filesystem, but coordinating that across chassis (in the
case where partitioning is used) in such a way that the db is consistent
sounds like a hard problem... unless issuing pg_start_backup on the chassis
holding the master table coordinates backup mode on all the chassis holding
child tables at the same time?  I haven't read enough on that yet.
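
For what it's worth, the low-level backup functions only affect the cluster
they are run on, so each chassis would need its own call, coordinated by some
external wrapper. A sketch of the basic single-server, exclusive-mode sequence
(still available in the PostgreSQL versions current at the time):

SELECT pg_start_backup('fs_snapshot', true);  -- fast checkpoint, exclusive mode
-- take the filesystem snapshot of the data directory here
SELECT pg_stop_backup();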


>
> If things get desperate, I would also check the log files (I mean the
> textual logs, not the WALs), because I have occasionally found them taking
> up a few GB on my disks, and they can easily be archived to gain some extra
> space.
> Then I would go for some commodity NAS to attach as extra storage, at
> least for the upgrade process.
>

Okay.  So I guess the short answer is no, nobody really knows how to judge
how much space is required for an upgrade?  :)

Any logs we have are going to be a rounding error when compared to the
database itself.  And buying storage last-minute because an upgrade failed
is exactly the sort of thing that a resource constrained not-for-profit
can't do.  We really need to be able to plan this out long term so that we
get as much as possible out of every dollar.


> If all of the above fails, I would probably drop all the indexes
> to gain extra space, perform the upgrade, and then reindex (removing
> the old cluster first, in case it has not been upgraded with the link
> option).
>

Yeah, this sort of trial-and-error approach to getting upgrades done will
bother me, but it seems like it'll be necessary once we start growing into
whatever new home we get for the db.

Thanks very much for your time on this.


Re: Rearchitecting for storage

2019-07-19 Thread Kenneth Marshall
Hi Matt,

On Fri, Jul 19, 2019 at 10:41:31AM -0400, Matthew Pounsett wrote:
> On Fri, 19 Jul 2019 at 04:21, Luca Ferrari  wrote:
> 
> >
> > This could be trivial, but is there any chance you can partition the table
> > and/or archive unused records (at least temporarily)? An 18 TB table,
> > quite frankly, sounds like a good candidate to contain records no one will
> > be interested in for the near future.
> >
> 
> Partitioning is a possibility.  The whole database is historical test
> results, stored specifically for doing comparisons over time, so I'm not
> sure we can actually archive anything.  Expiring old test data is a
> discussion we might have to have, eventually.
> 

This is a case where using a compressed filesystem can give you space
savings as well as faster read performance due to the compression
factor. In my case, a sequential scan of something in the compressed
tablespace runs almost 3X faster than on the non-compressed one.
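
A sketch of the idea, with a hypothetical mount point and table name (the
directory would live on the compressed filesystem, e.g. a ZFS dataset with
compression enabled):

CREATE TABLESPACE compressed_ts LOCATION '/compressed/pgdata';

-- moving an existing table rewrites it, so it needs free space and takes a lock
ALTER TABLE results SET TABLESPACE compressed_ts;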

Regards,
Ken





Re: Rearchitecting for storage

2019-07-19 Thread Matthew Pounsett
On Thu, 18 Jul 2019 at 09:44, Matthew Pounsett  wrote:

>
> I've recently inherited a database that is dangerously close to outgrowing
> the available storage on its existing hardware.  I'm looking for (pointers
> to) advice on scaling the storage in a financially constrained
> not-for-profit.
>

Thanks for your replies, everyone.  Here's a quick summary of what I've got
out of this.

Although nobody really addressed the core question of the performance
tradeoffs in different storage architectures, perhaps the fact that nobody
mentioned them means there really aren't any.  We'll proceed on the
assumption that externally attached storage really doesn't make a
difference.  NAS storage seems like a poor choice to me, for performance
reasons, and nobody's really said anything to disabuse me of that notion.

We're going to have a look at the relative costs of single-head solutions
as well as dual head (server + jbod) setups, and see what gets us the most
growth for the least cost. We'll plan for enough storage to get us five
years of growth, and just accept that maybe in the 5th year we won't be
able to do in-place upgrades without dropping indexes.

Horizontal scalability through range partitioning sounds interesting, but I
don't think it's a cost-effective solution for us right now.  As long as
it's possible for the db to fit in a single server (or jbod) using
commodity hardware, the incremental cost of adding more chassis (and
therefore more motherboards, more CPUs, more memory) isn't offset by a
reduced cost anywhere else (e.g. using cheaper drives).  And that's not
even accounting for the increased operational cost of coordinating the DB
across multiple servers.  It could be a useful approach if DB growth
outpaces historical averages and we need to add hardware before a normal
replacement cycle.  It could also be useful at the end of that replacement
cycle if DB growth has outpaced commodity hardware improvements, and single
server solutions are no longer viable.

The DB server we inherited is currently a single instance, but once we
expand and have replication in place I'll have to do some testing to see if
LVM compression gives us any performance boost (or incurs an acceptable
performance cost). The big question there is whether the processing
required to do the compression is faster than the difference in read times
on the disk... I think that might be dependent on the data and how it's
accessed.  It certainly seems like it could give us some benefits, but I
don't think it's an experiment I want to attempt with only a single
production copy of the DB; the downtime required to rebuild the DB server
for A+B comparisons would be unacceptable.

Thanks again everyone.  This has been educational.


Re: Rearchitecting for storage

2019-07-19 Thread Peter J. Holzer
On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> Okay.  So I guess the short answer is no, nobody really knows how to
> judge how much space is required for an upgrade?  :)

As I understand it, pg_upgrade --link uses only negligible extra
space. It duplicates a bit of housekeeping information, but not your
data tables or indexes. Your 18 TB table will definitely not be duplicated
during the upgrade if you can use --link.

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: Rearchitecting for storage

2019-07-19 Thread Matthew Pounsett
On Fri, 19 Jul 2019 at 11:25, Peter J. Holzer  wrote:

> On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> > Okay.  So I guess the short answer is no, nobody really knows how to
> > judge how much space is required for an upgrade?  :)
>
> As I understand it, a pg_upgrade --link uses only negligible extra
> space. It duplicates a bit of housekeeping information, but not your
> data tables or indexes. Your 18 TB table will definitely not be duplicated
> during the upgrade if you can use --link.
>

The documentation for pg_upgrade --link says that the old copy is no longer
usable, which means it's modifying files that are linked.  If it were only
modifying small housekeeping files, then it would be most efficient not to
link those, which would keep both copies of the db usable.  That seems
incompatible with your suggestion that it doesn't need to modify the data
files.  Depending on how it goes about doing that, it could mean a
significant short-term increase in storage requirements while the data is
being converted.

Going back to our recent 'reindex database' attempt, pgsql does not
necessarily do these things in the most storage-efficient manner; it seems
entirely likely that it would choose to use links to duplicate the data
directory, then create copies of each data file as it converts them over,
then link that back to the original for an atomic replacement.  That could
eat up a HUGE amount of storage during the conversion process without the
start and end sizes being very different at all.

Sorry, but I can't reconcile your use of "as I understand it" with your use
of "definitely".  It sounds like you're guessing, rather than speaking from
direct knowledge of how the internals of pg_upgrade work.


Re: Rearchitecting for storage

2019-07-19 Thread Jacob Bunk Nielsen
Matthew Pounsett  writes:
> On Thu, 18 Jul 2019 at 19:53, Rob Sargent  wrote:
>
>  Can you afford to drop and re-create those 6 indices?
>
> Technically, yes. I don't see any reason we'd be prevented from doing that. 
> But, rebuilding them will take a long time. That's a lot of downtime to incur 
> any time we update
> the DB. I'd prefer to avoid it if I can. For scale, the recent 'reindex 
> database' that failed ran for nine days before it ran out of room, and that 
> was in single-user. Trying to do
> that concurrently would take a lot longer, I imagine.

This may be a stupid question, but are you certain they are all used? It
wouldn't be the first time that I've seen someone create indexes and
then never use them. This script can tell you if there are any indexes
that seem largely unused:

https://github.com/pgexperts/pgx_scripts/blob/master/indexes/unused_indexes.sql

If you can run your application without access to the indexes for a
while, you can create them concurrently in the background using "CREATE
INDEX CONCURRENTLY ...".
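
As a quick first pass before running the full script, something like this (a
sketch) lists indexes that have never been scanned since the last statistics
reset, together with their size:

SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

And re-creating an index without blocking writes, with a hypothetical name:

CREATE INDEX CONCURRENTLY results_run_date_idx ON results (run_date);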

Best regards,
Jacob





Re: Rearchitecting for storage

2019-07-19 Thread Jacob Bunk Nielsen
Matthew Pounsett  writes:

> [...] Is there any rule of thumb for making sure one has enough space
> available for the upgrade?

No, because it depends greatly on which version you are upgrading from
and which version you are upgrading to, etc.

Perhaps you could carve out a slice of data, e.g. 1 GB, load it into
a test database, and try to upgrade that. That would probably give you an
idea.
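
To at least see where the space goes before planning, something like this (a
sketch) lists the largest tables together with their index footprint:

SELECT relname,
       pg_size_pretty(pg_table_size(oid))   AS table_size,
       pg_size_pretty(pg_indexes_size(oid)) AS index_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_table_size(oid) + pg_indexes_size(oid) DESC
LIMIT 10;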

Also, you mentioned that your database contains historical test data¹, so
I would guess that one of the indexes is on a timestamp column? Maybe
you could live with a smaller BRIN index for the timestamps:
https://www.postgresql.org/docs/11/brin-intro.html - that could
potentially save some space, and may not have been something on the
radar when the database was first developed.
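
A sketch, assuming a hypothetical "run_at" timestamp column: a BRIN index is
typically orders of magnitude smaller than a btree on the same column,
provided the rows are stored roughly in insertion (time) order.

CREATE INDEX results_run_at_brin ON results USING brin (run_at);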

Best regards,
Jacob

¹) I think I know which kind of data based on your progress reports on
   a DNS related list I'm subscribed to.





pg_advisory_lock lock FAILURE / What do those numbers mean (process 240828 waits for ExclusiveLock on advisory lock [1167570,16820923,3422556162,1];)?

2019-07-19 Thread Alexandru Lazarev
Hi Community,
I get a locking failure on pg_advisory_lock: I create a deadlock condition and
receive the following:
- - -
ERROR: deadlock detected
SQL state: 40P01
Detail: Process 240828 waits for ExclusiveLock on advisory lock
[1167570,16820923,3422556162,1]; blocked by process 243637.
Process 243637 waits for ExclusiveLock on advisory lock
[1167570,16820923,3422556161,1]; blocked by process 240828.
- - -
From Tx1 I run:
select pg_advisory_lock(72245317596090369);
select pg_advisory_lock(72245317596090370);
and from Tx2:
select pg_advisory_lock(72245317596090370);
select pg_advisory_lock(72245317596090369);

The long key is built as follows: 72245317596090369 -> hex 0x0100AABBCC001001,
where the most significant byte (0x01) is a namespace masked with the MAC
address "AABBCC001001". But in the error I see 4 numbers - what is their
meaning?
I deduced that the 2nd (16820923 -> hex 0x0100AABB) is the high half of the
long key and the 3rd (3422556161 -> hex 0xCC001001) is the low half, but what
are the 1st (1167570) and 4th (1) numbers?







Re: pg_advisory_lock lock FAILURE / What do those numbers mean (process 240828 waits for ExclusiveLock on advisory lock [1167570,16820923,3422556162,1];)?

2019-07-19 Thread Laurenz Albe
On Fri, 2019-07-19 at 21:15 +0300, Alexandru Lazarev wrote:
> I receive locking failure on pg_advisory_lock, I do deadlock condition and 
> receive following: 
> - - -
> ERROR: deadlock detected
> SQL state: 40P01
> Detail: Process 240828 waits for ExclusiveLock on advisory lock 
> [1167570,16820923,3422556162,1]; blocked by process 243637.
> Process 243637 waits for ExclusiveLock on advisory lock 
> [1167570,16820923,3422556161,1]; blocked by process 240828.
> - - -
> I do from Tx1: 
> select pg_advisory_lock(72245317596090369);
> select pg_advisory_lock(72245317596090370);
> and from Tx2:
> select pg_advisory_lock(72245317596090370);
> select pg_advisory_lock(72245317596090369);
> 
> where long key is following: 72245317596090369-> HEX 0x0100AABBCC001001
> where 1st byte (highest significance "0x01") is namespace masked with MAC 
> Address " AABBCC001001", but in error i see 4 numbers - what is their meaning?
> I deducted that 2nd ( 16820923 .) HEX 0x100AABB, 1st half of long key) and 
> 3rd is ( 3422556161 -> HEX 0xCC001001, 2nd half of long key)
> but what are 1st ( 1167570 ) and 4th (1) numbers?

See this code in src/backend/utils/adt/lockfuncs.c:

/*
 * Functions for manipulating advisory locks
 *
 * We make use of the locktag fields as follows:
 *
 *  field1: MyDatabaseId ... ensures locks are local to each database
 *  field2: first of 2 int4 keys, or high-order half of an int8 key
 *  field3: second of 2 int4 keys, or low-order half of an int8 key
 *  field4: 1 if using an int8 key, 2 if using 2 int4 keys
 */
#define SET_LOCKTAG_INT64(tag, key64) \
SET_LOCKTAG_ADVISORY(tag, \
 MyDatabaseId, \
 (uint32) ((key64) >> 32), \
 (uint32) (key64), \
 1)
#define SET_LOCKTAG_INT32(tag, key1, key2) \
SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, key1, key2, 2)
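
For the record, you can recombine the two halves shown in the message back
into the original bigint key, e.g. via pg_locks (a sketch):

SELECT database,                                           -- field1: database OID
       (classid::bigint << 32) | objid::bigint AS int8_key -- field2/field3 recombined
FROM pg_locks
WHERE locktype = 'advisory'
  AND objsubid = 1;                                        -- field4: int8-key flag

-- e.g. (16820923::bigint << 32) | 3422556161 = 72245317596090369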

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Rearchitecting for storage

2019-07-19 Thread Peter J. Holzer
On 2019-07-19 11:37:52 -0400, Matthew Pounsett wrote:
> On Fri, 19 Jul 2019 at 11:25, Peter J. Holzer  wrote:
> 
> On 2019-07-19 10:41:31 -0400, Matthew Pounsett wrote:
> > Okay.  So I guess the short answer is no, nobody really knows how to
> > judge how much space is required for an upgrade?  :)
> 
> As I understand it, a pg_upgrade --link uses only negligible extra
> space. It duplicates a bit of housekeeping information, but not your
> data tables or indexes. Your 18 TB table will definitely not be duplicated
> during the upgrade if you can use --link.
> 
> 
> The documentation for pg_upgrade --link says that the old copy is no longer
> usable, which means it's modifying files that are linked.  If it were only
> modifying small housekeeping files, then it would be most efficient not to
> link those, which would keep both copies of the db usable.

This was discussed recently: The old database is made intentionally
unusable to prevent accidentally starting both (which would result in
data corruption).

> That seems incompatible with your suggestion that it doesn't need to
> modify the data files.  Depending on how it goes about doing that, it
> could mean a significant short-term increase in storage requirements
> while the data is being converted.  
> 
> Going back to our recent 'reindex database' attempt, pgsql does not
> necessarily do these things in the most storage-efficient manner; it
> seems entirely likely that it would choose to use links to duplicate
> the data directory, then create copies of each data file as it
> converts them over, then link that back to the original for an atomic
> replacement.  That could eat up a HUGE amount of storage during the
> conversion process without the start and end sizes being very
> different at all.  

I can't really think of a scenario in which this would be the best
(or even a good) strategy to convert the database. I am quite confident
that pg_upgrade doesn't do that at present and reasonably confident that
it won't do it in the future.


> Sorry, but I can't reconcile your use of "as I understand it" with
> your use of "definitely".  It sounds like you're guessing, rather than
> speaking from direct knowledge of how the internals of pg_upgrade.

I don't have direct knowledge of the internals of pg_upgrade, but I
have upgraded a database of about 1 TB at least twice with --link. Since
I had much less than 1 TB of free space and the upgrade completed very
quickly, I am very confident that no user defined tables are copied. I
have also been on this mailing list for a few years and read quite a few
discussions about the usage of pg_upgrade in that time (though I may not
always have paid much attention to them).

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: pg_advisory_lock lock FAILURE / What do those numbers mean (process 240828 waits for ExclusiveLock on advisory lock [1167570,16820923,3422556162,1];)?

2019-07-19 Thread Alexandru Lazarev
Thanks. Question closed. :)



On Fri, Jul 19, 2019 at 10:27 PM Laurenz Albe 
wrote:

> On Fri, 2019-07-19 at 21:15 +0300, Alexandru Lazarev wrote:
> > I receive locking failure on pg_advisory_lock, I do deadlock condition
> and receive following:
> > - - -
> > ERROR: deadlock detected
> > SQL state: 40P01
> > Detail: Process 240828 waits for ExclusiveLock on advisory lock
> [1167570,16820923,3422556162,1]; blocked by process 243637.
> > Process 243637 waits for ExclusiveLock on advisory lock
> [1167570,16820923,3422556161,1]; blocked by process 240828.
> > - - -
> > I do from Tx1:
> > select pg_advisory_lock(72245317596090369);
> > select pg_advisory_lock(72245317596090370);
> > and from Tx2:
> > select pg_advisory_lock(72245317596090370);
> > select pg_advisory_lock(72245317596090369);
> >
> > where long key is following: 72245317596090369-> HEX 0x0100AABBCC001001
> > where 1st byte (highest significance "0x01") is namespace masked with
> MAC Address " AABBCC001001", but in error i see 4 numbers - what is their
> meaning?
> > I deducted that 2nd ( 16820923 .) HEX 0x100AABB, 1st half of long key)
> and 3rd is ( 3422556161 -> HEX 0xCC001001, 2nd half of long key)
> > but what are 1st ( 1167570 ) and 4th (1) numbers?
>
> See this code in src/backend/utils/adt/lockfuncs.c:
>
> /*
>  * Functions for manipulating advisory locks
>  *
>  * We make use of the locktag fields as follows:
>  *
>  *  field1: MyDatabaseId ... ensures locks are local to each database
>  *  field2: first of 2 int4 keys, or high-order half of an int8 key
>  *  field3: second of 2 int4 keys, or low-order half of an int8 key
>  *  field4: 1 if using an int8 key, 2 if using 2 int4 keys
>  */
> #define SET_LOCKTAG_INT64(tag, key64) \
> SET_LOCKTAG_ADVISORY(tag, \
>  MyDatabaseId, \
>  (uint32) ((key64) >> 32), \
>  (uint32) (key64), \
>  1)
> #define SET_LOCKTAG_INT32(tag, key1, key2) \
> SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, key1, key2, 2)
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: How to run a task continuously in the background

2019-07-19 Thread Peter J. Holzer
On 2019-07-17 12:34:41 +0100, Rory Campbell-Lange wrote:
> We make extensive use of postgresql 'contacting' an external process,
> but our use case involves many databases in a cluster rather than many
> schemas. Also we don't have to deal with cancelling the external
> process. We chose this architecture to avoid many postgresql
> connections for LISTEN/NOTIFY.
> 
> We use a pgmemcache interface trigger to update memcached with a
> notification of the database holding items to be flushed.

Memcached is a cache. It will delete old items if storage is full (or if
they expire). Is this not a problem in your case or did you make sure
that this cannot happen?

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson

