Null database entry in pg_stat_database on pg-12 version

2021-05-26 Thread RAJAMOHAN
Hello all,

Today I came across this null database entry with oid as 0 in
*pg_stat_database* on *postgresql-12* version.

Is there any particular use case for this?  It also has some column values;
how is it gathering that data, and which database is responsible for it?

I am not seeing this in the postgresql-10 version.

Can someone clarify this?

postgres=# select * from pg_catalog.pg_stat_database where datid='0';
-[ RECORD 1 ]---------+------------------------------
datid                 | 0
datname               | 
numbackends           | 0
xact_commit           | 0
xact_rollback         | 0
blks_read             | 14253
blks_hit              | 1753072
tup_returned          | 271673
tup_fetched           | 212113
tup_inserted          | 152603
tup_updated           | 478
tup_deleted           | 14
conflicts             | 0
temp_files            | 0
temp_bytes            | 0
deadlocks             | 0
checksum_failures     | 
checksum_last_failure | 
blk_read_time         | 0
blk_write_time        | 0
stats_reset           | 2021-05-24 08:36:09.580433+00


Thanks & Regards,
Rajamohan.J
Mobile : +91 8098167651, +91 7259157485


Re: Null database entry in pg_stat_database on pg-12 version

2021-05-26 Thread Magnus Hagander
On Wed, May 26, 2021 at 9:20 AM RAJAMOHAN  wrote:
>
> Hello all,
>
> Today I came across this null database entry with oid as 0 in 
> pg_stat_database on postgresql-12 version.
>
> Any particular use case is there for this?  It also has some column values, 
> how it is gathering those data and which database is responsible for this?

Yes. This is new in PostgreSQL 12
(https://www.postgresql.org/docs/12/release-12.html). It tracks
accesses to shared objects
(https://www.postgresql.org/docs/12/monitoring-stats.html#PG-STAT-DATABASE-VIEW).
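
To see this concretely, the shared-objects row and the shared catalogs it
accounts for can be listed directly (a sketch):

```sql
-- Statistics for shared objects accumulate under datid 0 / NULL datname
SELECT datid, datname, blks_read, blks_hit
FROM pg_stat_database
WHERE datname IS NULL;

-- The cluster-wide shared catalogs themselves (pg_authid, pg_database, ...)
SELECT relname
FROM pg_class
WHERE relisshared AND relkind = 'r';
```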

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




An exception about comparison operators

2021-05-26 Thread 张元超
Hi, everyone,
I encountered a problem when using PostgreSQL's comparison operators. The 
problem is as follows:
Problem Description:
When I use the comparison operator "!=" in a query condition, such as "select 
* from t1 where c1 !=-1", the database returns an error: the operator "!=-" 
does not exist. This is because there is no space between '=' and '-'; if you 
enter a space between them, the SQL executes normally. So although we can make 
the SQL execute by adding a space, the behavior differs from the other 
comparison operators (such as ">, <, >=, <=, =, <>"), which do not have this 
problem.


I guess this is because the database did not correctly handle the "!=" 
operator during SQL parsing, so I think this should be a bug. The problem 
exists in PostgreSQL versions 11, 12, and 13.


At the same time, I tried other databases, such as Oracle, and did not find 
the same problem.
Looking forward to your reply.


Thanks very much!





Re: An exception about comparison operators

2021-05-26 Thread Laurenz Albe
On Wed, 2021-05-26 at 17:15 +0800, 张元超 wrote:
> I encountered a problem when using PostgreSQL's comparison operators. The 
> problem is as follows:
> Problem Description:
> When I use the comparison operator "!=" as the query condition, such as 
> "select * from t1 where c1 !=-1", the database returns an error: "!=-operator 
> does not exist". Because there is no space
> between ‘=’ and ‘-’, if you enter a space between them, the sql can be 
> executed normally. Therefore, although we can make sql execute normally by 
> adding spaces, its behavior is different from other
> comparison operators (such as ">,<,>=,<=,=,<>"). Other comparisons Operators 
> will not have such problems.
> 
> I guess that this should be because the database did not correctly handle the 
> "!=" operator during sql parsing, so I think this should be a bug. This 
> problem exists in the 11, 12, and 13 versions of
> PostgreSQL.
> 
> At the same time, I tried other databases, such as Oracle, but did not find 
> the same problem.
> Looking forward to your reply.

See this comment in "scan.l":

 /*
  * For SQL compatibility, '+' and '-' cannot be the
  * last char of a multi-char operator unless the operator
  * contains chars that are not in SQL operators.
  * The idea is to lex '=-' as two operators, but not
  * to forbid operator names like '?-' that could not be
  * sequences of SQL operators.
  */

That means that a trailing minus is only treated as not belonging to
the operator name if the preceding characters belong to a standard SQL
operator name.  Now "<" and ">" are standard operator characters, so
"<>-" is treated as two tokens.
But "!" does not appear in SQL standard operators, so there is no special
processing.

This is a hack to allow constructs like 1<>-2, which are required to
comply with the SQL standard.

If you want this behavior, switch to standard SQL operator names and
don't use !=.
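
The difference is easy to reproduce in psql (a quick sketch):

```sql
-- A trailing '-' is split off after standard SQL operator characters:
SELECT 1 <> -2;   -- ok: lexed as "<>" then "-2"

-- "!" is not a standard SQL operator character, so "!=-" lexes as one
-- (nonexistent) operator:
-- SELECT 1 !=-2;  -- ERROR: operator does not exist

SELECT 1 != -2;   -- ok once a space separates the operator and the sign
```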

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





How different is AWS-RDS postgres?

2021-05-26 Thread Rob Sargent
I have what purports to be Postgres 12 ( PostgreSQL 12.5 on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 
4.8.5-11), 64-bit ) running on RDS.  I am testing a new release of my 
service with an admittedly large data set (which may be swamping AWS 
bandwidth).  But my app logs tell me a table was created and loaded (and 
later, read).


From my tomcat log

   00:09:58.567 [https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG edu.utah.camplab.jx.PayloadFromMux - STAGING TABLE CREATED: bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594"
   00:09:58.585 [https-jsse-nio-10.0.2.28-15002-exec-3] INFO edu.utah.camplab.jx.PayloadFromMux - ff3ba73c-d927-431c-bd29-9687a47f1594: started COPY work at 1621987798585
   00:09:58.586 [ff3ba73c-d927-431c-bd29-9687a47f1594] INFO edu.utah.camplab.jx.PayloadWriterThread - bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594": Begin bulk copy segment
   00:10:01.597 [https-jsse-nio-10.0.2.28-15002-exec-11] ERROR edu.utah.camplab.servlet.PayloadSaveServlet - received payload
   00:10:01.598 [https-jsse-nio-10.0.2.28-15002-exec-11] DEBUG org.jooq.impl.DefaultConnectionProvider - setting auto commit : false
   00:10:01.599 [https-jsse-nio-10.0.2.28-15002-exec-11] DEBUG edu.utah.camplab.jx.PayloadFromMux - 074d449b-c3ba-499f-83e3-f48427fe0156: start transaction at 1621987801599
   00:10:01.599 [https-jsse-nio-10.0.2.28-15002-exec-11] DEBUG org.jooq.impl.DefaultConnectionProvider - set savepoint
   00:10:07.511 [ff3ba73c-d927-431c-bd29-9687a47f1594] INFO edu.utah.camplab.jx.PayloadWriterThread - bulk transfer of 2528447 took 8.925s
   00:10:07.511 [ff3ba73c-d927-431c-bd29-9687a47f1594] DEBUG edu.utah.camplab.jx.PayloadWriterThread - staged in 8925 ms
   00:10:07.567 [https-jsse-nio-10.0.2.28-15002-exec-3] INFO edu.utah.camplab.jx.PayloadFromMux - ff3ba73c-d927-431c-bd29-9687a47f1594: Total segment save took 9486 ms
   00:10:07.567 [https-jsse-nio-10.0.2.28-15002-exec-3] INFO edu.utah.camplab.jx.AbstractPayload - ff3ba73c-d927-431c-bd29-9687a47f1594: closing process ff3ba73c-d927-431c-bd29-9687a47f1594
   00:10:07.608 [https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG org.jooq.impl.DefaultConnectionProvider - release savepoint
   00:10:07.609 [https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG edu.utah.camplab.jx.PayloadFromMux - ff3ba73c-d927-431c-bd29-9687a47f1594: end transaction at 1621987807609

Which claims to have written 2,528,447 records in roughly 9 seconds into 
the newly created table 
bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594".  Nice.


However, no such table exists, though later processing renames it by 
appending "_done" to the name (being careful to remain under 64 characters).


My middleware does receive an exception notice

   00:10:55.101 [https-jsse-nio-10.0.2.28-15002-exec-3] ERROR edu.utah.camplab.jx.AbstractPayload - run ff3ba73c-d927-431c-bd29-9687a47f1594: Exception from db write: SQL [insert into segment select * from bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594" as s where s.probandset_id >= 'a000----' and s.probandset_id < 'b000----' ]; An I/O error occurred while sending to the backend.: {}



which confirms(?) that the table /was/ there and was read from 10 times 
prior, since I copy from this temporary, bulk-loaded table to the actual 
target in 16 chunks by diddling the first hex digit from 0 through f.  
Here zero through nine apparently didn't have a problem.  These 16 
inserts are in a single transaction, separate from the bulk copy.  
(There are a dozen more of these files processed and disappeared.)
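
One chunk of that 16-way copy looks something like this (a sketch; the staging
table name is taken from the log above, and the full UUID bounds shown are
illustrative placeholders, since the logged values are truncated):

```sql
-- Chunk for rows whose probandset_id starts with hex digit 'a'.
-- The UUID bounds here are illustrative, not the exact logged values.
INSERT INTO segment
SELECT *
FROM bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594" AS s
WHERE s.probandset_id >= 'a0000000-0000-0000-0000-000000000000'
  AND s.probandset_id <  'b0000000-0000-0000-0000-000000000000';
```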


My question is:
Should I be surprised that the initial bulk loaded table is nowhere to 
be seen, given the thumbs up from the logs?  Is this frailty inherent in 
AWS/RDS infrastructure?


Since this is an academic exercise, I have minimal AWS support, which 
has yet to chime in on this matter.  My access to the logs is, um, er, 
constrained.










Setting up replication

2021-05-26 Thread Oliver Kohll
Hi,

We currently have an app with the database on the same server as the app
itself. I'd like to transition to a system where

1) in the short term, the db replicates to a different server. This will
allow us to take the daily pg_dump backups from the replica rather than the
primary server. They're currently slowing down the system too much as they
run.

2) in the medium term, switch the replica to be the primary and connect to
that from the app, i.e. app and db will be on separate servers, letting us
resource each appropriately. A 3rd server can then be used to replicate to
for backup purposes.

3) in the long run, depending on demand that also gives us the option of
scaling the db horizontally e.g. with a distributed db like Citus.

Are there any suggestions / good walkthroughs of how to do number 1? There
are many options!

All I know so far is we can probably use streaming replication as I can
make sure the PostgreSQL versions on each server are the same.

One thing I'm wondering is how often should a base backup be taken? Also
should we set up everything manually with scripts or use a 3rd party backup
tool like barman?

Any suggestions appreciated.

Oliver


Re: How different is AWS-RDS postgres?

2021-05-26 Thread Ron

On 5/26/21 12:50 PM, Rob Sargent wrote:
I have what purports to be Postgres 12 ( PostgreSQL 12.5 on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 
4.8.5-11), 64-bit ) running on RDS.  I testing a new release of my service 
with an admittedly large data set (which may be my swamping AWS 
bandwidth).  But my app logs tell me a table was created and loaded (and 
later, read)


From my tomcat log

00:09:58.567 [https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG
edu.utah.camplab.jx.PayloadFromMux - STAGING TABLE CREATED:
bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594" 00:09:58.585
[https-jsse-nio-10.0.2.28-15002-exec-3] INFO
edu.utah.camplab.jx.PayloadFromMux -
ff3ba73c-d927-431c-bd29-9687a47f1594: started COPY work at
1621987798585 00:09:58.586 [ff3ba73c-d927-431c-bd29-9687a47f1594] INFO
edu.utah.camplab.jx.PayloadWriterThread -
bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594": Begin bulk
copy segment 00:10:01.597 [https-jsse-nio-10.0.2.28-15002-exec-11]
ERROR edu.utah.camplab.servlet.PayloadSaveServlet - received payload
00:10:01.598 [https-jsse-nio-10.0.2.28-15002-exec-11] DEBUG
org.jooq.impl.DefaultConnectionProvider - setting auto commit : false
00:10:01.599 [https-jsse-nio-10.0.2.28-15002-exec-11] DEBUG
edu.utah.camplab.jx.PayloadFromMux -
074d449b-c3ba-499f-83e3-f48427fe0156: start transaction at
1621987801599 00:10:01.599 [https-jsse-nio-10.0.2.28-15002-exec-11]
DEBUG org.jooq.impl.DefaultConnectionProvider - set savepoint
00:10:07.511 [ff3ba73c-d927-431c-bd29-9687a47f1594] INFO
edu.utah.camplab.jx.PayloadWriterThread - bulk transfer of 2528447
took 8.925s 00:10:07.511 [ff3ba73c-d927-431c-bd29-9687a47f1594] DEBUG
edu.utah.camplab.jx.PayloadWriterThread - staged in 8925 ms
00:10:07.567 [https-jsse-nio-10.0.2.28-15002-exec-3] INFO
edu.utah.camplab.jx.PayloadFromMux -
ff3ba73c-d927-431c-bd29-9687a47f1594: Total segment save took 9486 ms
00:10:07.567 [https-jsse-nio-10.0.2.28-15002-exec-3] INFO
edu.utah.camplab.jx.AbstractPayload -
ff3ba73c-d927-431c-bd29-9687a47f1594: closing process
ff3ba73c-d927-431c-bd29-9687a47f1594 00:10:07.608
[https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG
org.jooq.impl.DefaultConnectionProvider - release savepoint
00:10:07.609 [https-jsse-nio-10.0.2.28-15002-exec-3] DEBUG
edu.utah.camplab.jx.PayloadFromMux -
ff3ba73c-d927-431c-bd29-9687a47f1594: end transaction at 1621987807609 

Which claims to have written 2,528,447 records in roughly 9 seconds into 
the newly created table 
"bulk.rjs.GEV15_15_FF3ba73c_d927_431c_bd29_9687147f1594". Nice.


However, no such table exists, though later processing renames it by 
appending "_done" to the name (being careful to remain under 64 char)


My middleware does receive an exception notice

00:10:55.101 [https-jsse-nio-10.0.2.28-15002-exec-3] ERROR
edu.utah.camplab.jx.AbstractPayload - run
ff3ba73c-d927-431c-bd29-9687a47f1594: Exception from db write: SQL
[insert into segment select * from
bulk."rjs_GEV15_15_ff3ba73c_d927_431c_bd29_9687a47f1594" as s where
s.probandset_id >= 'a000----' and
s.probandset_id < 'b000----' ]; An I/O
error occurred while sending to the backend.: {} 



which confirms(?) that the table /was/ there and read from 10 times prior 
since I copy from this temp, bulk loaded table to the actual target in 16 
chunks by diddling the first hex digit from 0 through f.  Here zero 
through nine apparently didn't have a problem.  These 16 inserts are in a 
single transaction, separate from the bulk copy.  (There are a dozen more  
of these files processed and disappeared.)


My question is:
Should I be surprised that the initial bulk loaded table is nowhere to be 
seen, given the thumbs up from the logs?  Is this frailty inherent in 
AWS/RDS infrastructure?


Since this is an academic exorcise, I have minimal AWS support, which has 
yet to chime in on this matter. My access to the logs is um, er, constrained.


The big differences I notice are:

1. "postgres" is not a superuser,
2. viewing logs is a hassle.

Otherwise, they're really similar.  We've pumped about 6TB of data into an 
instance, and it's been rock solid.  JBoss is quite happy, and there haven't 
been any problems.


--
Angular momentum makes the world go 'round.


Re: How different is AWS-RDS postgres?

2021-05-26 Thread Rob Sargent



My question is:
Should I be surprised that the initial bulk loaded table is nowhere 
to be seen, given the thumbs up from the logs?  Is this frailty 
inherent in AWS/RDS infrastructure?


Since this is an academic exorcise, I have minimal AWS support, which 
has yet to chime in on this matter. My access to the logs is um, er, 
constrained.


The big differences I notice are:

1. "postgres" is not a superuser,
2. viewing logs is a hassle.

Otherwise, they're really similar.  We've pumped about 6TB of data 
into an instance, and it's been rock solid.  JBoss is quite happy, and 
there haven't been any problems.


--
Angular momentum makes the world go 'round.

Good to hear.
I pushed this same data set into our previous server (version 10), so I 
was surprised it blew up this time, though in that case these separate 
files were more spread out in arrival time.  Nothing in the (available) 
logs other than lots of auto-vacuuming and the suggestion to increase 
wal size (which I thought a little surprising).  I really would like to 
know just what went south, but that isn't obvious yet - OOM? network? 
disk full? - and I can't find any smoking gun on the AWS/RDS pages.


rjs


Re: How different is AWS-RDS postgres?

2021-05-26 Thread Ron

On 5/26/21 1:24 PM, Rob Sargent wrote:



My question is:
Should I be surprised that the initial bulk loaded table is nowhere to 
be seen, given the thumbs up from the logs?  Is this frailty inherent in 
AWS/RDS infrastructure?


Since this is an academic exorcise, I have minimal AWS support, which 
has yet to chime in on this matter. My access to the logs is um, er, 
constrained.


The big differences I notice are:

1. "postgres" is not a superuser,
2. viewing logs is a hassle.

Otherwise, they're really similar.  We've pumped about 6TB of data into 
an instance, and it's been rock solid.  JBoss is quite happy, and there 
haven't been any problems.


--
Angular momentum makes the world go 'round.

Good to hear.
I pushed this same data set into our previous server (Version 10) so I was 
surprised it blew up this time, though in that case these separate files 
were more spread out in arrival time. Nothing in the (available) logs 
other that lots of auto-vacuuming  and the suggestion to increase wal size 
(which I thought a little surprising).  I really would like to know just 
what went south but that isn't obvious yet - OOM?, network?, disk full? 
but I can't find any smoking gun on the AWS/RDS pages.


The difficulty in viewing logs needs to be emphasized.

--
Angular momentum makes the world go 'round.


Re: Setting up replication

2021-05-26 Thread Vijaykumar Jain
core ref:
PostgreSQL: Documentation: 13: Part III. Server Administration

although this is quite verbose, you will keep coming back to it to
tune your setup.


To understand basic setups, some starting points:
How to Set Up Streaming Replication in PostgreSQL 12 - Percona Database
Performance Blog

How to setup Postgres 13 WAL streaming replication on Ubuntu 18.04 - Highgo
Software Inc.


some other references.
dhamaniasad/awesome-postgres: A curated list of awesome PostgreSQL
software, libraries, tools and resources, inspired by awesome-mysql
(github.com) 

a typical setup

Primary ---streaming replication --->  (Replica1, Replica2 )

Primary - writes
replica R1,R2  - reads ( depending on load can be put behind load
balancer like haproxy and connection pooler pgbouncer)
Scaling PostgreSQL using Connection Poolers and Load Balancers for an
Enterprise Grade environment - Percona Database Performance Blog

https://tinyurl.com/f2zk76yc  (EDB link, but the link is too big)


backups:
vm snapshots ( optional )
physical disk backups. ( optional )
pg_dumpall from replica and save it to external storage daily. (PostgreSQL:
Documentation: 13: pg_dumpall
)
barman (point in time recovery, can configure to save 7 days of WALs for
point in time recovery ) on external server. (Barman Manual (pgbarman.org)
)
Implement backup with Barman. This tutorial is part of a multipage… | by
Sylvain | coderbunker | Medium


Ideally, I would always go with the core docs, as many tutorials get stale,
but I mention these to help you get started quickly; then come back to the
core docs.

Things can get more complex (or simpler) if you go with auto failover
solutions
pg_auto_failover
patroni
enterprise solutions from EDB, Crunchy etc.

this channel on youtube is pretty neat too. Scaling Postgres - YouTube


I am not sure whether my reply is helpful or overkill for
simple setups.
anyways :)
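
As a concrete starting point for step 1, the primary-side preparation for
streaming replication can be sketched in SQL (the role name, password, and
slot name below are illustrative):

```sql
-- On the primary: a dedicated role the replica will connect as
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';

-- Optional but handy: a physical replication slot, so the primary
-- retains WAL until the replica has consumed it
SELECT pg_create_physical_replication_slot('replica1');
```

The replica itself is then seeded once with pg_basebackup and pointed at the
primary via primary_conninfo (plus a pg_hba.conf entry for the replicator
role).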


On Wed, 26 May 2021 at 23:28, Oliver Kohll  wrote:

> Hi,
>
> We currently have an app with the database on the same server as the app
> itself. I'd like to transition to a system where
>
> 1) in the short term, the db replicates to a different server. This will
> allow us to take the daily pg_dump backups from the replica rather than the
> primary server. They're currently slowing down the system too much as they
> run.
>
> 2) in the medium term, switch the replica to be the primary and connect to
> that from the app, i.e. app and db will be on separate servers, letting us
> resource each appropriately. A 3rd server can then be used to replicate to
> for backup purposes.
>
> 3) in the long run, depending on demand that also gives us the option of
> scaling the db horizontally e.g. with a distributed db like Citus.
>
> Are there any suggestions / good walkthroughs of how to do number 1? There
> are many options!
>
> All I know so far is we can probably use streaming replication as I can
> make sure the PostgreSQL versions on each server are the same.
>
> One thing I'm wondering is how often should a base backup be taken? Also
> should we set up everything manually with scripts or use a 3rd party backup
> tool like barman?
>
> Any suggestions appreciated.
>
> Oliver
>


-- 
Thanks,
Vijay
Mumbai, India


Re: Setting up replication

2021-05-26 Thread Oliver Kohll
That is helpful, thanks Vijay.

I will wade in and give it a go. For some reason I had it in my head that
it was a good idea to run pg_basebackup frequently, e.g. once a day, but it
looks like it's only necessary once for the initial transfer to the replica.
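
Once the replica is attached, one way to confirm it is keeping up after that
initial transfer is to check lag on the primary (a sketch):

```sql
-- Approximate replication lag in bytes, per attached replica
SELECT application_name,
       state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;
```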

Oliver

On Wed, 26 May 2021 at 20:37, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

> core ref:
> PostgreSQL: Documentation: 13: Part III. Server Administration
> 
> although this is a lot verbose, but you would keep coming back to this to
> tune your setup.
>
>
> to understand basic setups. some are
> How to Set Up Streaming Replication in PostgreSQL 12 - Percona Database
> Performance Blog
> 
> How to setup Postgres 13 WAL streaming replication on Ubuntu 18.04 -
> Highgo Software Inc.
> 
>
> some other references.
> dhamaniasad/awesome-postgres: A curated list of awesome PostgreSQL
> software, libraries, tools and resources, inspired by awesome-mysql
> (github.com) 
>
> a typical setup
>
> Primary ---streaming replication --->  (Replica1, Replica2 )
>
> Primary - writes
> replica R1,R2  - reads ( depending on load can be put behind load
> balancer like haproxy and connection pooler pgbouncer)
> Scaling PostgreSQL using Connection Poolers and Load Balancers for an
> Enterprise Grade environment - Percona Database Performance Blog
> 
> https://tinyurl.com/f2zk76yc  (EDB link, but the link is too big)
>
>
> backups:
> vm snapshots ( optional )
> physical disk backups. ( optional )
> pg_dumpall from replica and save it to external storage daily. (PostgreSQL:
> Documentation: 13: pg_dumpall
> )
> barman (point in time recovery, can configure to save 7 days of WALs for
> point in time recovery ) on external server. (Barman Manual (pgbarman.org)
> )
> Implement backup with Barman. This tutorial is part of a multipage… | by
> Sylvain | coderbunker | Medium
> 
>
> Ideally, i would always go with core docs, as many tutorials get stale,
> but i just mention to help get started quickly and then come back to core
> docs.
>
> Things can get more complex (or simpler) if you go with auto failover
> solutions
> pg_auto_failover
> patroni
> enterprise solutions from EDB, cruncy etc .
>
> this channel on youtube is pretty neat too. Scaling Postgres - YouTube
> 
>
> I am not sure my reply  is making it helpful or making it too loud for
> simple setups.
> anyways :)
>
>
> On Wed, 26 May 2021 at 23:28, Oliver Kohll  wrote:
>
>> Hi,
>>
>> We currently have an app with the database on the same server as the app
>> itself. I'd like to transition to a system where
>>
>> 1) in the short term, the db replicates to a different server. This will
>> allow us to take the daily pg_dump backups from the replica rather than the
>> primary server. They're currently slowing down the system too much as they
>> run.
>>
>> 2) in the medium term, switch the replica to be the primary and connect
>> to that from the app, i.e. app and db will be on separate servers, letting
>> us resource each appropriately. A 3rd server can then be used to replicate
>> to for backup purposes.
>>
>> 3) in the long run, depending on demand that also gives us the option of
>> scaling the db horizontally e.g. with a distributed db like Citus.
>>
>> Are there any suggestions / good walkthroughs of how to do number 1?
>> There are many options!
>>
>> All I know so far is we can probably use streaming replication as I can
>> make sure the PostgreSQL versions on each server are the same.
>>
>> One thing I'm wondering is how often should a base backup be taken? Also
>> should we set up everything manually with scripts or use a 3rd party backup
>> tool like barman?
>>
>> Any suggestions appreciated.
>>
>> Oliver
>>
>
>
> --
> Thanks,
> Vijay
> Mumbai, India
>


Re: How different is AWS-RDS postgres?

2021-05-26 Thread Ian Harding
There is an option to send the logs to cloudwatch which makes it less awful
to look at them.

On Wed, May 26, 2021, 11:52 AM Ron  wrote:

> On 5/26/21 1:24 PM, Rob Sargent wrote:
>
>
> My question is:
> Should I be surprised that the initial bulk loaded table is nowhere to be
> seen, given the thumbs up from the logs?  Is this frailty inherent in
> AWS/RDS infrastructure?
>
> Since this is an academic exorcise, I have minimal AWS support, which has
> yet to chime in on this matter. My access to the logs is um, er,
> constrained.
>
>
> The big differences I notice are:
>
> 1. "postgres" is not a superuser,
> 2. viewing logs is a hassle.
>
> Otherwise, they're really similar.  We've pumped about 6TB of data into an
> instance, and it's been rock solid.  JBoss is quite happy, and there
> haven't been any problems.
>
> --
> Angular momentum makes the world go 'round.
>
> Good to hear.
> I pushed this same data set into our previous server (Version 10) so I was
> surprised it blew up this time, though in that case these separate files
> were more spread out in arrival time.  Nothing in the (available) logs
> other that lots of auto-vacuuming  and the suggestion to increase wal size
> (which I thought a little surprising).  I really would like to know just
> what went south but that isn't obvious yet - OOM?, network?, disk full? but
> I can't find any smoking gun on the AWS/RDS pages.
>
>
> The difficulty in viewing logs needs to be emphasized.
>
> --
> Angular momentum makes the world go 'round.
>


Re: How different is AWS-RDS postgres?

2021-05-26 Thread Rob Sargent


> On May 26, 2021, at 4:37 PM, Ian Harding  wrote:
> 
> 
> There is an option to send the logs to cloudwatch which makes it less awful 
> to look at them. 
I have that, but precious little of interest there.  Lots of autovac, a 
smattering of hints to increase wal size!?  I have yet to spot anything which 
corresponds to the “I/O failure” which the middleware gets.

I don’t have query logging on, but I do see reports from my psql session 
fat-fingering.

As to the logs UI, the search is pretty feeble; I don’t understand why there 
are four channels of logs; the graphs are wearing the same rose-coloured 
glasses as the logs.
And 24 hours without a peep from AWS support. (I don’t call mailing me back 
what I sent them “contact”.)

My guess right now is that the entire tomcat connection pool is in a single 
transaction?  That’s the only way the tables could disappear.  I am making 
separate calls to JDBC getConnection() for each doPost.
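
One quick way to test that single-transaction theory from the database side
is to look for sessions holding a transaction open (a sketch, run in psql
against the instance while the load is in flight):

```sql
-- Pool connections stuck in an open transaction would show up here
SELECT pid, state, xact_start, backend_start, left(query, 60) AS query
FROM pg_stat_activity
WHERE state LIKE 'idle in transaction%'
ORDER BY xact_start;
```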