SV: pgaudit and create postgis extension logs a lot inserts

2018-01-22 Thread Svensson Peter
OK,  thanks a lot. 

Regards,
Peter

From: David Steele [[email protected]]
Sent: 19 January 2018 14:41
To: Magnus Hagander; Joe Conway
Cc: Svensson Peter; [email protected]
Subject: Re: pgaudit and create postgis extension logs a lot inserts

On 1/19/18 6:05 AM, Magnus Hagander wrote:
>
>
> On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway wrote:
>
> On 01/18/2018 04:12 AM, Svensson Peter wrote:
> > When I then install the postgis extension in a database, it writes a huge
> > amount of log output, which slows down the server a lot.
> > Not only table and function creation is logged; even all the inserts into
> > spatial_ref_sys are written to the audit log.
> >
> > LOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,"
> > ..
> > INSERT INTO ""spatial_ref_sys"" (""srid"",""auth_name"
> > 
> >
> > This behaviour makes pgaudit useless in our environment because of the
> > overhead of writing the log file.
>
> How often do you intend to install PostGIS? Disable pgaudit, install
> PostGIS, enable pgaudit?
>
>
> Would it make sense for pgaudit to, at least as an option, not include DDL
> statements that are generated as "sub-parts" of a CREATE EXTENSION? It
> should still log the CREATE EXTENSION of course, but not necessarily all
> the contents of it, since that's actually defined in the extension
> itself already?

That's doable, but I think it could be abused if it were always on, and
installing extensions is generally not a daily activity anyway.

It seems that in this case the best course of action is to disable pgaudit
before installing postgis, or to install postgis first.
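
For example, something along these lines should work (just a sketch; it assumes
you run the install as a superuser and that session audit logging is controlled
by the pgaudit.log setting):

-- temporarily turn off pgaudit session logging for this session only
SET pgaudit.log = 'none';
CREATE EXTENSION postgis;
-- go back to the configured value
RESET pgaudit.log;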

Regards,
--
-David
[email protected]



Re: 8.2 Autovacuum BUG ?

2018-01-22 Thread pavan95
Hi Pallav,

I'm currently on PostgreSQL 9.1. Everything was fine until December 27th, 2017,
but to my surprise the volume of archive logs started to increase from
December 28th, 2017, and it is still high today.

The configuration parameters are at their defaults, and everything was fine
with the default configuration in the past. I'm facing a serious problem with
this huge archive generation of 48 GB per day, that is, 2 GB per hour, while
the DML statements are almost the same as before.

In detail, archive logs are generated at the 9th and 39th minute of every hour,
preceded by the log message 'checkpoints are occurring too frequently (2
seconds apart). Consider increasing the configuration parameter
"checkpoint_segments"'.
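
For reference, as far as I know these are the 9.1 defaults for the checkpoint
settings the hint refers to (so this is what I am effectively running with):

checkpoint_segments = 3              # checkpoint after 3 x 16 MB of WAL
checkpoint_timeout = 5min            # or after 5 minutes, whichever comes first
checkpoint_completion_target = 0.5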

How can I reduce this abnormal archive log generation? Thanks in advance.

Regards,
Pavan






Re: 8.2 Autovacuum BUG ?

2018-01-22 Thread Sergei Kornilov
Hello
How big is the database?
Please show the result of this query: select * from pg_stat_activity where
query like 'autovacuum%';
I think an anti-wraparound autovacuum is running here. In that case everything
is normal: anti-wraparound vacuum produces a lot of WAL, and it is necessary
for the database to keep working.
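
You can check how close each database is to wraparound with something like:

select datname, age(datfrozenxid) from pg_database order by 2 desc;

Anti-wraparound autovacuum starts when table age approaches
autovacuum_freeze_max_age (200 million transactions by default).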

PS: please note that PostgreSQL 9.1 is EOL.

regards, Sergei



Re: 8.2 Autovacuum BUG ?

2018-01-22 Thread pavan95
Hello Sergei,

The size of the database is 24GB.

The output of the above query is (application_name, client_addr, client_hostname
and client_port are empty for the autovacuum workers, so they are omitted here):

 datid  | datname  | procpid | usesysid | usename  |          backend_start           |            xact_start            |           query_start            | waiting |          current_query
--------+----------+---------+----------+----------+----------------------------------+----------------------------------+----------------------------------+---------+-----------------------------------
 400091 | prod_erp |   19373 |       10 | postgres | 2018-01-22 15:40:38.163865+05:30 | 2018-01-22 15:40:38.655754+05:30 | 2018-01-22 15:40:38.655754+05:30 | f       | autovacuum: ANALYZE public.table1
 400091 | prod_erp |   19373 |       10 | postgres | 2018-01-22 15:40:38.163865+05:30 | 2018-01-22 15:40:38.218954+05:30 | 2018-01-22 15:40:38.218954+05:30 | f       | autovacuum: ANALYZE public.table2
 400091 | prod_erp |   18440 |       10 | postgres | 2018-01-22 15:39:38.128879+05:30 | 2018-01-22 15:39:38.166507+05:30 | 2018-01-22 15:39:38.166507+05:30 | f       | autovacuum: VACUUM public.table3


Could you please explain what anti-wraparound autovacuum is? Is it related to
preventing transaction ID wraparound failures? If so, will running VACUUM FULL
against the database suppress this abnormal generation of archive logs?

Please give your kind advice.

Regards,
Pavan







Re: 8.2 Autovacuum BUG ?

2018-01-22 Thread Andreas Kretschmer



On 22.01.2018 at 11:21, pavan95 wrote:

Could you please explain what anti-wraparound autovacuum is? Is it related
to preventing transaction ID wraparound failures?


Yes.



If so, will running VACUUM FULL against the database suppress this abnormal
generation of archive logs?


Such a vacuum freeze isn't abnormal. Do you really have a problem with it?
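
If you want to see which tables the freeze runs are working through, a quick
sketch like this shows the oldest relfrozenxid ages:

select relname, age(relfrozenxid)
from pg_class
where relkind = 'r'
order by 2 desc
limit 10;

Once a table's age exceeds autovacuum_freeze_max_age (200 million by default),
autovacuum must freeze it, and that freezing is WAL-logged, which would explain
the extra archive volume.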


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




PG 9.6 Slow inserts with long-lasting LWLocks

2018-01-22 Thread Pavel Suderevsky
Hi,

I have an issue with sporadically slow insert operations: the query duration
exceeds 1 second, while the average is about 50 ms.

Configuration:
OS: CentOS 7.2.151
PostgreSQL: 9.6.3
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Memory (MB):
              total        used        free      shared  buff/cache   available
Mem:         193166       10324        1856       44522      180985      137444
Swap:             0           0           0
Storage: about 4 GB of BBU write cache.

shared_buffers = 32GB
work_mem = 128MB
max_pred_locks_per_transaction = 8192

This can occur once a day, or not happen for a few days, while the system load
stays the same. The "inserts" are prepared-statement batches of 4-5 INSERTs
each. Neither excessive memory usage nor high disk or CPU utilization has been
caught. WAL write rates, checkpoints, and everything else available in the
pg_stat_* views were checked, and nothing suspicious was found.

There are several scenarios in which such long inserts have been spotted:
1. No locks caught at all (500 ms check intervals).
2. The wait event is "buffer_mapping" - this looks like the most common case
(a sketch of the sampling query is shown after this list):
 snapshot time              | state  | trx duration    | query duration  | wait_event_type | wait_event     | query
 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | 00:00:00.524729 | LWLockTranche   | buffer_mapping | INSERT INTO table..
 2017-12-22 03:16:00.65814  | active | 00:00:00.012435 | 00:00:00.001855 | LWLockTranche   | buffer_mapping | INSERT INTO table..
3. The wait event is "SerializablePredicateLockListLock" (I believe this has
the same root cause as the previous case).
4. No locks caught, but ~39 other backends are active in parallel.
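
The snapshots in case 2 come from polling pg_stat_activity every 500 ms; a
rough sketch of such a sampling query on 9.6 (not the exact monitoring
statement) is:

select now() as snapshot_time,
       state,
       now() - xact_start  as trx_duration,
       now() - query_start as query_duration,
       wait_event_type,
       wait_event,
       query
from pg_stat_activity
where state = 'active'
  and wait_event is not null;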

I assumed it could somehow be related to NUMA being enabled, but it looks like
memory is allocated evenly, and zone_reclaim_mode is 0.
numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42
44 46
node 0 size: 130978 MB
node 0 free: 1251 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43
45 47
node 1 size: 65536 MB
node 1 free: 42 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

numastat -m

Per-node system memory usage (in MBs):
                          Node 0          Node 1           Total
                 --------------- --------------- ---------------
MemTotal               130978.34        65536.00       196514.34
MemFree                  1479.07          212.12         1691.20
MemUsed                129499.27        65323.88       194823.14
Active                  72241.16        37254.56       109495.73
Inactive                47936.24        24205.40        72141.64
Active(anon)            21162.41        18978.96        40141.37
Inactive(anon)           1061.94         7522.34         8584.27
Active(file)            51078.76        18275.60        69354.36
Inactive(file)          46874.30        16683.06        63557.36
Unevictable                 0.00            0.00            0.00
Mlocked                     0.00            0.00            0.00
Dirty                       0.04            0.02            0.05
Writeback                   0.00            0.00            0.00
FilePages              116511.36        60923.16       177434.52
Mapped                  16507.29        23912.82        40420.11
AnonPages                3661.55          530.26         4191.81
Shmem                   18558.28        25964.74        44523.02
KernelStack                16.98            5.77           22.75
PageTables               3943.56         1022.25         4965.81
NFS_Unstable                0.00            0.00            0.00
Bounce                      0.00            0.00            0.00
WritebackTmp                0.00            0.00            0.00
Slab                     2256.09         1291.53         3547.61
SReclaimable             2108.29          889.85         2998.14
SUnreclaim                147.80          401.68          549.47
AnonHugePages            1824.00          284.00         2108.00
HugePages_Total             0.00            0.00            0.00
HugePages_Free              0.00            0.00            0.00
HugePages_Surp              0.00            0.00            0.00

$ cat /proc/62679/numa_maps | grep N0 | grep zero
7f92509d3000 prefer:0 file=/dev/zero\040(deleted) dirty=8419116 mapmax=154
active=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4

Could you advise what could cause such occasional long inserts with
long-lasting LWLocks?