Oracle to postgres migration via ora2pg (blob data)

2019-07-31 Thread Daulat Ram
Hello team,

We have to migrate a schema from Oracle to Postgres, but one table has the 
following large LOB segments and is taking a long time to export. What 
parameters should we set in ora2pg.conf to speed up the data export by ora2pg?

Table: CLIENT_DB_AUDIT_LOG

SEGMENT_TYPE SEGMENT_NAME       SIZE
LOBSEGMENT   SYS_LOB095961C8$$  80.26
LOBSEGMENT   SYS_LOB095961C7$$  79.96
LOBSEGMENT   SYS_LOB094338C8$$   8.84
LOBSEGMENT   SYS_LOB084338C7$$   8.71
LOBSEGMENT   SYS_LOB085961C9$$   5.32

VM Details are:

RAM:   8 GB
vCPUs: 2
Disk:  40 GB
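For LOB-heavy tables, ora2pg export speed is mostly governed by how many rows 
are fetched per batch and how much parallelism is used. A possible starting 
point for ora2pg.conf, with illustrative values that should be tuned to the 
8 GB of RAM on this VM (directive names per the ora2pg manual):

```ini
# Rows fetched from Oracle per batch; lower it for wide LOB rows.
DATA_LIMIT      10000
# Batch size used instead of DATA_LIMIT for tables with BLOB/CLOB columns;
# keep this small so each batch fits in memory.
BLOB_LIMIT      500
# Maximum bytes fetched for a single LONG/LOB value.
LONGREADLEN     1047552
# Parallel writer processes and parallel Oracle connections (-j / -J on the
# command line); -J needs a unique numeric key to split the table scan.
JOBS            2
ORACLE_COPIES   2
```

With only 2 vCPUs, high parallelism is unlikely to help much; the batch-size 
directives usually matter more for LOB exports.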

Thanks,



ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Daulat Ram
Hi team ,
I am getting the below error while fetching the data from Oracle 12c using 
ora2pg.

DBD::Oracle::st fetchall_arrayref failed: ORA-24345: A Truncation or null fetch 
error occurred (DBD SUCCESS_WITH_INFO: OCIStmtFetch, LongReadLen too small 
and/or LongTruncOk not set)ERROR no statement executing (perhaps you need to 
call execute first) [for Statement "SELECT "USERS_ID","NAME","USERS" FROM 
"GBOPSUI"."USER_GROUP_USERS_V5" a"] at /usr/local/share/perl5/Ora2Pg.pm line 
14110.


Initially I did not have LongReadLen set, so I thought that was the cause; but 
I have since set LongReadLen on the db handle to 9000, and the error persists.

Thanks,
Daulat
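For what it's worth, ora2pg takes these values from its own configuration file 
rather than from a handle you set yourself, so a LongReadLen set directly on 
the DBD::Oracle handle may be overridden. A sketch of the relevant ora2pg.conf 
directives (values illustrative; 9000 bytes is easily too small for CLOB data):

```ini
# Bytes reserved per LONG/LOB fetch.
LONGREADLEN   10485760
# Set to 1 to truncate oversized values instead of aborting (data loss risk).
LONGTRUNKOK   0
```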



RE: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Daulat Ram
Hi,

We are using the ora2pg version below; the data types for the tables follow.

bash-4.2$ ora2pg -v
Ora2Pg v20.0
bash-4.2$

SQL> SELECT DISTINCT data_type FROM dba_tab_columns WHERE owner='GBOPSUI';

DATA_TYPE

TIMESTAMP(6)
FLOAT
CLOB
NUMBER
CHAR
DATE
VARCHAR2
BLOB

SQL>

We are getting the same issue for tables that have blob, clob and char 
columns.

Thanks,
Daulat

-Original Message-
From: Luca Ferrari  
Sent: Tuesday, August 13, 2019 8:32 PM
To: Daulat Ram 
Cc: pgsql-performa...@lists.postgresql.org; pgsql-general@lists.postgresql.org
Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  wrote:
> Initially I did not have LongReadLen set, so I thought that was the cause; 
> but I have since set LongReadLen on the db handle to 9000.

Apparently this is an Oracle problem, because it accepted data longer than its 
type allows; my guess would be that your table has a
char(n) column that could be enlarged before the migration.
<https://support.oracle.com/knowledge/Siebel/476591_1.html>
Hope this helps.
And please report the version of ora2pg when asking for help.

Luca


RE: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-14 Thread Daulat Ram
Hi Adrian ,

We have the output below. What do we need to change?

bash-4.2$ ora2pg -c ora2pg.bidder.conf -t SHOW_ENCODING

Current encoding settings that will be used by Ora2Pg:
Oracle NLS_LANG AMERICAN_AMERICA.AL32UTF8
Oracle NLS_NCHAR AL32UTF8
Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6
Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS
PostgreSQL CLIENT_ENCODING UTF8
Perl output encoding ''
Showing current Oracle encoding and possible PostgreSQL client encoding:
Oracle NLS_LANG AMERICAN_AMERICA.WE8MSWIN1252
Oracle NLS_NCHAR WE8MSWIN1252
Oracle NLS_TIMESTAMP_FORMAT YYYY-MM-DD HH24:MI:SS.FF6
Oracle NLS_DATE_FORMAT YYYY-MM-DD HH24:MI:SS
PostgreSQL CLIENT_ENCODING WIN1252
bash-4.2$

thanks

 
-Original Message-
From: Adrian Klaver  
Sent: Tuesday, August 13, 2019 11:27 PM
To: Daulat Ram ; Luca Ferrari 
Cc: pgsql-performa...@lists.postgresql.org; pgsql-general@lists.postgresql.org
Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

On 8/13/19 10:34 AM, Daulat Ram wrote:
> Hi,
> 
> We are using the ora2pg version below; the data types for the tables follow.
> 
> bash-4.2$ ora2pg -v
> Ora2Pg v20.0
> bash-4.2$
> 
> SQL> SELECT DISTINCT data_type FROM dba_tab_columns WHERE owner='GBOPSUI';
> 
> DATA_TYPE
> 
> TIMESTAMP(6)
> FLOAT
> CLOB
> NUMBER
> CHAR
> DATE
> VARCHAR2
> BLOB
> 
> SQL>
> 
> We are getting the same issue for tables that have blob, clob and char 
> columns.

The ora2pg issue below seems to have more information on this:

https://github.com/darold/ora2pg/issues/342

> 
> Thanks,
> Daulat
> 
> -Original Message-
> From: Luca Ferrari 
> Sent: Tuesday, August 13, 2019 8:32 PM
> To: Daulat Ram 
> Cc: pgsql-performa...@lists.postgresql.org; pgsql-general@lists.postgresql.org
> Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg
> 
> On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  
> wrote:
>> Initially I did not have LongReadLen set, so I thought that was the cause; 
>> but I have since set LongReadLen on the db handle to 9000.
> 
> Apparently this is an Oracle problem, because it accepted data longer than 
> its type allows; my guess would be that your table has a
> char(n) column that could be enlarged before the migration.
> <https://support.oracle.com/knowledge/Siebel/476591_1.html>
> Hope this helps.
> And please report the version of ora2pg when asking for help.
> 
> Luca
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


Exporting and importing table having blob datatype.

2019-09-11 Thread Daulat Ram
Hello Team,

We have a large table with a blob column. Can anyone please suggest how we can 
export it from the Oracle DB in parts via ora2pg and then import it into 
Postgres?

Thanks,
Daulat
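One commonly used approach (a sketch; the table name is from an earlier thread 
here, flags per the ora2pg manual) is to export only that table as COPY 
statements, keep LOB batches small, and load the result with psql:

```shell
# Export a single table's data as COPY statements; -a is the ALLOW list.
# BLOB_LIMIT in ora2pg.conf controls the batch size for tables with LOBs.
ora2pg -c ora2pg.conf -t COPY -a CLIENT_DB_AUDIT_LOG -o client_db_audit_log.sql

# Load into Postgres
psql -d targetdb -f client_db_audit_log.sql
```

Splitting the export into parts can also be done with the WHERE directive in 
ora2pg.conf (key-range predicates per run), assuming the table has a usable 
numeric key.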


Monitor Postgres database status on Docker

2019-09-24 Thread Daulat Ram
Hi team,

We want to check the postgres database status on docker container just like we 
monitor Postgres (up  / down) via /etc/init.d/postgresql status

But I am not sure how we can do that with docker.

Thanks,
Daulat








RE: Monitor Postgres database status on Docker

2019-09-24 Thread Daulat Ram
Thanks, but how can we use it for a docker container?

Regards,
Daulat

From: Fan Liu 
Sent: Tuesday, September 24, 2019 3:02 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: RE: Monitor Postgres database status on Docker

Hi,

I am not from PostgreSQL team.
Just to let you know: when we run PostgreSQL in Kubernetes, we use the below 
command for the liveness check.

pg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME


BRs,
Fan Liu


From: Daulat Ram 
Sent: Tuesday, September 24, 2019 5:18 PM
To: pgsql-general@lists.postgresql.org
Subject: Monitor Postgres database status on Docker

Hi team,

We want to check the postgres database status on docker container just like we 
monitor Postgres (up  / down) via /etc/init.d/postgresql status

But I am not sure how we can do that with docker.

Thanks,
Daulat








RE: Monitor Postgres database status on Docker

2019-09-26 Thread Daulat Ram
Hi Fan Liu,

I am able to connect to the Postgres database created in the docker container 
via psql from the postgres10 client, but I am not able to connect through 
pg_isready.

psql -c 'select count(*) from pg_stat_activity' -h localhost -p 5432 -U 
postgres -W
Password for user postgres:
 count
-------
     7

Please give me suggestions.

Thanks,

From: Daulat Ram
Sent: Tuesday, September 24, 2019 3:35 PM
To: Fan Liu ; pgsql-general@lists.postgresql.org
Subject: RE: Monitor Postgres database status on Docker

Thanks, but how can we use it for a docker container?

Regards,
Daulat

From: Fan Liu 
Sent: Tuesday, September 24, 2019 3:02 PM
To: Daulat Ram; pgsql-general@lists.postgresql.org
Subject: RE: Monitor Postgres database status on Docker

Hi,

I am not from PostgreSQL team.
Just let you know that when we run PostgreSQL in Kubernetes, we use below 
command for liveness check.

pg_isready --host localhost -p $PG_PORT -U $PATRONI_SUPERUSER_USERNAME


BRs,
Fan Liu


From: Daulat Ram 
Sent: Tuesday, September 24, 2019 5:18 PM
To: pgsql-general@lists.postgresql.org
Subject: Monitor Postgres database status on Docker

Hi team,

We want to check the postgres database status on docker container just like we 
monitor Postgres (up  / down) via /etc/init.d/postgresql status

But I am not sure how we can do that with docker.

Thanks,
Daulat
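One way to do this from the host (the container name below is a placeholder) 
is to run pg_isready inside the container with docker exec, or to wire it into 
a Docker healthcheck:

```shell
# Ad-hoc check; exit code 0 = accepting connections, 1 = rejecting,
# 2 = no response, 3 = no attempt made
docker exec my_pg_container pg_isready -h localhost -p 5432 -U postgres

# Or let Docker track it continuously (Dockerfile):
#   HEALTHCHECK --interval=30s --timeout=5s CMD pg_isready -U postgres || exit 1
# then inspect with:
#   docker inspect --format='{{.State.Health.Status}}' my_pg_container
```

If pg_isready fails from outside the container while psql works, check that 
the client package actually ships pg_isready on your PATH and that the same 
host and port are being used.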








Postgres Point in time Recovery (PITR),

2019-10-17 Thread Daulat Ram
Hello All,
Can you please share some ideas and scenarios for how we can do PITR in case 
of a disaster?


Thanks,


RE: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Daulat Ram
Hi All,

Thanks for your suggestions.
One more question: how are backups useful if we have streaming replication? 
As I understand, we can promote the standby to primary in case of a disaster 
on the primary side. Do we still need to schedule backups if we have streaming 
replication?

Thanks

From: Avinash Kumar 
Sent: Friday, October 18, 2019 5:28 PM
To: David Steele 
Cc: Luca Ferrari ; Andreas Joseph Krogh 
; Daulat Ram ; 
pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),

Hi Daulat,

PITR entirely depends on what type of backups you choose.
Sometimes, to reduce the amount of downtime involved in restoring and 
recovering a backup, you may also use an additional delayed standby.
You could use the PG built-in feature to delay replication and fast-forward it 
to the safest point to achieve PITR, but this requires an additional standby.
https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/

If you have several TBs of database, pgBackRest is of course a way to go for 
backups (there are few more open source solutions), but also consider the 
amount of time it takes for recovery. Keeping all of this in mind, your 
approach to PITR changes.

So I would ask you this question: what backup tool do you use, and what is 
your backup strategy? Are you taking a physical backup and performing 
continuous archiving of WALs? The answer to your question entirely depends on 
this. :)

Regards,
Avinash Vallarapu.



On Fri, Oct 18, 2019 at 5:17 PM David Steele  wrote:
On 10/18/19 11:29 AM, Luca Ferrari wrote:
> On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> wrote:
>> We use barman (https://www.pgbarman.org/) for continuous streaming backup 
>> and I had to restore from it once, and it went like this:
>
> Just for the records, here's an example of restore with pgbackrest:
>
> % sudo -u postgres pgbackrest --stanza=miguel \
> --log-level-console=info --delta restore
> ...
> INFO: restore backup set 20190916-125652F
> INFO: remove invalid files/paths/links from /postgres/pgdata/11
> INFO: cleanup removed 148 files, 3 paths
> ...
> INFO: write /postgres/pgdata/11/recovery.conf
> INFO: restore global/pg_control (performed last
>  to ensure aborted restores cannot be started)
> INFO: restore command end: completed successfully (5113ms)

pgBackRest also has a tutorial on PITR:
https://pgbackrest.org/user-guide.html#pitr

--
-David
da...@pgmasters.net





Barman

2019-10-31 Thread Daulat Ram
Hi All,

We have configured Postgres 11.2 in streaming replication (primary & standby) 
on docker, and I am looking to set up Postgres backups using barman. As I 
understand, barman offers a few backup options:

RSYNC backup
Incremental backups
Streaming backup with continuous WAL streaming
Centralized and catalogued backups

Which is the best backup option in barman for keeping the database safe in 
case of a disaster? I feel incremental backups are the most useful for 
performing PITR, but I would like to hear the experts' suggestions.


 Thanks,



RE: Barman

2019-11-01 Thread Daulat Ram
Thanks, Tomas, for your inputs. Suppose we have a multi-TB database serving 
OLTP applications; what would be a suitable backup strategy?


-Original Message-
From: Tomas Vondra  
Sent: Friday, November 1, 2019 12:27 AM
To: Daulat Ram 
Cc: pgsql-general@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: Barman

On Thu, Oct 31, 2019 at 05:29:34PM +0000, Daulat Ram wrote:
>Hi All,
>
>We have configured postgres 11.2 in streaming replication (primary &
>Standby)  on docker and I am looking to initiate the Postgres backup 
>using barman. As I know there are few options for taking backup using 
>barman.
>
>RSYNC backup
>Incremental Backups
>Streaming Backup with continuous WAL streaming Centralized and 
>Catalogued Backups
>
>Which is the best option for backup using barman? So that we can keep 
>the database safe in case of disaster? I feel the Incremental Backups 
>are most useful to perform the PITR but I want to know the experts 
>suggestions.
>

You're mixing a number of topics, here. Firstly, all backups done by barman are 
centralized and catalogued, that's pretty much one of the main purposes of 
barman.

When it comes to backup methods, there are two basic methods. rsync and 
postgres (which means pg_basebackup). This is about creating the initial base 
backup. Both methods then can replicate WAL by either streaming or 
archive_command.

So first you need to decide whether to use rsync and pg_basebackup, where rsync 
allows advanced features like incremental backup, parallel backup and 
deduplication.

Then you need to decide whether to use archive_command or streaming (i.e. 
pg_receivexlog).

The "right" backup method very much depends on the size of your database, 
activity, and so on. By default you should probably go with the default option, 
described as "scenario 1" in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
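As a concrete illustration of the "scenario 1" Tomas describes (host and 
stanza names are placeholders; option names per the barman documentation):

```ini
; /etc/barman.d/pg.conf on the barman host
[pg]
description = "Primary, pg_basebackup + WAL streaming"
conninfo = host=pg1 user=barman dbname=postgres
streaming_conninfo = host=pg1 user=streaming_barman
backup_method = postgres        ; i.e. pg_basebackup
streaming_archiver = on         ; WAL via pg_receivewal/pg_receivexlog
slot_name = barman              ; replication slot on the primary
```

With this in place, `barman check pg` should report all OK before the first 
`barman backup pg` is attempted.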




RE: Postgres Point in time Recovery (PITR),

2019-11-11 Thread Daulat Ram
Hello, 

I am trying to set up barman for my test environment with the following steps, 
but I am not able to start the backup. Please advise!

Postgres 11.2 streaming replication on docker container , Centos
Server A : pg1 (primary)
Server B : pg2 (Standby)

I am using pg2 as the Barman server. Barman 2.8 is installed on pg2.
I have also installed barman-cli-2.8-1.el6.noarch.rpm ("yum install") on pg1 
(the postgres server) for barman-wal-archive.

@pg1 

1. Connect to the server pgsql and log into the postgres account:
   sudo -i -u postgres
2. createuser --interactive -P barman
3. createuser barman
4. Edit postgresql.conf (and restart: sudo service postgresql restart):
   listen_addresses = '*'
   wal_level = replica
   archive_mode = on
   archive_command = 'barman-wal-archive pg2 pgsql %p'
5. Add to pg_hba.conf: host  all  all  pg2/32  trust


@pg2 (Barman Server)
1. Switch to user barman and generate the keys: ssh-keygen -t rsa
2. Copy the key to the postgres user account on pg1: ssh-copy-id postgres@pg1
3. Barman also requires access to the postgres account on the server pg2. Copy 
   the key into the directory of the postgres user and test the connection:

ssh-copy-id postgres@localhost
ssh postgres@localhost -C true

4. Once this is done, log in as the postgres user on pg1 and generate an SSH 
   key: ssh-keygen -t rsa

5. Copy the generated key to the list of authorized keys of the user barman on 
   pg2: ssh-copy-id barman@pg2

6. Test the connection to the server.

On the barman server, /etc/barman.d/pgsql.conf:

[test]
; Human readable description
description =  "Example of PostgreSQL Database (via SSH)"

ssh_command = ssh postgres@pg1
conninfo = host=pg1 user=barman dbname=postgres
backup_method = rsync
;reuse_backup = link
; Identify the standard behavior for backup operations: possible values are
; exclusive_backup (default), concurrent_backup
; concurrent_backup is the preferred method with PostgreSQL >= 9.6
backup_options = exclusive_backup

; Number of parallel workers to perform file copy during backup and recover
;parallel_jobs = 1
archiver = on
;archiver_batch_size = 50

barman@vipostgres-db-test:/etc/barman.d$ barman check pgsql
WARNING: No backup strategy set for server 'pgsql' (using default 
'exclusive_backup').
WARNING: The default backup strategy will change to 'concurrent_backup' in the 
future. Explicitly set 'backup_options' to silence this warning.
Server pgsql:
WAL archive: FAILED (please make sure WAL shipping is setup)
PostgreSQL: FAILED
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 0 backups, expected at least 
0)
ssh: FAILED (Connection failed using 'ssh postgres@vipostgres-cn-test 
-o BatchMode=yes -o StrictHostKeyChecking=no' return code 255)
not in recovery: OK
pg_receivexlog: FAILED
pg_receivexlog compatible: FAILED (PostgreSQL version: None, 
pg_receivexlog version: None)
receive-wal running: FAILED (See the Barman log file for more details)
archiver errors: OK

Note: If I try to log in manually to postgres from barman, it asks for a 
password:
 ssh postgres@pg1
Failed to add the host to the list of known hosts (/tmp/.ssh/known_hosts).
== Authorized Access Notice 
==
This computer system is the property of Exponential Interactive Inc. Activities 
are actively monitored
and unauthorized access or use of this computer system is prohibited.
==
Password:


Thanks,


-Original Message-
From: Andreas Kretschmer  
Sent: Friday, October 18, 2019 12:38 PM
To: pgsql-general@lists.postgresql.org; Daulat Ram 
; pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),

On 18 October 2019 07:59:21 CEST, Daulat Ram  wrote:
>Hello All,
>Can you please share some ideas and scenarios how we can do the PITR in 
>case of disaster.
>
>
>Thanks,


Consider Barman.


--
2ndQuadrant - The PostgreSQL Support Company
2019-11-11 02:05:02,204 [10832] barman.wal_archiver INFO: No xlog segments 
found from file archival for pgsql.
2019-11-11 02:05:02,204 [10833] barman.postgres WARNING: Error retrieving 
PostgreSQL status: could not connect to server: Connection refused
2019-11-11 02:05:02,205 [10832] barman.wal_archiver INFO: No xlog segments 
found from streaming for pgsql.
2019-11-11 02:05:02,205 [10833] barman.server ERROR: ArchiverFailure:failed 
opening the PostgreSQL streaming connection for server pgsql
2019-11-11 02:05:02,206 [10834] barman.wal_archiv

How to Change collate & ctype for an existing database?

2019-12-13 Thread Daulat Ram
Hi,

How can we change the collate & ctype from "C" to "C.UTF-8" on an existing 
production database?

SELECT datcollate FROM pg_database WHERE datname='wwkidbt';

 datcollate
------------
 C

postgres=# select version();
 version
--
PostgreSQL 9.5.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit



Please advise about the method below:

update pg_database set datcollate='C.UTF-8', datctype='C.UTF-8'  where 
datname='wwkidbt';

Is there any risk of data corruption if we do this via the update command?

Thanks,

Daulat
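Directly updating pg_database only changes the catalog label; it does not 
rebuild anything, so any index on text columns (whose sort order depends on 
the collation) can silently become corrupt. The safe route is to recreate the 
database with the desired locale from template0 and reload the data. A sketch 
(database name from this thread; the "_new" name is a placeholder):

```sql
CREATE DATABASE wwkidbt_new
    TEMPLATE   template0
    LC_COLLATE 'C.UTF-8'
    LC_CTYPE   'C.UTF-8'
    ENCODING   'UTF8';
-- then reload:  pg_dump wwkidbt | psql wwkidbt_new
-- verify the data, drop the old database, and rename the new one into place
```

Note that 'C.UTF-8' must exist as an OS locale on the server for CREATE 
DATABASE to accept it.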









Access privileges

2019-12-16 Thread Daulat Ram
Hello,
Can you please explain "=Tc/postgres + postgres=CTc/postgres + 
confluence=CTc/postgres"? I want to give the same permissions to a different 
user on a new database.


                              List of databases
    Name    |  Owner   | Encoding | Collate | Ctype |    Access privileges
------------+----------+----------+---------+-------+--------------------------
 confluence | postgres | UTF8     | C       | C     | =Tc/postgres            +
            |          |          |         |       | postgres=CTc/postgres   +
            |          |          |         |       | confluence=CTc/postgres

Thanks,
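Reading the ACL string: `=Tc/postgres` means PUBLIC has Temporary (T) and 
connect (c) on the database, granted by postgres; `postgres=CTc/postgres` and 
`confluence=CTc/postgres` mean those two roles additionally have Create (C). 
To reproduce the same shape for a different user on a new database (the role 
and database names below are placeholders):

```sql
-- start from a clean slate, then grant the same database-level privileges
REVOKE ALL ON DATABASE newdb FROM PUBLIC;
GRANT TEMPORARY, CONNECT ON DATABASE newdb TO PUBLIC;           -- =Tc/owner
GRANT CREATE, TEMPORARY, CONNECT ON DATABASE newdb TO appuser;  -- appuser=CTc/owner
```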


pg_basebackup

2019-12-22 Thread Daulat Ram
Hello,

I am taking a pg_basebackup of the primary db (docker container env.) using 
the below command:

pg_basebackup -x -h vmzti -U replication -D 
/etc/postgresql/9.5/main/pg_basebkp/basekp1224 -Ft -z -P
Password:
WARNING:  skipping special file "./postgresql.conf"
WARNING:  skipping special file "./postgresql.conf"
4048512/4048512 kB (100%), 1/1 tablespace



  *   We are getting the above warning message.
  *   Also, the backup completed with only a single archive (base.tar.gz); 
there is no pg_wal.tar.gz file.
  *   If we decompress base.tar.gz, I do not see the postmaster.opts file.


Note: We have different paths for the conf files:


data_directory = '/var/lib/postgresql/9.5/main' # use data in another 
directory # (change requires restart)
hba_file = '/etc/postgresql/9.5/main/pg_hba.conf'   # host-based 
authentication file # (change requires restart)
ident_file = '/etc/postgresql/9.5/main/pg_ident.conf'   # ident configuration 
file # (change requires restart)

postgres=# SHOW config_file;
 config_file
--
/var/lib/postgresql/9.5/main/postgresql.conf
(1 row)

postgres=# show hba_file;
   hba_file
--
/etc/postgresql/9.5/main/pg_hba.conf
(1 row)

postgres=#

More details:

postgres@4ed92bac84af:/var/lib/postgresql/9.5/main$ ls -ltr
total 128
-rw--- 1 postgres postgres   208 Sep 13  2017 backup_label.old
drwx-- 8 postgres postgres  4096 Sep 13  2017 base
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_commit_ts
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_dynshmem
drwx-- 4 postgres postgres  4096 Sep 13  2017 pg_multixact
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_replslot
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_serial
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_snapshots
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_stat
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_stat_tmp
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_tblspc
drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_twophase
-rw--- 1 postgres postgres 4 Sep 13  2017 PG_VERSION
-rw--- 1 postgres postgres88 Sep 13  2017 postgresql.auto.conf
-rwxr-xr-x 1 postgres postgres 22116 Sep 13  2017 postgresql.conf.orig
drwx-- 2 postgres postgres 12288 Sep 13  2017 pg_log
lrwxrwxrwx 1 postgres postgres38 Sep 13  2017 recovery.conf -> 
/etc/postgresql/9.5/main/recovery.conf
lrwxrwxrwx 1 postgres postgres40 Sep 13  2017 postgresql.conf -> 
/etc/postgresql/9.5/main/postgresql.conf
-rw--- 1 postgres postgres90 May 21  2019 postmaster.pid
drwx-- 2 postgres postgres  4096 May 21  2019 pg_notify
-rw--- 1 postgres postgres46 May 21  2019 postmaster.opts
drwx-- 2 postgres postgres  8192 Dec 11 08:10 global
drwx-- 2 postgres postgres  4096 Dec 17 20:12 pg_clog
drwx-- 2 postgres postgres  4096 Dec 17 20:16 pg_subtrans
drwxrwxrwx 2 postgres postgres  4096 Dec 18 08:31 backup
drwx-- 3 postgres postgres  4096 Dec 18 15:56 pg_xlog
drwx-- 4 postgres postgres  4096 Dec 18 16:31 pg_logical
postgres@4ed92bac84af:/var/lib/postgresql/9.5/main$

Please advise.

Thanks,
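For context: with `-x` (WAL fetch mode), pg_basebackup appends the needed WAL 
inside base.tar itself, which is why no separate WAL tarball appears. A 
separate pg_wal.tar.gz is only produced when WAL is streamed together with tar 
format, which requires a newer server; on 9.5, streaming WAL (`-X stream`) 
works only with plain format. A hedged sketch (paths are placeholders):

```shell
# 9.5: WAL is embedded in base.tar, so no separate tarball is expected
pg_basebackup -h vmzti -U replication -D /path/to/backupdir -Ft -z -P -x

# PostgreSQL 10+: tar format plus streamed WAL yields base.tar.gz
# and pg_wal.tar.gz
pg_basebackup -h vmzti -U replication -D /path/to/backupdir -Ft -z -P -X stream
```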




Re: pg_basebackup

2019-12-22 Thread Daulat Ram
Thanks, Adrian. What about the postmaster.opts file? It was also skipped in 
the backup.

We have a single cluster running on the VM.

Thanks.
On 22-Dec-2019 11:19 PM, Adrian Klaver  wrote:
On 12/22/19 1:56 AM, Daulat Ram wrote:
> Hello,
>
> I am taking pg_basebackup of primary db (docker container env.) using
> the below command :
>
> pg_basebackup -x -h vmzti -U replication -D
> /etc/postgresql/9.5/main/pg_basebkp/basekp1224 -Ft -z -P
>
> Password:
>
> WARNING:  skipping special file "./postgresql.conf"
>
> WARNING:  skipping special file "./postgresql.conf"
>
> 4048512/4048512 kB (100%), 1/1 tablespace


https://www.postgresql.org/docs/9.5/app-pgbasebackup.html

"The backup will include all files in the data directory and
tablespaces, including the configuration files and any additional files
placed in the directory by third parties. But only regular files and
directories are copied. Symbolic links (other than those used for
tablespaces) and special device files are skipped. (See Section 50.3 for
the precise details.)"

postgresql.conf is a symlink in the data directory so it is being skipped.

>
>   * Getting the above warning message.
>   * Also the backup completed with only a single archive (base.tar.gz);
> there is no pg_wal.tar.gz file.
>   * If we decompress base.tar.gz, I do not see the postmaster.opts file.


Do you have more then one instance of Postgres running?

If so are you certain which one pg_basebackup is being pointed at?

>
> Note: We have the different path for the conf files :
>
> data_directory = '/var/lib/postgresql/9.5/main' # use data in
> another directory # (change requires restart)
>
> hba_file = '/etc/postgresql/9.5/main/pg_hba.conf'   # host-based
> authentication file # (change requires restart)
>
> ident_file = '/etc/postgresql/9.5/main/pg_ident.conf'   # ident
> configuration file # (change requires restart)
>
> postgres=# SHOW config_file;
>
>   config_file
>
> --
>
> /var/lib/postgresql/9.5/main/postgresql.conf
>
> (1 row)
>
> postgres=# show hba_file;
>
> hba_file
>
> --
>
> /etc/postgresql/9.5/main/pg_hba.conf
>
> (1 row)
>
> postgres=#
>
> **
>
> *More details:*
>
> postgres@4ed92bac84af:/var/lib/postgresql/9.5/main$ ls -ltr
>
> total 128
>
> -rw--- 1 postgres postgres   208 Sep 13  2017 backup_label.old
>
> drwx-- 8 postgres postgres  4096 Sep 13  2017 base
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_commit_ts
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_dynshmem
>
> drwx-- 4 postgres postgres  4096 Sep 13  2017 pg_multixact
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_replslot
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_serial
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_snapshots
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_stat
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_stat_tmp
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_tblspc
>
> drwx-- 2 postgres postgres  4096 Sep 13  2017 pg_twophase
>
> -rw--- 1 postgres postgres 4 Sep 13  2017 PG_VERSION
>
> -rw--- 1 postgres postgres88 Sep 13  2017 postgresql.auto.conf
>
> -rwxr-xr-x 1 postgres postgres 22116 Sep 13  2017 postgresql.conf.orig
>
> drwx-- 2 postgres postgres 12288 Sep 13  2017 pg_log
>
> lrwxrwxrwx 1 postgres postgres38 Sep 13  2017 recovery.conf ->
> /etc/postgresql/9.5/main/recovery.conf
>
> lrwxrwxrwx 1 postgres postgres40 Sep 13  2017 postgresql.conf ->
> /etc/postgresql/9.5/main/postgresql.conf
>
> -rw--- 1 postgres postgres90 May 21  2019 postmaster.pid
>
> drwx-- 2 postgres postgres  4096 May 21  2019 pg_notify
>
> -rw--- 1 postgres postgres46 May 21  2019 postmaster.opts
>
> drwx-- 2 postgres postgres  8192 Dec 11 08:10 global
>
> drwx-- 2 postgres postgres  4096 Dec 17 20:12 pg_clog
>
> drwx-- 2 postgres postgres  4096 Dec 17 20:16 pg_subtrans
>
> drwxrwxrwx 2 postgres postgres  4096 Dec 18 08:31 backup
>
> drwx-- 3 postgres postgres  4096 Dec 18 15:56 pg_xlog
>
> drwx-- 4 postgres postgres  4096 Dec 18 16:31 pg_logical
>
> postgres@4ed92bac84af:/var/lib/postgresql/9.5/main$
>
>
> Please advise.
>
> Thanks,
>


--
Adrian Klaver
adrian.kla...@aklaver.com


Postgres streaming replication

2020-01-07 Thread Daulat Ram
Hello team,

I have to implement streaming replication for our prod environment.
Can you please share the list of parameters to set up PostgreSQL 11 streaming 
replication with continuous archiving, and give clarity on the below?


  *   Do we need to enable the archive_mode and archive_command parameters on 
the standby side when implementing PostgreSQL 11 streaming replication with 
continuous archiving?



  *   What are the benefits of implementing streaming replication with 
continuous archiving over plain streaming replication?


Thanks,
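A minimal parameter sketch for PG 11 (values illustrative; the archive 
destination and connection strings are placeholders):

```ini
# primary: postgresql.conf
wal_level = replica
max_wal_senders = 5
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

# standby: recovery.conf (merged into postgresql.conf from PG 12 onward)
standby_mode = on
primary_conninfo = 'host=primary-host user=replicator'
restore_command = 'cp /archive/%f %p'
```

On the first question: archive_mode/archive_command are not required on the 
standby for plain streaming; they matter there only if you want the standby to 
archive WAL itself (archive_mode = always).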





Setting up an environment of EDB Advance server

2020-01-07 Thread Daulat Ram
Hi team,

We would need your help in setting up an environment of EDB Advanced Server 11 
on a Dev CentOS 7 VM.


  1.  We need to create three databases with separate tablespaces:

Test1
Test2
Test3

What would be a good strategy for the setup with regard to the tablespaces?
Can we create a separate cluster for each database?

Thanks,



Can we have multiple tablespaces within a database.

2020-02-20 Thread Daulat Ram
Hi Team,

Can we have multiple tablespaces within a database in postgres?

Can we have a table in a different tablespace, same as Oracle?

Thanks,


RE: Can we have multiple tablespaces within a database.

2020-02-20 Thread Daulat Ram
Hi Amul,
Please share examples of how we can create a number of tablespaces for a 
single database and how we can use them.
As I know, we can create a database on a tablespace:

  1.  CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION 
'/mnt/pgdatatest/test/pgdata/conn_tbs';
  2.  CREATE DATABASE test TABLESPACE conn_tbs;



Can we have multiple tablespaces within a database in postgres?

Yes.



From: amul sul 
Sent: Friday, February 21, 2020 11:16 AM
To: Daulat Ram 
Cc: pgsql-general@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: Can we have multiple tablespaces within a database.



On Fri, Feb 21, 2020 at 11:04 AM Daulat Ram  wrote:
Hi Team,

Can we have multiple tablespaces within a database in postgres?

Yes.

Can we have a table on different tablespace same as Oracle?
Yes -- specify TABLESPACE option while creating that table.

Regards,
Amul
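To make the answers above concrete (paths and object names are illustrative):

```sql
-- One database, several tablespaces
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE bulk_hdd LOCATION '/mnt/hdd/pgdata';

-- A table in one tablespace, its index in another
CREATE TABLE events (id bigint, payload text) TABLESPACE bulk_hdd;
CREATE INDEX events_id_idx ON events (id) TABLESPACE fast_ssd;

-- Moving an existing table (rewrites the table and takes a lock)
ALTER TABLE events SET TABLESPACE fast_ssd;
```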


RE: Can we have multiple tablespaces within a database.

2020-02-20 Thread Daulat Ram
It would be great if you could share any doc where it's mentioned that we 
can't use multiple tablespaces for a single database. I have to assist my Dev 
team regarding tablespaces.

Also, what are the differences between Oracle and Postgres tablespaces?

Thanks,


From: amul sul 
Sent: Friday, February 21, 2020 11:48 AM
To: Daulat Ram 
Cc: pgsql-general@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: Can we have multiple tablespaces within a database.



On Fri, Feb 21, 2020 at 11:31 AM Daulat Ram  wrote:
Hi Amul,
Please share examples of how we can create a number of tablespaces for a 
single database and how we can use them.
As I know, we can create a database on a tablespace:

  1.  CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION 
'/mnt/pgdatatest/test/pgdata/conn_tbs';
  2.  CREATE DATABASE test TABLESPACE conn_tbs;
Maybe I have misunderstood your question; there is no option to specify more
than one tablespace for the database, but you can place the objects of that
database in different tablespaces (if the option is available for that object).
E.g. you can place a table in the conn_tbs tablespace.

If no tablespace is specified, then by default the object will be created
in conn_tbs.

Regards,
Amul





RE: Can we have multiple tablespaces within a database.

2020-02-20 Thread Daulat Ram
You mean we can have only a single default tablespace for a database, but the 
database objects can be created in different tablespaces?

Can you please share the Doc URL for your suggestions given in trail mail.

Please correct me.

-Original Message-
From: Christophe Pettus  
Sent: Friday, February 21, 2020 11:57 AM
To: Daulat Ram 
Cc: amul sul ; pgsql-general@lists.postgresql.org
Subject: Re: Can we have multiple tablespaces within a database.



> On Feb 20, 2020, at 22:23, Daulat Ram  wrote:
> 
> It would be great if you could share any doc where it's mentioned that we 
> can't use multiple tablespaces for a single database. I have to assist my 
> Dev team regarding tablespaces.

A single PostgreSQL database can have any number of tablespaces.  Each table 
has to be in one specific tablespace, although a table can be in one tablespace 
and its indexes in a different one.

If a PostgreSQL table is partitioned, each partition can be in a different 
tablespace.

Oracle "style" tends to involve a lot of tablespaces in a database; this is 
much less commonly done in PostgreSQL.  In general, you only need to create 
tablespace in a small number of circumstances:

(a) You need more space than the current database volume allows, and moving the 
database to a larger volume is inconvenient;
(b) You have multiple volumes with significantly different access 
characteristics (like an HDD array and some SSDs), and you want to distribute 
database objects to take advantage of that (for example, put commonly-used 
large indexes on the SSDs).

PostgreSQL tablespaces do increase the administrative overhead of the database, 
and shouldn't be created unless there is a compelling need for them.
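As a sketch of the mechanics described above (object names and the directory path are illustrative; the directory must already exist, be empty, and be owned by the postgres OS user):

```sql
-- A tablespace maps to a directory, typically on a separate volume:
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';

-- A table and its index can live in different tablespaces:
CREATE TABLE orders (id bigint PRIMARY KEY, payload text);
CREATE INDEX orders_payload_idx ON orders (payload) TABLESPACE fast_ssd;

-- Each partition of a partitioned table can use its own tablespace:
CREATE TABLE events (ts timestamptz NOT NULL) PARTITION BY RANGE (ts);
CREATE TABLE events_2020 PARTITION OF events
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01')
    TABLESPACE fast_ssd;
```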

--
-- Christophe Pettus
   x...@thebuild.com



RE: Can we have multiple tablespaces with in a database.

2020-02-20 Thread Daulat Ram

You mean we can have only a single default tablespace for a database, but the 
database objects can be created in different tablespaces?

From: amul sul 
Sent: Friday, February 21, 2020 11:48 AM
To: Daulat Ram 
Cc: pgsql-general@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: Can we have multiple tablespaces with in a database.



On Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <daulat@exponential.com> wrote:
Hi Amul ,
Please share the examples how we can create no. of tablespaces for a single 
database and how we can use them.
As far as I know, we can create a database on a tablespace:

  1.  CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION 
'/mnt/pgdatatest/test/pgdata/conn_tbs';
  2.  Create database test tablespace 'conn_tbs';
Maybe I have misunderstood your question; there is no option to specify more
than one tablespace for the database, but you can place the objects of that
database in different tablespaces (if the option is available for that object).
E.g. you can place a table in the conn_tbs tablespace.

If the option is not specified, then by default that object will be created
in conn_tbs.

Regards,
Amul





RE: Can we have multiple tablespaces with in a database.

2020-02-20 Thread Daulat Ram
What are the differences between Oracle and Postgres tablespace.

Can we assign a tablespace during Postgres schema creation? As far as I know, in 
Oracle we assign the tablespace during user/schema creation. 
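For context, a hedged sketch of the closest PostgreSQL analogue to Oracle's per-user default tablespace (role and tablespace names are illustrative; PostgreSQL schemas themselves are pure namespaces and carry no tablespace):

```sql
-- A role or a database can be given a default tablespace for new objects:
ALTER ROLE app_user SET default_tablespace = conn_tbs;
ALTER DATABASE conndb SET default_tablespace = conn_tbs;

-- It can also be set just for the current session:
SET default_tablespace = conn_tbs;
```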

-Original Message-
From: Christophe Pettus  
Sent: Friday, February 21, 2020 12:07 PM
To: Daulat Ram 
Cc: amul sul ; pgsql-general@lists.postgresql.org
Subject: Re: Can we have multiple tablespaces with in a database.



> On Feb 20, 2020, at 22:34, Daulat Ram  wrote:
> 
> You mean we can have only a single default tablespace for a database, but the 
> database objects can be created in different tablespaces?

Yes.

> Can you please share the Doc URL for your suggestions given in trail mail.

https://www.postgresql.org/docs/current/manage-ag-tablespaces.html

--
-- Christophe Pettus
   x...@thebuild.com





how to find a tablespace for the table?

2020-02-22 Thread Daulat Ram
Hi team,
how to find a tablespace for the table?
See my comments below:
I have created a database with default tablespace like below:

edb=# CREATE DATABASE conndb WITH TABLESPACE = conn_s_tables;

After that I have created a table

CREATE TABLE COMPANY_new(
   ID        INT PRIMARY KEY NOT NULL,
   NAME      TEXT     NOT NULL,
   AGE       INT      NOT NULL,
   ADDRESS   CHAR(50),
   SALARY    REAL,
   JOIN_DATE DATE
);


CREATE TABLE COMPANY_new(
   ID        INT PRIMARY KEY NOT NULL,
   NAME      TEXT     NOT NULL,
   AGE       INT      NOT NULL,
   ADDRESS   CHAR(50),
   SALARY    REAL,
   JOIN_DATE DATE
)
tablespace conn_s_tables ;

But I am unable to find the tablespace name for the table; the tablespace 
column is blank.

conndb=# select schemaname,tablename,tableowner,tablespace from pg_tables where 
tablename='company';
schemaname | tablename |  tableowner  | tablespace
+---+--+
conndb | company   | enterprisedb |
(1 row)

conndb=# select schemaname,tablename,tableowner,tablespace from pg_tables where 
tablename='company_new';
schemaname |  tablename  |  tableowner  | tablespace
+-+--+
conndb | company_new | enterprisedb |
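The blank column is expected: pg_tables shows NULL when a table sits in the database's default tablespace (pg_class.reltablespace = 0). A sketch of a query that resolves the effective tablespace in that case:

```sql
-- reltablespace = 0 means "the database's default tablespace",
-- which pg_tables renders as a blank/NULL value.
SELECT c.relname,
       COALESCE(t.spcname,
                (SELECT ts.spcname
                   FROM pg_database d
                   JOIN pg_tablespace ts ON ts.oid = d.dattablespace
                  WHERE d.datname = current_database())) AS effective_tablespace
  FROM pg_class c
  LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE c.relname IN ('company', 'company_new');
```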






Real application clustering in postgres.

2020-03-04 Thread Daulat Ram
Hi team,

Is there any possibility/options to setup a real application clustering in 
Postgres as in Oracle we have a  RAC feature.

What about multi-master replication in Postgres? Would you please suggest how 
it is useful and how we can set it up.

Thanks.



RE: Real application clustering in postgres.

2020-03-05 Thread Daulat Ram
Thanks for your inputs Laurenz Albe.

Would you please explain the single-master failover solution?

Suppose we have promoted the standby (replica) as master after a hardware issue 
on the master. 
If after a few hours we have recovered the hardware, how can we switch back to 
the old primary? 

As in Oracle we have the switchover method for Data Guard. How can we do this in Postgres?
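One common pattern, as a sketch rather than a full runbook: resynchronize the old primary as a standby of the new primary (pg_rewind avoids a full base backup, but requires wal_log_hints = on or data checksums), then perform a controlled switchover later. Paths and connection details here are illustrative:

```shell
# On the old primary, after its server has been stopped cleanly:
pg_rewind --target-pgdata=/var/lib/postgresql/data \
          --source-server='host=new-primary port=5432 user=postgres'

# Then configure it as a standby of the new primary and start it;
# a later controlled switchover can promote it back.
```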

Thanks,

-Original Message-
From: Laurenz Albe  
Sent: Thursday, March 5, 2020 5:37 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: Real application clustering in postgres.

On Thu, 2020-03-05 at 07:45 +, Daulat Ram wrote:
> Is there any possibility/options to setup a real application clustering in 
> Postgres as in Oracle we have a  RAC feature.

No, and as far as I know nobody feels interested in providing it.

RAC is a complicated architecture that doesn't do much good, so most people 
feel that it would be a waste of time and effort.

RAC is not really a scaling solution: because of the shared storage, you can 
only scale for more CPUs; I/O remains the bottleneck.

RAC is not really a high availability solution: because of the shared storage, 
it has a single point of failure.

Today, people use shared-nothing architectures for high availability, like 
Patroni.

> What about multi-master replication in Postgres? Would you please suggest how 
> it is useful and how we can set it up.

There is no support for that in core PostgreSQL.

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and an application 
that uses it has to be specifically designed for that.
Very often a single-master failover solution is a better solution.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com



core. files inside base directory

2020-03-26 Thread Daulat Ram
Hello, I have core.* files inside the base directory of my Postgres 10 setup. 
Would you please let me know the importance of these files, when they are 
generated, and whether we can delete them?
/base/
du -sh *
24M     base
8.1G    core.26166
1.5G    core.26258
8.1G    core.27849
12M     core.27951
4.3G    core.2951
8.1G    core.324
4.9G    core.3241
8.1G    core.3295
5.0G    core.3470
5.2G    core.3485
5.3G    core.3699
5.0G    core.3724
5.3G    core.375
5.0G    core.3935
5.3G    core.3960
1.2G    core.408
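core.PID files are core dumps written by the OS when a backend process crashes; PostgreSQL never reads them back, so they are only useful for debugging. A hedged sketch for inspecting one before removing it (the postgres binary path is illustrative):

```shell
# Get a backtrace from a core dump (requires gdb and, ideally,
# debug symbols for the postgres binary):
gdb /usr/lib/postgresql/10/bin/postgres base/core.26166 -batch -ex bt

# Once diagnosed (or if you decide not to investigate), the
# files are safe to delete:
rm base/core.26166
```

The crashes themselves are worth investigating, since this many large core files suggests repeated backend failures.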

Thanks,



Point in time recovery

2020-08-18 Thread Daulat Ram
Hello Team,

I want to know the best way to ensure/verify that a point-in-time recovery 
has completed successfully after the crash and the restore.
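As a sketch, two common checks after a point-in-time recovery, alongside reading the server log for the "recovery stopping ..." and "database system is ready to accept connections" messages and spot-checking known application data:

```sql
-- The server should have left recovery mode once the target was reached
-- (with the default recovery_target_action = 'promote'):
SELECT pg_is_in_recovery();          -- expected: f

-- While recovery is still running, its progress can be watched with:
SELECT pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();
```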

Thanks,



Backup and Restore (pg_dump & pg_restore)

2019-04-21 Thread Daulat Ram
Hello Team,

We are getting the below error while migrating a pg_dump from PostgreSQL 9.6 to 
PostgreSQL 11.2 via pg_restore in a docker environment.

90d4c9f363c8:~$ pg_restore -d kbcn "/var/lib/kbcn_backup19"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3; 2615 2200 SCHEMA public 
postgres
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" 
already exists
Command was: CREATE SCHEMA public;

Script used for pg_dump:
-

pg_dump -h 10.26.33.3 -p 5432 -U postgres -W -F c -v -f 
tmp/postgres/backup/backup10/kbcn_backup19 kbcn >& 
tmp/postgres/backup/backup10/kbcn_backup19.log; echo $? > 
tmp/postgres/backup/backup10/_`date +%Y-%m-%d.%H:%M:%S`



Please advise.
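The 'schema "public" already exists' message is usually harmless, because every newly created database already contains a public schema; the rest of the restore continues past it. A hedged sketch of two common approaches:

```shell
# Drop objects before recreating them; --if-exists suppresses
# errors for objects that do not exist yet:
pg_restore --clean --if-exists -d kbcn /var/lib/kbcn_backup19

# Or run the plain restore and simply verify that this single
# benign TOC error is the only one reported:
pg_restore -d kbcn /var/lib/kbcn_backup19
```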

Regards,
Daulat


Streaming Replication

2019-04-22 Thread Daulat Ram
Hello Team,

I am setting up streaming replication using two different hosts, but there is no 
output from "select * from pg_stat_replication;". I have set the parameters on 
both sides.

Host names are :

(10.29.15.244)
(10.29.15.25)


postgres=# select * from pg_stat_replication;
pid | usesysid | usename | application_name | client_addr | client_hostname | 
client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | 
flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | 
sync_state
-+--+-+--+-+-+-+---+--+---+--+---+---++---+---++---+
(0 rows)



bash-4.4$ ps aux | grep 'postgres.*rec'
  121 postgres  0:00 grep postgres.*rec
bash-4.4$


Parameters on primary are:
-
wal_level = hot_standby
max_wal_senders = 50
wal_keep_segments = 16
max_replication_slots = 16
vacuum_defer_cleanup_age = 4


Parameters on Standby are:
--
max_standby_archive_delay = 30s
max_standby_streaming_delay = 30s
archive_mode = on
archive_timeout = 1800
hot_standby = on

Recovery.conf @ standby:

standby_mode = on
primary_conninfo = 'host=primary host port=5432 user=replication 
password=replication'
trigger_file = '/tmp/touch_me_to_promote_to_me_master'
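Two quick checks, as a sketch: pg_stat_replication only shows rows on the primary while a standby's WAL receiver is actually connected, so the standby side is worth inspecting directly (and primary_conninfo must contain the real primary host name or IP):

```sql
-- On the standby: is a WAL receiver running and connected?
-- (sender_host/sender_port exist in PostgreSQL 11; older versions
--  expose the connection string in the conninfo column instead.)
SELECT status, sender_host, sender_port FROM pg_stat_wal_receiver;

-- On the standby: confirm it is actually in recovery:
SELECT pg_is_in_recovery();   -- expected: t
```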


Hba.conf @primary

host        all           all                         0.0.0.0/0        md5
host        replication   replication                 samenet          md5
host        replication   postgres,kbcn,replication   10.29.0.0/16     md5
host        all           kbcn,nagios,postgres        samenet          md5
host        all           postgres                    0.0.0.0/0        md5
host        all           kbcn,nagios,postgres        10.29.0.0/16     md5
hostnossl   replication   replication,postgres        172.17.0.0/16    md5
hostnossl   replication   replication,postgres        10.29.0.0/16     md5
hostnossl   replication   replication,postgres        10.29.15.25/32   md5

hba.conf @standby


host        all           all                         0.0.0.0/0        md5
host        replication   replication                 samenet          md5
host        replication   postgres,kbcn,replication   10.29.0.0/16     md5
host        all           kbcn,nagios,postgres        samenet          md5
host        all           postgres                    0.0.0.0/0        md5
host        all           kbcn,nagios,postgres        10.29.0.0/16     md5
hostnossl   replication   replication,postgres        172.17.0.0/16    md5
hostnossl   replication   replication,postgres        10.29.0.0/16     md5


Please suggest what I have missed.

Regards,
Daulat




How to execute .sql file inside a postgres schema

2019-04-29 Thread Daulat Ram
Hello team,

I have a database named "kbdb" that has a schema "kb", and I want to execute 
the test.sql file inside this schema.

Please help how we can do that.
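A schema is selected via the search_path setting rather than by "connecting to" it. Two hedged sketches (file and schema names taken from the question):

```shell
# Option 1: set search_path for the whole psql session via PGOPTIONS:
PGOPTIONS="-c search_path=kb" psql -d kbdb -f test.sql

# Option 2: put the setting at the top of the script itself:
#   SET search_path TO kb;
#   ...rest of test.sql...
psql -d kbdb -f test.sql
```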

Regards,
Daulat


ERROR: operator does not exist: timestamp without time zone + integer

2019-04-30 Thread Daulat Ram
Hi team,

We are getting an ERROR: "operator does not exist: timestamp without time zone 
+ integer" while creating a table in Postgres. The same script works fine in 
Oracle; I know there are some changes in Postgres, but I am unable to identify 
them. Please suggest how we can create the table successfully in Postgres.

kbdb=# CREATE TABLE motif_site (
kbdb(# topic_match_conf_threshold bigint DEFAULT 3,
kbdb(# retention_period bigint DEFAULT 3,
kbdb(# site_mode_date timestamp NOT NULL DEFAULT date_trunc('day', 
LOCALTIMESTAMP)+7,
kbdb(# reload_submission_date timestamp,
kbdb(# socket_time_out bigint DEFAULT 2500,
kbdb(# reload_date timestamp,
kbdb(# marked_content_tag varchar(1024) DEFAULT 'pagecontent konabody 
intellitxt echotopic contentpaneopen postbody realtext newscontent content 
contentbody posttext##post_message_.*',
kbdb(# crawl_batch_size_lower_limit numeric(38) NOT NULL DEFAULT 20,
kbdb(# site_name varchar(512) NOT NULL,
kbdb(# crawl_batch_size_upper_limit numeric(38) NOT NULL DEFAULT 40,
kbdb(# mtg numeric(38) DEFAULT 2000,
kbdb(# enabled numeric(38) NOT NULL DEFAULT 0,
kbdb(# root_url varchar(1024),
kbdb(# blocked_content_tag varchar(1024) DEFAULT 'nointellitxt 
noechotopic',
kbdb(# match_params varchar(1024),
kbdb(# tf_data_source varchar(256) DEFAULT 'Web',
kbdb(# site_id numeric(38) NOT NULL
kbdb(# ) ;
ERROR:  operator does not exist: timestamp without time zone + integer
HINT:  No operator matches the given name and argument types. You might need to 
add explicit type casts.
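The cause: in PostgreSQL an integer can be added to a date but not to a timestamp, and date_trunc() returns a timestamp, so the "+7" in the site_mode_date default fails. A sketch of two equivalent fixes:

```sql
-- timestamp + interval is valid:
SELECT date_trunc('day', LOCALTIMESTAMP) + interval '7 days';

-- date + integer is also valid (cast the timestamp to date first):
SELECT date_trunc('day', LOCALTIMESTAMP)::date + 7;

-- So the column default can be written as, e.g.:
--   site_mode_date timestamp NOT NULL
--       DEFAULT date_trunc('day', LOCALTIMESTAMP) + interval '7 days',
```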

Regards,
Daulat


CREATE EXTENSION to load the language into the database

2019-05-03 Thread Daulat Ram
Hello team,

We are getting the below issue while creating a function in Postgres 11.2


nagios=# create or replace function diskf (filesystem text, warn int, err int) 
returns text as $BODY$
nagios$# use warnings;
nagios$# use strict;
nagios$# my $fs = $_[0];
nagios$# my $w = $_[1];
nagios$# my $e = $_[2];
nagios$# my $r = "WARNING";
nagios$# my $output = `df -kP $fs`;
nagios$# $output =~ /.*\s+(\d+)%.*/;
nagios$# $output = $1;
nagios$# if ($output > $w)
nagios$# { $r = "ERROR" if $output > $e;}
nagios$# else { $r = "OK";}
nagios$# return  "$r $output";
nagios$# $BODY$ language plperlu;

ERROR:  language "plperlu" does not exist
HINT:  Use CREATE EXTENSION to load the language into the database.


nagios=# SELECT * FROM pg_language;
 lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
----------+----------+---------+--------------+---------------+-----------+--------------+--------
 internal |       10 | f       | f            |             0 |         0 |         2246 |
 c        |       10 | f       | f            |             0 |         0 |         2247 |
 sql      |       10 | f       | t            |             0 |         0 |         2248 |
 plpgsql  |       10 | t       | t            |         13075 |     13076 |        13077 |
(4 rows)

To solve this issue I tried creating the extension/language, but I got the below errors.

nagios=# CREATE EXTENSION plperl;
ERROR:  could not open extension control file 
"/usr/local/share/postgresql/extension/plperl.control": No such file or 
directory
nagios=# CREATE LANGUAGE plperlu;
ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": Error 
loading shared library libperl.so.5.20: No such file or directory (needed by 
/usr/local/lib/postgresql/plperl.so)
nagios=# CREATE LANGUAGE plperlu;
ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": Error 
loading shared library libperl.so.5.20: No such file or directory (needed by 
/usr/local/lib/postgresql/plperl.so)
nagios=#
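plperl/plperlu ship as a separate server-side package that must be installed on the database host before CREATE EXTENSION can find the control file and shared library. A hedged sketch; package names are for common distros and assume PostgreSQL 11:

```shell
# Debian/Ubuntu:
apt-get install postgresql-plperl-11

# RHEL/CentOS (PGDG repository):
yum install postgresql11-plperl

# Then, inside the target database:
#   CREATE EXTENSION plperl;     -- trusted variant
#   CREATE LANGUAGE plperlu;     -- untrusted, superuser only
```

Note that minimal Alpine-based docker images (as used here) may not ship plperl at all; in that case the image has to be rebuilt from a variant that includes Perl support (built with --with-perl).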


Regards,
Dault


RE: CREATE EXTENSION to load the language into the database

2019-05-03 Thread Daulat Ram
Hi Adrian,

Please find the requested details.

What OS(and version) are you using?
Ans:
bash-4.4$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/";
BUG_REPORT_URL="https://bugs.alpinelinux.org/";
bash-4.4$


bash-4.4$ uname -a
Linux psql_primary_kbcn 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 
UTC 2017 x86_64 Linux
bash-4.4$


How did you install Postgres?

Ans:

We did the installation via a customized docker image provided by our DevOps team.

Regards,
Daulat


-Original Message-
From: Adrian Klaver  
Sent: Friday, May 3, 2019 10:21 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: CREATE EXTENSION to load the language into the database

On 5/3/19 8:56 AM, Daulat Ram wrote:
> Hello team,
> 
> We are getting below issue while creating a function in Postgres 11.2
> 
> nagios=# create or replace function diskf (filesystem text, warn int, 
> err int) returns text as $BODY$
> 
> nagios$# use warnings;
> 
> nagios$# use strict;
> 
> nagios$# my $fs = $_[0];
> 
> nagios$# my $w = $_[1];
> 
> nagios$# my $e = $_[2];
> 
> nagios$# my $r = "WARNING";
> 
> nagios$# my $output = `df -kP $fs`;
> 
> nagios$# $output =~ /.*\s+(\d+)%.*/;
> 
> nagios$# $output = $1;
> 
> nagios$# if ($output > $w)
> 
> nagios$# { $r = "ERROR" if $output > $e;}
> 
> nagios$# else { $r = "OK";}
> 
> nagios$# return  "$r $output";
> 
> nagios$# $BODY$ language plperlu;
> 
> ERROR:  language "plperlu" does not exist
> 
> HINT:  Use CREATE EXTENSION to load the language into the database.
> 
> nagios=# SELECT * FROM pg_language;
> 
> lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | 
> laninline
> | lanvalidator | lanacl
> 
> --+--+-+--+---+---+--+
> 
> internal |   10 | f   | f    | 0 | 
> 0
> | 2246 |
> 
> c    |   10 | f   | f    | 0 | 
> 0
> | 2247 |
> 
> sql  |   10 | f   | t    | 0 | 
> 0
> | 2248 |
> 
> plpgsql  |   10 | t   | t    | 13075 | 
> 13076
> |    13077 |
> 
> (4 rows)
> 
> To solve this issue , I am getting the below warnings if creating 
> extensions.
> 
> nagios=# CREATE EXTENSION plperl;
> 
> ERROR:  could not open extension control file
> "/usr/local/share/postgresql/extension/plperl.control": No such file 
> or directory
> 
> nagios=# CREATE LANGUAGE plperlu;
> 
> ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
> Error loading shared library libperl.so.5.20: No such file or 
> directory (needed by /usr/local/lib/postgresql/plperl.so)
> 
> nagios=# CREATE LANGUAGE plperlu;
> 
> ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
> Error loading shared library libperl.so.5.20: No such file or 
> directory (needed by /usr/local/lib/postgresql/plperl.so)

The plperl(u) extension has not been added to the Postgres installation. 
You need to do that. To help you with that we need to know:

What OS(and version) are you using?

How did you install Postgres?

> 
> nagios=#
> 
> Regards,
> 
> Dault
> 


--
Adrian Klaver
adrian.kla...@aklaver.com




bigint out of range

2019-05-16 Thread Daulat Ram
Hello team,
We are getting "ERROR: bigint out of range". Please help with this.



ERROR:  bigint out of range
kbdb=# INSERT INTO kb_dar_ran_url_check 
(url_hash,stat_date,topic_id,site_id,url,status,user_comments,review_comments) 
VALUES 
(72894677781133866997924561390146294513,E'19-04-2019',32793,1035,E'https://bikez.com/search/index.php',1,NULL,NULL);

ERROR:  bigint out of range


Table structure is :

Table "kb_test.kb_dar_ran_url_check"
     Column      |          Type           | Collation | Nullable | Default
-----------------+-------------------------+-----------+----------+---------
 status          | bigint                  |           |          |
 url_hash        | bigint                  |           | not null |
 url             | character varying(4000) |           | not null |
 review_comments | character varying(4000) |           |          |
 user_comments   | character varying(4000) |           |          |
 stat_date       | character varying(128)  |           | not null |
 topic_id        | numeric(38,0)           |           | not null |
 site_id         | numeric(38,0)           |           | not null |
Partition key: LIST (stat_date)
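The cause: bigint is a signed 64-bit integer with a maximum of 9223372036854775807 (about 9.2 * 10^18), while the inserted url_hash is a 38-digit value far beyond that. If hashes of this width are required, the column type needs to match the data; a sketch:

```sql
-- Widen the column to hold up to 38 digits, like topic_id/site_id:
ALTER TABLE kb_test.kb_dar_ran_url_check
    ALTER COLUMN url_hash TYPE numeric(38,0);
```

Alternatively, the application could generate a hash that fits into 64 bits and keep the more compact bigint column.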


Thanks,
Daulat


FATAL: SMgrRelation hashtable corrupted

2019-05-16 Thread Daulat Ram
Hello team

I need your help on this issue.

My Postgres 11.2 container is not starting due to the below error messages. It is 
in a streaming replication environment.

2019-05-17 06:41:08.989 UTC [1] LOG:  listening on Unix socket 
"/var/run/postgresql/.s.PGSQL.5432"
2019-05-17 06:41:09.093 UTC [11] LOG:  database system was interrupted while in 
recovery at 2019-05-17 06:40:24 UTC
2019-05-17 06:41:09.093 UTC [11] HINT:  This probably means that some data is 
corrupted and you will have to use the last backup for recovery.
2019-05-17 06:41:11.260 UTC [12] FATAL:  the database system is starting up
2019-05-17 06:41:11.673 UTC [13] FATAL:  the database system is starting up
2019-05-17 06:41:12.209 UTC [14] FATAL:  the database system is starting up
2019-05-17 06:41:12.427 UTC [15] FATAL:  the database system is starting up
2019-05-17 06:41:15.425 UTC [16] FATAL:  the database system is starting up
2019-05-17 06:41:15.680 UTC [17] FATAL:  the database system is starting up
2019-05-17 06:41:16.059 UTC [18] FATAL:  the database system is starting up
2019-05-17 06:41:16.263 UTC [19] FATAL:  the database system is starting up
2019-05-17 06:41:16.624 UTC [20] FATAL:  the database system is starting up
2019-05-17 06:41:17.471 UTC [21] FATAL:  the database system is starting up
2019-05-17 06:41:18.739 UTC [22] FATAL:  the database system is starting up
2019-05-17 06:41:19.877 UTC [11] LOG:  database system was not properly shut 
down; automatic recovery in progress
2019-05-17 06:41:19.887 UTC [11] LOG:  redo starts at 5E/170349E8
2019-05-17 06:41:19.954 UTC [11] FATAL:  SMgrRelation hashtable corrupted
2019-05-17 06:41:19.954 UTC [11] CONTEXT:  WAL redo at 5E/17061648 for 
Transaction/COMMIT: 2019-05-17 06:39:46.902988+00; rels: base/59265/105367 
base/59265/105349 base/59265/105365 base/59265/105362 base/59265/105360 
base/59265/105349 base/59265/105358 base/59265/105355; inval msgs: catcache 50 
catcache 49 catcache 50 catcache 49 catcache 50 catcache 49 catcache 50 
catcache 49 catcache 50 catcache 49 catcache 50 catcache 49 catcache 50 
catcache 49 catcache 50 catcache 49 catcache 50 catcache 49 catcache 50 
catcache 49 catcache 50 catcache 49 catcache 50 catcache 49 relcache 105365 
relcache 105367 relcache 105367 relcache 105293 relcache 105411 relcache 105411 
relcache 105365 relcache 105293 relcache 105358 relcache 105360 relcache 105360 
relcache 105285 relcache 105413 relcache 105413 relcache 105358 relcache 105285
2019-05-17 06:41:19.955 UTC [1] LOG:  startup process (PID 11) exited with exit 
code 1
2019-05-17 06:41:19.955 UTC [1] LOG:  aborting startup due to startup process 
failure
2019-05-17 06:41:19.961 UTC [1] LOG:  database system is shut down

Regards,
Daulat


no matching entries in passwd file

2019-05-21 Thread Daulat Ram
Hello team,

I have a database and users created inside the Docker container, but we are 
getting a connection issue while trying to connect to the database using a user 
created in Postgres.


docker exec -it -u test b8e7ejb1e31d bash

unable to find user test: no matching entries in passwd file
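The cause: docker exec -u refers to an operating-system user inside the container, not a PostgreSQL role; a role created with CREATE USER exists only inside the database. A sketch of connecting with the database role instead (container ID taken from the question):

```shell
# Enter the container as an OS user that exists there (often 'postgres')
# and connect with the database role via psql:
docker exec -it b8e7ejb1e31d psql -U test -d kbdb

# Or connect from the host, if the container's port 5432 is published:
psql -h localhost -p 5432 -U test -d kbdb
```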

Regards,

Daulat



Cause: org.postgresql.util.PSQLException: ERROR: could not resize shared memory segment "/PostgreSQL.1946998112" to 8388608 bytes: No space left on device

2019-06-03 Thread Daulat Ram
Hello team,

I'm getting the below error while accessing a Postgres 11 database. Please 
suggest a solution for this issue.
Cause: org.postgresql.util.PSQLException: ERROR: could not resize shared memory 
segment "/PostgreSQL.1946998112" to 8388608 bytes: No space left on device
at com.caucho.el.ArrayResolverExpr.invoke(ArrayResolverExpr.java:260)

Details from docker :

bash-4.4$ mount | grep /dev/shm
shm on /dev/shm type tmpfs 
(rw,context="system_u:object_r:container_file_t:s0:c127,c569",nosuid,nodev,noexec,relatime,size=65536k)



bash-4.4$ free && ipcs -l && echo "page size:" && getconf PAGE_SIZE
 total   used   free sharedbuffers cached
Mem:  32779840   246123008167540  0 52   23735916
-/+ buffers/cache: 876332   31903508
Swap:  4063228  911363972092

-- Messages: Limits 
max queues system wide = 16384
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

-- Shared Memory Limits 
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (pages) = 18446744073692774399
min seg size (bytes) = 1

-- Semaphore Limits 
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

page size:
4096
bash-4.4$


bash-4.4$ psql
psql (11.2)
Type "help" for help.

postgres=# show max_parallel_workers_per_gather;
max_parallel_workers_per_gather
-
2
(1 row)

postgres=# show max_parallel_workers_per_gather;
max_parallel_workers_per_gather
-
2
(1 row)

postgres=#
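The mount output shows /dev/shm capped at Docker's 64 MB default (size=65536k). Parallel query workers allocate dynamic shared memory there, so it fills up and the resize fails. Two common remedies, as a sketch:

```shell
# Start the container with a larger shared-memory mount:
docker run --shm-size=1g ... postgres:11

# Or, as a workaround, disable parallel query so far less
# /dev/shm is needed -- in postgresql.conf:
#   max_parallel_workers_per_gather = 0
```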

Thanks,



How to connect to toad Edge with postgresql running with docker container?

2019-06-14 Thread Daulat Ram
Hello team,

Please suggest how to connect Toad Edge to PostgreSQL running in a docker 
container.
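Toad Edge connects over a normal TCP connection, so the container's port has to be published to the host; the host address and published port then go into Toad's connection dialog. A sketch (container name and image tag are illustrative):

```shell
# Publish container port 5432 on the host when starting the container:
docker run -d -p 5432:5432 --name pg11 postgres:11

# In Toad Edge, use: host = <docker host IP>, port = 5432,
# plus the database name, user, and password.
```

The server must also accept remote connections (listen_addresses = '*' and a matching pg_hba.conf entry), which the stock docker image already configures.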

Regards,
Daulat



How can generate alter sequence and drop constraints statements via ora2pg

2019-06-19 Thread Daulat Ram
Hi All,

Can anyone give me an idea how we can generate the ALTER SEQUENCE and DROP 
CONSTRAINT scripts/statements for a schema's tables using ora2pg.conf?

Regards,
Daulat



Max_connections limit

2019-06-25 Thread Daulat Ram
Hello team,

We have migrated our database from Oracle 12c to Postgres 11. I need your 
suggestions: we had a sessions limit of 3024 in Oracle. Do we need to set the 
same connection limit in Postgres as well? How can we decide the 
max_connections limit for Postgres? Are there any differences in managing 
connections in Oracle and Postgres?

SQL> show parameter sessions;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
java_max_sessionspace_size           integer     0
java_soft_sessionspace_limit         integer     0
license_max_sessions                 integer     0
license_sessions_warning             integer     0
sessions                             integer     3024
shared_server_sessions               integer
SQL>
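One relevant difference: PostgreSQL uses one server process per connection (there is no equivalent of Oracle's shared-server mode), so very high max_connections values are expensive. A common pattern is a moderate max_connections plus a connection pooler in front. A hedged PgBouncer sketch with illustrative values, not a recommendation:

```ini
; pgbouncer.ini
[databases]
kbbm = host=127.0.0.1 port=5432 dbname=kbbm

[pgbouncer]
listen_addr = *
listen_port = 6432
pool_mode = transaction
max_client_conn = 3000    ; clients may still open ~3000 connections...
default_pool_size = 50    ; ...but only ~50 reach PostgreSQL at once
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```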

Regards,
Daulat



Memory settings

2019-06-29 Thread Daulat Ram
Can you please suggest suitable memory settings for PostgreSQL 11 if we have 
80 GB RAM, 16 CPUs, and Linux as the OS.

If we set 25% of total RAM then the shared_buffers value will be 20 GB. Will it 
be useful, or can we set it to an arbitrary value like 8 GB or 12 GB?

According to https://pgtune.leopard.in.ua/#/
below are the suggested memory values for 80 GB RAM and 16 CPUs. I assume the 
values suggested for effective_cache_size = 60GB and shared_buffers = 20GB are 
too large.

max_connections = 500
shared_buffers = 20GB
effective_cache_size = 60GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 300
work_mem = 6553kB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 16
max_parallel_workers_per_gather = 8
max_parallel_workers = 16

Please give your suggestions.

Regards,
Daulat



RE: Memory settings

2019-07-01 Thread Daulat Ram
Hello Hans,

Thanks for your reply. Yes, we are facing a performance issue.

Current output of query is:

postgres=# SELECT pg_stat_database.datname,
postgres-#pg_stat_database.blks_read,
postgres-#pg_stat_database.blks_hit,
postgres-#round((pg_stat_database.blks_hit::double precision
postgres(#   / (pg_stat_database.blks_read
postgres(#  + pg_stat_database.blks_hit
postgres(#  +1)::double precision * 100::double 
precision)::numeric, 2) AS cachehitratio
postgres-#FROM pg_stat_database
postgres-#   WHERE pg_stat_database.datname !~ 
'^(template(0|1)|postgres)$'::text
postgres-#   ORDER BY round((pg_stat_database.blks_hit::double precision
postgres(#  / (pg_stat_database.blks_read
postgres(# + pg_stat_database.blks_hit
postgres(# + 1)::double precision * 100::double 
precision)::numeric, 2) DESC;
   datname    | blks_read | blks_hit  | cachehitratio
--------------+-----------+-----------+---------------
 kbcc_eng_ret |      1192 |        26 |         99.56
 nagios       |       178 |     37185 |         99.52
 kccm         |      1431 |    214501 |         99.34
 kbbm         |   1944006 | 157383222 |         98.78


Thanks,
Daulat

From: Hans Schou <hans.sc...@gmail.com>
Sent: Sunday, June 30, 2019 11:35 AM
To: Daulat Ram <daulat@exponential.com>
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Memory settings


Try running postgresqltuner.pl as suggested on 
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and also look at 
the other info there.

After running a few days with live data run cache_hit_ratio.sql by Melvin 
Davidson:
SELECT pg_stat_database.datname,
   pg_stat_database.blks_read,
   pg_stat_database.blks_hit,
   round((pg_stat_database.blks_hit::double precision
  / (pg_stat_database.blks_read
 + pg_stat_database.blks_hit
 +1)::double precision * 100::double precision)::numeric, 2) AS 
cachehitratio
   FROM pg_stat_database
  WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
  ORDER BY round((pg_stat_database.blks_hit::double precision
 / (pg_stat_database.blks_read
+ pg_stat_database.blks_hit
+ 1)::double precision * 100::double precision)::numeric, 
2) DESC;

The real question is: Is your system slow?


On Sun, Jun 30, 2019 at 5:14 AM Daulat Ram <daulat@exponential.com> wrote:
Hi team,

Can you please suggest what will be  the suitable memory settings for 
Postgresql11 if we have 80gb RAM, 16 CPU’s and OS  Linux.

If we set 25 % of total RAM then shared_buffers value will be 20GB. Will it be 
useful or we can set it any random vale like 8g or 12gb.

According to https://pgtune.leopard.in.ua/#/
below are the suggested memory  values for 80gb RAM and 16 CPU.  I assume the 
values preferred for effective_cache_size = 60GB and shared_buffers = 20GB are 
too large.
 max_connections = 500
shared_buffers = 20GB
effective_cache_size = 60GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 300
work_mem = 6553kB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 16
max_parallel_workers_per_gather = 8
max_parallel_workers = 16
Please give your suggestions.
 Regards,
Daulat