Re: Tracking DDL and DML changes in Postgresql and different versions of database (advance)

2018-05-29 Thread Pavan Teja
On Wed, May 30, 2018, 11:03 AM Łukasz Jarych  wrote:

> Hi Guys,
>
> I am using Access FE and PostgreSQL BE.
> To track DML changes I have a history log table working like the one here:
>
> https://www.fontstuff.com/access/acctut21.htm
>
> It is OK, but this doesn't solve the problem of tracking DDL changes.
>
> For DDL changes I can use the Flyway system, but how do I combine this with
> my DML tracking system?
>
> Do you have any solutions for this?
>
> My boss wants to have the possibility to save the current version (state)
> of the database with DML and DDL changes, and to easily revert and go back
> to a previous version (rolling back all changes).
>
> I do not think that PostgreSQL has triggers on DDL changes - is this
> the most optimal solution?
>
> Best Wishes,
> Luke
>


Hi Lukasz,

I think that by using triggers (for DML: INSERT, UPDATE, DELETE, TRUNCATE) you
can achieve this. You can log the entire query that makes the change.

And regarding DDL (CREATE, ALTER), you can use the concept of event triggers,
where tags like "ddl_command_start", "table_rewrite" and "ddl_command_end"
are available, which let you capture all the DDL changes into a table.
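A minimal sketch of that idea (the audit table and function names are
illustrative, not from this thread):

    -- Log every DDL command into an audit table via an event trigger.
    CREATE TABLE ddl_audit (
        logged_at   timestamptz DEFAULT now(),
        username    text,
        command_tag text,
        object_type text,
        object_name text
    );

    CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
    DECLARE
        r record;
    BEGIN
        -- pg_event_trigger_ddl_commands() is usable in ddl_command_end triggers
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
            INSERT INTO ddl_audit (username, command_tag, object_type, object_name)
            VALUES (current_user, r.command_tag, r.object_type, r.object_identity);
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    CREATE EVENT TRIGGER track_ddl ON ddl_command_end
        EXECUTE PROCEDURE log_ddl();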

So by using the above, I hope you can achieve this. Feel free to ask if
anything is unclear.

Regards,
Pavan




Re: How to get postmaster shut down time in postgres?

2018-06-04 Thread Pavan Teja
Thank you so much for your replies.😀

On Mon, Jun 4, 2018, 7:25 PM Adrian Klaver 
wrote:

> On 06/04/2018 06:44 AM, pavan95 wrote:
> > Hi Adrian/Melvin,
> >
> > Thanks for your prompt replies. Yeah, I'm aware of that way.
> >
> > But my requirement is to get the server shutdown time whenever that event
> > occurs and insert it into a table dynamically. Is that possible?
>
> A quick scan shows no built-in function to do this. Looks like you will
> need to parse the log file.
>
> >
> > The main reason for my requirement is to find the time swing between
> server
> > stop and start.
> >
> > Thanks in advance.
> >
> > Regards,
> > Pavan
> >
> >
> >
> > --
> > Sent from:
> http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
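For reference, the start-time half of this is exposed by the built-in
pg_postmaster_start_time() function; the shutdown time has to come from the
server log, e.g. (a sketch; the log path is an assumption and varies by
installation):

    -- when the current postmaster started
    SELECT pg_postmaster_start_time();

    $ grep "database system is shut down" /var/log/postgresql/postgresql.log | tail -1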


Re: Performance problem postgresql 9.5

2018-06-08 Thread Pavan Teja
Could you please give more detail about the queries being executed?

Let me know whether they are DML or DDL.
Provide information such as how long, and since when, the queries have been
in the running state, so that we can find the exact pain area.

What is the size of the database?
When were the statistics gathered?
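For example, something along these lines shows the long-running statements
and the overall size (the 5-minute threshold is just an illustration):

    -- active queries running longer than 5 minutes
    SELECT pid, usename, state,
           now() - query_start AS runtime,
           query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '5 minutes'
    ORDER BY runtime DESC;

    -- overall database size
    SELECT pg_size_pretty(pg_database_size(current_database()));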


Regards,
Pavan

On Sat, Jun 9, 2018, 1:27 AM Miguel Angel Sanchez Sandoval <
massan...@gmail.com> wrote:

>
> Hi guys. We migrated from 8.4 to 9.5, and all is OK except that after 2-3
> days the database experiences slowness. When I execute the Linux top command
> it shows me a postgres user process executing a strange command
> (2yhdgrfrt63788) that consumes a lot of CPU. Looking at the active queries I
> find select fun('./2yhdgrfrt63788'). This did not happen in version 8.4; any
> help would be appreciated.
>
> regards
>
>


Re: First query on each connection is too slow

2018-06-13 Thread Pavan Teja
Hi Vadim,

Allocating too much shared_buffers can also cause problems. See the
documentation.
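For instance, to check and adjust it (the 25% figure is a common rule of
thumb, not from this thread, and a change to shared_buffers only takes
effect after a restart):

    SHOW shared_buffers;
    -- e.g. roughly 25% of RAM on a 16GB machine:
    ALTER SYSTEM SET shared_buffers = '4GB';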


Regards,
Pavan

On Wed, Jun 13, 2018, 3:23 PM Vadim Nevorotin  wrote:

> I have a very strange problem. I'm using PostgreSQL 9.6 with PostGIS 2.3
> (both from Debian Stretch repos) to store the DB for an OSM server (but
> actually it doesn't matter). And I've noticed that on each new connection to
> the DB the first query is much slower (10x) than all others. E.g.:
>
> $ psql test_gis
> psql (9.6.7)
> Type "help" for help.
> test_gis=# \timing
> Timing is on.
> test_gis=# SELECT srid FROM geometry_columns WHERE
> f_table_name='planet_osm_polygon' AND f_geometry_column='way';
>  srid
> --
>  3857
> (1 row)
> Time: 52.889 ms
> test_gis=# SELECT srid FROM geometry_columns WHERE
> f_table_name='planet_osm_polygon' AND f_geometry_column='way';
>  srid
> --
>  3857
> (1 row)
> Time: 2.070 ms
> test_gis=#
>
> If I run other instances of psql in parallel while the first is active,
> they have absolutely the same problem. In one instance of psql the query is
> fast (if it's not the first query); in the others the first query is slow,
> but all the others are fast.
>
> EXPLAIN (ANALYZE, BUFFERS) for the first query shows very strange timings:
>
> QUERY PLAN
>
>
> --
>
>  Nested Loop Left Join  (cost=0.82..26.41 rows=1 width=4) *(actual
> time=49.290..49.297 rows=1 loops=1)*
>Join Filter: ((s_2.connamespace = n.oid) AND (s_2.conrelid = c.oid) AND
> (a.attnum = ANY (s_2.conkey)))
>Buffers: shared hit=18
>->  Nested Loop Left Join  (cost=0.82..25.28 rows=1 width=14) (actual
> time=0.135..0.141 rows=1 loops=1)
>  Join Filter: ((s_1.connamespace = n.oid) AND (s_1.conrelid =
> c.oid) AND (a.attnum = ANY (s_1.conkey)))
>  Buffers: shared hit=17
>  ->  Nested Loop Left Join  (cost=0.82..24.16 rows=1 width=14)
> (actual time=0.126..0.132 rows=1 loops=1)
>Join Filter: ((s.connamespace = n.oid) AND (s.conrelid =
> c.oid) AND (a.attnum = ANY (s.conkey)))
>Buffers: shared hit=16
>->  Nested Loop  (cost=0.82..23.05 rows=1 width=14) (actual
> time=0.083..0.089 rows=1 loops=1)
>  Buffers: shared hit=15
>  ->  Nested Loop  (cost=0.55..17.75 rows=1 width=18)
> (actual time=0.078..0.082 rows=1 loops=1)
>Join Filter: (c.relnamespace = n.oid)
>Rows Removed by Join Filter: 5
>Buffers: shared hit=12
>->  Nested Loop  (cost=0.55..16.61 rows=1
> width=18) (actual time=0.072..0.074 rows=1 loops=1)
>  Buffers: shared hit=11
>  ->  Index Scan using
> pg_class_relname_nsp_index on pg_class c  (cost=0.27..8.30 rows=1 width=8)
> (actual time=0.049..0.050 rows=1 loops=1)
>Index Cond: (relname =
> 'planet_osm_polygon'::name)
>Filter: ((NOT
> pg_is_other_temp_schema(relnamespace)) AND has_table_privilege(oid,
> 'SELECT'::text) AND (relkind = ANY ('{r,v,m,f}'::"char"[])))
>Buffers: shared hit=8
>  ->  Index Scan using
> pg_attribute_relid_attnam_index on pg_attribute a  (cost=0.28..8.30 rows=1
> width=14) (actual time=0.021..0.021 rows=1 loops=1)
>Index Cond: ((attrelid = c.oid) AND
> (attname = 'way'::name))
>Filter: (NOT attisdropped)
>Buffers: shared hit=3
>->  Seq Scan on pg_namespace n
>  (cost=0.00..1.06 rows=6 width=4) (actual time=0.003..0.004 rows=6 loops=1)
>  Buffers: shared hit=1
>  ->  Index Scan using pg_type_oid_index on pg_type t
>  (cost=0.27..5.29 rows=1 width=4) (actual time=0.005..0.007 rows=1 loops=1)
>Index Cond: (oid = a.atttypid)
>Filter: (typname = 'geometry'::name)
>Buffers: shared hit=3
>->  Seq Scan on pg_constraint s  (cost=0.00..1.09 rows=1
> width=31) (actual time=0.041..0.041 rows=0 loops=1)
>  Filter: (consrc ~~* '%geometrytype(% = %'::text)
>  Rows Removed by Filter: 7
>  Buffers: shared hit=1
>  ->  Seq Scan on pg_constraint s_1  (cost=0.00..1.09 rows=1
> width=31) (actual time=0.009..0.009 rows=0 loops=1)
>Filter: (consrc ~~* '%ndims(% = %'::text)
>Rows Removed by Filter: 7
>Buffers: shared hit=1
>->  Seq Scan on pg_constraint s_2  (cost=0.00..1.09 rows=1 width=77)
> (actual time=0.011..0.011 rows

Re: Merging two database dumps

2018-06-13 Thread Pavan Teja
Hi Alex,
For storing duplicate rows, dropping the primary-key and unique indexes is
the only way.

One alternative is to create a timestamp column that is updated on every
insert/update, so that the timestamp becomes the primary key. Hope it helps.
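Another conflict-tolerant path (not mentioned above; it assumes PostgreSQL
9.5 or later) is to restore each dump into a staging table and merge it with
INSERT ... ON CONFLICT. Table and column names here are illustrative, with
order_id standing in for the primary key:

    INSERT INTO orders
    SELECT * FROM orders_staging
    ON CONFLICT (order_id) DO NOTHING;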

Regards,
Pavan

On Wed, Jun 13, 2018, 4:47 PM Alex O'Ree  wrote:

> I have a situation with multiple postgres servers running, all with the
> same databases and table structure. I need to periodically export the data
> from each of them, then merge them all into a single server. On occasion,
> it's feasible for the same record (primary key) to be stored in two or more
> servers.
>
> I was using pg_dump without the --inserts option; however, I just noticed
> that pg_restore will stop inserting into a table when a conflict occurs,
> leaving me with an incomplete set.
>
> Question is, what are my other options to skip over the conflicting records
> when merging?
>
> From the docs, it appears that making dumps with the --inserts option may
> be the only way to go; however, performance is an issue. In this case would
> dropping all indexes help?
>


Re: Drop Default Privileges?

2018-06-19 Thread Pavan Teja
Hi Louis,

In order to remove the default privileges for any particular user/role, we
first need to know the list of default privileges.

If we know them, we can revoke them as a superuser. Once I tried finding the
list of default privileges but was left with no clue. Still, I strongly
believe that if we know the list of default privileges that will be
assigned, we can revoke them as a superuser.
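A sketch of how to inspect and then revoke them (in psql, \ddp shows the
same information; the schema and role names in the REVOKE are placeholders):

    -- list the default-privilege entries recorded in the catalog
    SELECT r.rolname       AS owner,
           n.nspname       AS schema,
           d.defaclobjtype AS object_type,
           d.defaclacl     AS default_acl
    FROM pg_default_acl d
    JOIN pg_roles r ON r.oid = d.defaclrole
    LEFT JOIN pg_namespace n ON n.oid = d.defaclnamespace;

    ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA some_schema
        REVOKE SELECT ON TABLES FROM some_role;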

Regards,
Pavan

On Tue, Jun 19, 2018, 11:55 PM Louis Battuello 
wrote:

>
>
> On Jun 19, 2018, at 11:38 AM, Fabio Pardi  wrote:
>
> Hi Louis,
>
> I think 'alter user' can do the job for you.
>
> https://www.postgresql.org/docs/current/static/sql-alteruser.html
>
> Else, as alternative: before running pg_restore, you could edit the dump
> and replace the string 'ROLE postgres' with the correct user on the RDS
> instance.
>
> regards,
>
> fabio pardi
>
>
> Thanks for your replies, David and Fabio.
>
> I thought about editing the dump file or attempting some sort of
> reassignment of the default privileges, but that still leaves the larger
> question: can default privileges ever be removed specific to a single
> schema?
>
> If I set a default of GRANT SELECT, is my only option to change it to
> REVOKE SELECT? Is there a way to “get rid of the default privileges entry
> for the role,” as referenced in the Notes section of the ALTER DEFAULT
> PRIVILEGES documentation? Reversing the change from GRANT to REVOKE still
> leaves a catalog reference to the postgres user oid in pg_default_acl.
>
> I don’t want to reverse the default behavior. I’d like to remove it
> entirely.
>
> Thanks,
> Louis
>


Re: Schema/Data conversion opensource tools from MySQL to PostgreSQL

2018-06-25 Thread Pavan Teja
Hi Chiru,

You can use the MySQL foreign data wrapper (mysql_fdw) to achieve this.
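A sketch of the setup (connection details and the schema name are
placeholders):

    CREATE EXTENSION mysql_fdw;

    CREATE SERVER mysql_src
        FOREIGN DATA WRAPPER mysql_fdw
        OPTIONS (host '127.0.0.1', port '3306');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER mysql_src
        OPTIONS (username 'mysql_user', password 'secret');

    IMPORT FOREIGN SCHEMA legacy_db
        FROM SERVER mysql_src INTO public;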

Regards,
Pavan

On Mon, Jun 25, 2018, 10:18 PM chiru r  wrote:

> Hi All,
>
>
> Please suggest Schema/Data conversion opensource tools from MySQL to
> PostgreSQL.
>
>
> Thanks,
> Chiranjeevi
>


Re: Please let me know which configuration setting we need to modify to speedup the pg_dump backup.

2018-10-10 Thread Pavan Teja
Hi,

You can use the -j (parallel jobs) option to speed up the process.
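For example (two caveats, which are general pg_dump facts rather than from
this thread: parallel dumps require the directory output format -Fd, and -j
was only added in version 9.3, so the 9.2 installation mentioned below would
need a newer pg_dump client; the path is a placeholder):

    pg_dump -Fd -j 4 -f /backups/mydb.dump.d mydb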

Hope it works.

Regards,
Pavan

On Thu, Oct 11, 2018, 8:02 AM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:

> Hi All,
>
> pg_dump is taking more time. Please let me know which configuration
> setting we need to modify to speedup the pg_dump backup.We are using 9.2
> version on Centos Box.
>
> --
> Regards,
> Raghavendra Rao J S V
>
>


Re: logical replication problem

2019-01-28 Thread Pavan Teja
And that's the reason for the error. The subscriber doesn't need any data;
just a structure sync is enough. Try truncating the tables and starting the
replication fresh, and you'll get rid of the errors.
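A sketch of that restart, reusing the object names from the thread below
(the table name is illustrative):

    -- on the subscriber
    DROP SUBSCRIPTION sub1;
    TRUNCATE TABLE my_table;  -- repeat for every replicated table
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=127.0.0.1 port=5432 dbname=mydb user=postgres'
        PUBLICATION mig1;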

Regards,
Pavan

On Mon 28 Jan, 2019, 3:50 PM Thomas Schweikle wrote:

> On Mon, Jan 28, 2019 at 11:15 AM Pavan Teja wrote:
> >
> > Does db-server_B have data in it?
>
> Yes, it has -- about 51 GiB ... changed frequently.
>
> > Regards,
> > Pavan
> >
> > On Mon 28 Jan, 2019, 3:42 PM Thomas Schweikle  wrote:
> >>
> >> Hi!
> >>
> >> Setup:
> >>
> >> - db-server_A on port 5432
> >> - db-server_B on port 5433
> >>
> >> on db-server_A:
> >> postgres=# \dRp
> >>           List of publications
> >>  Name |  Owner   | All tables | Inserts | Updates | Deletes
> >> ------+----------+------------+---------+---------+---------
> >>  mig1 | postgres | t          | t       | t       | t
> >>
> >> on db-server_B:
> >> postgres=# \dRs
> >>        List of subscriptions
> >>  Name |  Owner   | Enabled | Publication
> >> ------+----------+---------+-------------
> >>  sub1 | postgres | t       | {mig1}
> >>
> >> in db-server_A pg_hba.conf:
> >> local   replication   postgres                 peer
> >> host    replication   postgres   127.0.0.1/32  ident
> >> host    replication   postgres   ::1/128       ident
> >>
> >> on db-server_A:
> >> postgres# CREATE PUBLICATION mig1 FOR ALL TABLES;
> >> CREATE PUBLICATION
> >>
> >> the command on db-server_B:
> >> postgres# CREATE SUBSCRIPTION sub1 CONNECTION 'host=127.0.0.1
> >> port=5432 dbname=mydb user=postgres' PUBLICATION mig1;
> >> NOTICE:  created replication slot "sub1" on publisher
> >> CREATE SUBSCRIPTION
> >>
> >> worked as expected.
> >> But instead of starting replication, I find errors in the db-server_B
> >> logs:
> >> 2019-01-24 10:57:58.549 CET [28956] LOG:  logical replication apply
> >> worker for subscription "sub1" has started
> >> 2019-01-24 10:57:58.553 CET [28956] ERROR:  could not receive data from
> >> WAL stream: ERROR:  publication "mig1" does not exist
> >> CONTEXT:  slot "sub1", output plugin "pgoutput", in the change
> >> callback, associated LSN 47/B4BCA2A8
> >> 2019-01-24 10:57:58.554 CET [5982] LOG:  background worker "logical
> >> replication worker" (PID 28956) exited with exit code 1
> >>
> >> Configuration was set on both servers to include
> >> wal_level = logical
> >>
> >> Any ideas why this does not work as expected? Any further ideas what
> >> to check on db-server_A and db-server_B?
> >>
> >>
> >> --
> >> Thomas
> >>
>
>
> --
> Thomas
>
>


Re: Seeded Replication

2019-04-03 Thread Pavan Teja
Hi Lou,

Did you restore data on the subscriber during the initial build? If yes,
this error is likely to occur.

As a remedy, you need to restore only a structural (schema-only) dump during
the initial building of the subscription.
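For example (host and database names are placeholders):

    pg_dump --schema-only -h onprem-host sourcedb | psql -h azure-host targetdb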

I faced the same issue earlier; it was resolved by following the approach
above.

Kindly revert if you have any queries.

Regards,
Pavan Teja,
9841380956

On Wed, 3 Apr, 2019, 5:15 PM Lou Tseng,  wrote:

> Hi,
>
> We are working on a project to set up replication from on premises
> Postgresql 10.4 to Azure Postgresql services. However, our database is
> about 200G and it will take a long time to complete the initial data copy.
> We would like to manually seed the subscriber database with data dump and
> then turn on the subscription like depesz showed in this post
> https://www.depesz.com/2017/02/07/waiting-for-postgresql-10-logical-replication/
>  .
> It works for the small testing database, but when I applied the same steps
> to the entire database, I got "ERROR:  duplicate key value violates unique
> constraint" errors. Basically PostgreSQL does not recognize the primary key.
>
> Any thought / suggestion is helpful!
>
> Lou Tseng
>
> lts...@advancedpricing.com
> <http://www.advancedpricing.com/>
> Advanced Medical Pricing Solutions <http://advancedpricing.com/>
> 35 Technology Parkway South, Suite. 100
> Peachtree Corners, GA 30092
>
>


Re: Logical replication failed recovery

2019-04-07 Thread Pavan Teja
Hi Lou,

Try setting the wal_keep_segments parameter to a higher value. I think that
will suffice to solve the problem.
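For example (the value 256 is illustrative; sizing depends on write volume,
and in version 10 each segment is 16MB):

    ALTER SYSTEM SET wal_keep_segments = 256;
    SELECT pg_reload_conf();  -- this parameter can be changed on reload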

Regards,
Pavan Teja,
9841380956

On Sun, 7 Apr, 2019, 6:53 PM Lou Tseng,  wrote:

> Hi folks,
>
> Is there a good tutorial to recover from logical replication out of sync?
> We ran into this error stating the WAL has been removed, and now the
> replica is out of sync. Also, is there a way to increase the number of WAL
> segments kept on the master?
>
> Thanks!
>
> 2019-04-07 12:28:37.180 UTC [22292] ERROR:  could not receive data from
> WAL stream: ERROR:  requested WAL segment 0001520800EB has
> already been removed
>
> 2019-04-07 12:28:37.182 UTC [114186] LOG:  worker process: logical
> replication worker for subscription 8907992 (PID 22292) exited with exit
> code 1
>
>
> Lou Tseng
>
> lts...@advancedpricing.com
> <http://www.advancedpricing.com/>
> Advanced Medical Pricing Solutions <http://advancedpricing.com/>
> 35 Technology Parkway South, Suite. 100
> Peachtree Corners, GA 30092
>
>


Re: FW: Setting up streaming replication problems

2018-01-23 Thread Pavan Teja
On Jan 23, 2018 11:34 PM, "Martin Goodson"  wrote:



On 23/01/2018 07:36, Thiemo Kellner, NHC Barhufpflege wrote:

>> 1) I am not sure whether to put the md5 value of the repuser password
>> into primary_conninfo or the plain one. I don't feel the documentation
>> or the book is clear on that.
>
> Anyone two dimes on that?
>
Password, not an md5. Or, at least, that's what's been working for me :)

Even better, I guess, would be to not put a password in there at all and
use a .pgpass.

-- 
Martin Goodson

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."


Hello,

Instead, you can keep a trust connection for that user from only that IP,
instead of specifying the password in a .pgpass file.
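A sketch of such a pg_hba.conf entry (the user name and address are
placeholders; trust skips authentication entirely, so restrict it to a host
you control):

    # TYPE  DATABASE     USER     ADDRESS           METHOD
    host    replication  repuser  192.168.1.10/32   trust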

Anything can be done as per your convenience.

Regards,
Pavan


Re: FATAL: failed to create a backend connection

2018-02-02 Thread Pavan Teja
On Feb 2, 2018 5:57 PM, "Vikas Sharma"  wrote:

Hi There,

We are using Postgresql 9.3 with pgpool-II-93-3.5.10.
One Master replicating to 3 Standbys using streaming replication. Pgpool is
used for load balancing only.

Lately we are seeing the below on the application servers.

Caused by: org.postgresql.util.PSQLException: FATAL: failed to create a
backend connection
Detail: executing failover on backend

Are we hitting max_connections in the PostgreSQL instance, or has pgpool
reached its max_connections and can't spawn more?

Please advise.

Regards
Vikas


Hi,

It seems like there is a disagreement between postgres and pgpool as to where
the Unix domain sockets for talking to postgres ought to be found. It's a
situation where the application can connect to postgres but not to pgpool,
as it was directing all traffic to the local postgres node.

1) Check whether any of the postgres backends are cancelling queries with a
'statement timeout' error.

2) If statement_timeout is not set in the postgresql.conf file, then the
application might have altered it.

3) Check whether any long-running queries are being cancelled by the server
(a quick check is sketched below).
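To see whether the connection limit is the issue, something along these
lines (run on the backend):

    SHOW max_connections;
    SHOW statement_timeout;
    SELECT count(*) AS current_connections FROM pg_stat_activity;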

Regards,
Pavan


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Pavan Teja
Thank you for your timely response 😊

On Feb 14, 2018 8:18 PM, "Melvin Davidson"  wrote:

>
>
> On Wed, Feb 14, 2018 at 9:42 AM, pavan95 
> wrote:
>
>> Hi all,
>>
>> Is it possible to upgrade an existing postgresql 9.1 production system to
>> latest Postgres 10.0 version?
>>
>> The main requirement is to get rid of downtime. Please help me out!
>>
>> Thanks in Advance.
>>
>> Regards,
>> Pavan
>>
>>
>>
>> --
>> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f184378
>> 0.html
>>
> > Is it possible to upgrade an existing postgresql 9.1 production system
> > to latest Postgres 10.0 version?
>
> This is specifically covered in the documentation:
>
> 8.6. Upgrading a PostgreSQL Cluster
>
> https://www.postgresql.org/docs/current/static/upgrading.html
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>
>


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Pavan Teja
Yeah David,

Even I'm thinking the same

Regards,
Pavan
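For reference, a typical pg_upgrade invocation looks something like this
(the paths are placeholders for a Debian-style layout; --check performs a
dry-run validation without changing anything):

    pg_upgrade \
      --old-datadir /var/lib/postgresql/9.1/main \
      --new-datadir /var/lib/postgresql/10/main \
      --old-bindir  /usr/lib/postgresql/9.1/bin \
      --new-bindir  /usr/lib/postgresql/10/bin \
      --check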

On Feb 14, 2018 8:34 PM, "David G. Johnston" 
wrote:

>
>
> On Wednesday, February 14, 2018, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wednesday, February 14, 2018, pavan95 
>> wrote:
>>
>>> Hi all,
>>>
>>> Is it possible to upgrade an existing postgresql 9.1 production system to
>>> latest Postgres 10.0 version?
>>>
>>> The main requirement is to get rid of downtime. Please help me out!
>>>
>>
>> Zero downtime is only possible by standing up a hot-standby then failing
>> over to it.  Same-server upgrade you can do via pg_upgrade but it does
>> involve downtime.  There are lots of material and options online, including
>> the docs, for setting up hot-standby replication.
>>
>>
> To clarify, you need to use logical replication here since the WAL format
> is not usable across versions.
>
> pg_upgrade is your simplest option if you can handle its downtime.
>
> David J.
>