Re: Urgent :: Postgresql streaming replication issue - sync mode

2019-10-03 Thread Jason Wang
I read this
https://www.2ndquadrant.com/en/blog/evolution-fault-tolerance-postgresql-synchronous-commit/

But I don't see why your primary would have more records than the standby.

If killall was issued before COMMIT returned, the transaction wasn't
complete, so yes, you would lose records after the last commit, but that's
expected; if COMMIT returned, both primary and standby should have the
transaction.

Are you sure that in your case you end up with the primary and standby
holding different records from a single transaction?
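
For reference, the setup the OP describes boils down to these two settings. This is a sketch; 'server_name' stands in for whatever application_name the standby actually connects with:

```
# postgresql.conf on the primary (settings as described in the thread)
synchronous_commit = remote_apply          # COMMIT waits until the standby has applied the WAL
synchronous_standby_names = 'server_name'  # placeholder for the standby's application_name
```

With remote_apply, COMMIT does not return to the client until the named standby reports the change as applied, which is what makes the observed behaviour surprising at first glance.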


On Thu, 3 Oct 2019, 9:41 pm Laurenz Albe,  wrote:

> On Wed, 2019-10-02 at 23:58 +0530, Shital A wrote:
> > We are seeing a strange issue with postgresql streaming application
> > in sync mode.
> >
> > We are using postgresql 9.6. Old version because of some specific
> > requirements.  We have setup cluster with master-standby using
> > pacemaker.
> >
> > When we kill master using killall -9 postgres. The failed primary has
> > few records extra than standby node. We have done setup with
> > synchronous_commit = remote_apply and have set
> > synchronous_standby_names=server_name.
> >
> > As the failed primary is having more data, How is it possible that
> > primary is committing transaction before they were applied on standby
> > with synchronous_commit=remote_apply?
> >
> >
> > Please share if you have any thoughts. Are we missing any config ?
>
> This is to be expected.
>
> The transaction will be committed on the primary, then on the standby,
> and COMMIT will only return once the standby reports success.
>
> But the transaction still has to be committed on the primary first.
>
> If the standby is promoted while COMMIT is waiting for the standby,
> you can end up with the transaction committed on the primary,
> but not yet committed on the standby.
>
> You should use "pg_rewind" on the failed primary if you want to use
> it as new standby for the promoted server.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
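
A sketch of the pg_rewind step Laurenz describes. The data directory path and host name below are placeholders, not values from the thread, and the old primary must have been running with wal_log_hints = on (or data checksums) for pg_rewind to work:

```
# Run on the failed primary, with its postgres instance stopped:
pg_rewind --target-pgdata=/var/lib/pgsql/9.6/data \
          --source-server='host=promoted-standby user=postgres dbname=postgres'
# Afterwards, configure recovery (recovery.conf on 9.6) to point at the
# promoted server and start the instance as the new standby.
```

This discards the extra, never-replicated transactions on the failed primary so it can safely follow the promoted server's timeline.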


Re: Basic question about structuring SQL

2020-07-07 Thread Jason Wang
I don't think nested commits are supported. However, you might want to put
the logic in do-something-useful.sql into a stored procedure that contains
no COMMIT; your BIGGER task then just calls this SP and commits at the end.
You can also run the SP by itself and still get transaction protection.

/Jason
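
A sketch of that idea using a plain function (which always runs inside the caller's transaction, so it cannot contain a COMMIT). The table and column names are placeholders, not from the thread:

```
-- Body of the old do-something-useful.sql, minus its BEGIN;/COMMIT; wrapper:
CREATE FUNCTION do_something_useful() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE tasks SET done = true WHERE pending;  -- placeholder statements
    DELETE FROM tasks WHERE stale;
END;
$$;

-- Run standalone: psql's autocommit wraps this in its own transaction.
SELECT do_something_useful();

-- Or from bigger-task.sql, inside one all-or-nothing transaction:
BEGIN;
SELECT do_something_useful();
-- ... other work ...
COMMIT;
```

Either way the function's statements succeed or fail together with whatever transaction surrounds the call.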

On Tue, 7 Jul 2020 at 21:41, Robert Inder  wrote:

> I'm an experienced programmer but really new to SQL,
> and I'm wanting a pointer to "the SQL way" to structure/organise chunks of
> code.
>
> A while back, I wrote a chunk of SQL to Do Something Useful.
> I put it in a file (do-something-useful.sql).
> And, to protect against getting into a weird state, I wrapped the code in
> my file with
>BEGIN;
>UPDATE
>DELETE...
>COMMIT;
> With the idea that I can do
>psql my_database
>\i do-something-useful.sql
> And be sure that either my task will have been completed, or nothing
> will have changed.
>
> NOW, I want to do the same for a BIGGER task.
> BUT I realise that if I create bigger-task.sql like this...
>BEGIN;
>   <>
>\i do-something-useful.sql
>   <>
>COMMIT;
> ...the COMMIT inside "do-something-useful.sql" closes the transaction
> started in "bigger-task.sql"
> So I can get some things committed even if other things (in tidy-up) fail.
>
> So how SHOULD I tackle this?
> PostgreSQL does not do nested transactions (right?)
>
> So how should I structure my chunks of SQL so that I can have "safe"
> (all-or-nothing) blocks,
> AND use them from within one another?
>
> Robert
>
>
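
On the "nested transactions" question: PostgreSQL does not support them, but SAVEPOINT gives all-or-nothing sub-blocks inside a single transaction. A minimal sketch, assuming do-something-useful.sql has had its own BEGIN;/COMMIT; wrapper removed:

```
BEGIN;
-- ... setup work ...
SAVEPOINT useful;
\i do-something-useful.sql
-- If that chunk failed, undo only it and carry on:
--   ROLLBACK TO SAVEPOINT useful;
RELEASE SAVEPOINT useful;
-- ... tidy-up work ...
COMMIT;
```

ROLLBACK TO SAVEPOINT discards only the work since the savepoint, leaving the enclosing transaction open, which is the closest PostgreSQL comes to a nested transaction.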