We have to close the cursor or rollback the transaction in case of failure?

2019-12-05 Thread M Tarkeshwar Rao
Hi all,



I am using cursors to fetch data from a table using C++. Initially there were 
errors highlighted as "current transaction is aborted" in /var/logs during a 
full vacuum.



We are a little bit confused about failure handling, in case of any failure of 
the following function:

PQexec(connection, myFetchSql)



Do we have to close the cursor or roll back the transaction?



postgres[10709]: [6-1] < 2019-10-29 11:24:22.550 CET > ERROR:  canceling 
statement due to conflict with recovery

postgres[10709]: [6-2] < 2019-10-29 11:24:22.550 CET > DETAIL:  User query 
might have needed to see row versions that must be removed.

postgres[10709]: [6-3] < 2019-10-29 11:24:22.550 CET > STATEMENT:  FETCH 1000 
IN mycursor_3461662464_1572368715367985819

postgres[10709]: [7-1] < 2019-10-29 11:24:22.550 CET > ERROR:  current 
transaction is aborted, commands ignored until end of transaction block

postgres[10709]: [7-2] < 2019-10-29 11:24:22.550 CET > STATEMENT:  CLOSE 
mycursor_3461662464_1572368715367985819



Regards
Tarkeshwar


Schema Copy

2019-12-05 Thread Sonam Sharma
Can someone please help with copying a schema within the same database?

I have taken a db dump of the current schema and, in the DDL file, edited it
to use the new schema name.
Will this work?


Re: Schema Copy

2019-12-05 Thread Justin
Hi Sonam

As long as the edited SQL script has been changed from oldschema.tables,
oldschema.functions, etc. to newschema.tables, newschema.functions, etc.,
yes.

Note that this does not move any data.

On Thu, Dec 5, 2019 at 5:07 AM Sonam Sharma  wrote:

> Can someone please help in schema copy in same database ?
>
> I have taken db dump of current schema and in the ddl file , I have edited
> it with new schema.
> Will this work?
>


Re: Schema Copy

2019-12-05 Thread Sonam Sharma
Hi Justin,

What can be done to move the data?

On Thu, Dec 5, 2019, 7:57 PM Justin  wrote:

> Hi Sonam
>
> As long as the edited sql script  has been changed from  oldschema.tables
> or oldschema.functions etc...  to newschema.functions, newschema.functios
> etc...
>
> This does not move  data


Re: Schema Copy

2019-12-05 Thread Justin
Hi Sonam

If you want to copy the data, do a SELECT * INTO newschema.table FROM
oldschema.table:
https://www.postgresql.org/docs/9.1/sql-selectinto.html

If you want to just move the table:

https://www.postgresql.org/docs/current/sql-altertable.html

ALTER TABLE oldschema.table SET SCHEMA newschema
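
Put side by side as runnable statements, the two options look like this (the
table name t is illustrative; note that SELECT INTO copies only the rows, not
indexes, constraints, or privileges):

```sql
-- Copy the rows into a brand-new table in the new schema:
SELECT * INTO newschema.t FROM oldschema.t;

-- Or move the existing table (rows, indexes, and all) to the new schema:
ALTER TABLE oldschema.t SET SCHEMA newschema;
```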

On Thu, Dec 5, 2019 at 9:31 AM Sonam Sharma  wrote:

> Hi Justin,
>
> What can be done to move the data..


Re: secure deletion of archived logs

2019-12-05 Thread Zwettler Markus (OIZ)
> Greetings,
> 
> * Zwettler Markus (OIZ) (markus.zwett...@zuerich.ch) wrote:
> > with Oracle we use "backup archivelog all delete all input".
> > this is a kind of atomic transaction.
> > everything backed up for sure is deleted.
> >
> > with Postgres we archive to a local host directory
> 
> ... how?  Do you actually sync the files after you copy them with an fsync to
> be sure that they're durably stored there?  If not, then there's a pretty good
> chance that you'll lose some WAL if a crash happens, because if your archive
> command returns successful, PG will remove its copy of the WAL file.
> 
> Hint: using 'cp' as an archive command is a very bad idea.
> 
> > we do a Networker backup of this directory afterwards and delete the
> > archived logs but this is not an atomic transaction so there is a
> > small risk that something gets deleted which is not backed up
> 
> That would definitely be quite bad, particularly if a WAL file that was 
> needed for a
> backup to be consistent was removed or missed, as that backup would no longer
> be valid then.
> 
> > how do you prevent this?
> 
> I would strongly recommend that you use a tool that's actually built for the 
> purpose
> of backing up PG systems, like pgbackrest or similar.
> Writing your own custom code for managing WAL archives and backup sets is
> likely to result in issues.
> 
> > Is there any backup tool which can do backups analogous Oracle?
> 
> There's quite a few different tools available for backing up PG systems, with
> various features and performance- from simple things like pg_basebackup (which
> you can set up to include all the WAL for the backup to be consistent, though 
> that
> doesn't do anything to help you with managing WAL for PITR), to much more
> sophisticated tools like pgbackrest, wal-g, and others that help with managing
> WAL and dealing with expiring out backups and such.  The biggest thing is- 
> don't
> try to roll your own.
> 
> Thanks,
> 
> Stephen




We use "rsync" on XFS with the "wsync" mount option. I think this should do the job?


The tools mentioned will all back up to disk.
We are required to back up to tape.
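
For comparison with Stephen's warning about plain cp: an archive_command only
counts as durable if the copy is flushed before it reports success. A minimal
sketch (the archive path is illustrative; it assumes GNU coreutils sync, which
accepts a file argument):

```ini
# postgresql.conf: copy the WAL segment, then flush it to stable storage
# before PostgreSQL is allowed to recycle its own copy
archive_command = 'cp %p /pg_archive/%f && sync /pg_archive/%f'
```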


Markus






Re: archiving question

2019-12-05 Thread Zwettler Markus (OIZ)
> 
> Greetings,
> 
> * Zwettler Markus (OIZ) (markus.zwett...@zuerich.ch) wrote:
> > When there is a Postgres archiver stuck because of filled pg_xlog and 
> > archive
> directories...
> >
> > ... and the pg_xlog directory had been filled with dozens of GBs of xlogs...
> >
> > ...it takes ages until the archive_command had moved all xlogs from the
> pg_xlog directory to the archive directory afterwards...
> >
> > ... and you get crazy if you have a 8GB archive directory while the
> > pg_xlog directory had been pumped up to 100GB :(
> >
> >
> > Any idea on this one?
> 
> Parallelizing the archive-push operation can be quite helpful to address this.
> 
> Thanks,
> 
> Stephen


What do you mean here?

Afaik, Postgres runs the archive_command per WAL file, i.e., log by log by log.

How should we parallelize this?





Re: We have to close the cursor or rollback the transaction in case of failure?

2019-12-05 Thread Tom Lane
M Tarkeshwar Rao  writes:
> I am using cursors to fetch data from a table using C++. Initially there were
> errors highlighted as "current transaction is aborted" in /var/logs during a
> full vacuum.
> We are a little bit confused about failure handling, in case of any failure of
> the following function:
> PQexec(connection, myFetchSql)
> Do we have to close the cursor or roll back the transaction?

Once the transaction has had an error, you have to issue ROLLBACK
before you can do anything else.  That will automatically close
any cursors opened in the current transaction.  There's no point
in issuing an explicit CLOSE (and indeed, if you try, it'll just
produce the same "current transaction is aborted" error).
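
In psql terms the sequence looks roughly like this (cursor and table names are
illustrative):

```sql
BEGIN;
DECLARE mycursor CURSOR FOR SELECT * FROM mytable;
FETCH 1000 FROM mycursor;  -- suppose this fails, e.g. due to a recovery conflict
-- every later command now fails with "current transaction is aborted"
ROLLBACK;                  -- ends the transaction and implicitly closes mycursor
```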

regards, tom lane




Re: Schema Copy

2019-12-05 Thread Melvin Davidson
> Can someone please help in schema copy in same database ?

A while ago, I created the clone_schema function which will duplicate one
schema to a new schema with the option to copy or not copy data. However,
since there are two versions depending on the version of PostgreSQL you are
using, and you have not specified which version of PostgreSQL you have, I
cannot help you at this time.

On Thu, Dec 5, 2019 at 9:37 AM Justin  wrote:

> Hi Sonam
>
> If you want a copy the data do a Select * into  (newschema.table) from
> oldschema.table
> https://www.postgresql.org/docs/9.1/sql-selectinto.html
>
> If you want to just move the table
>
> https://www.postgresql.org/docs/current/sql-altertable.html
>
> Alter Table oldschema.table Set Schema  newschema

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: Schema Copy

2019-12-05 Thread Sonam Sharma
The version I am using is 11.2.

On Thu, Dec 5, 2019, 9:29 PM Melvin Davidson  wrote:

> > Can someone please help in schema copy in same database ?
>
> A while ago, I created the clone_schema function which will duplicate one
> schema to a new schema with the option to copy or not copy data. However,
> since there are two versions depending on the version of PostgreSQL you are
> using, and you have not specified which version of PostgreSQL you have, I
> cannot help you at this time.


Date created for tables

2019-12-05 Thread Chloe Dives
Having moved to PostgreSQL from Oracle a few years ago I have been generally 
very impressed by Postgres, but there are a few things that I still miss. One 
of those is being able to see the created and last modified dates for database 
objects.

Is this something that has been considered for implementation?

Thanks in advance,
Chloe




Re: Date created for tables

2019-12-05 Thread Tom Lane
Chloe Dives  writes:
> Having moved to PostgreSQL from Oracle a few years ago I have been generally 
> very impressed by Postgres, but there are a few things that I still miss. One 
> of those is being able to see the created and last modified dates for 
> database objects.
> Is this something that has been considered for implementation?

It's been considered, and rejected, many times.  Aside from the overhead
involved, there are too many different ideas of what such dates ought to
mean (e.g., what should happen during dump/restore? does a failed
transaction update last-modified? etc etc).  You can search the
project's mailing list archives if you want to read the prior discussions.

regards, tom lane




Re: Date created for tables

2019-12-05 Thread Justin
Hi Tom

Can't we get access to this information in a backwards way by using
pg_xact_commit_timestamp(), then querying the system catalogs' xmin entry
for the relevant object?

This requires turning on track_commit_timestamp:
https://www.postgresql.org/docs/current/runtime-config-replication.html

It will not show the creation date, as that is lost after an update and
vacuum ...
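
A sketch of that catalog query (it assumes track_commit_timestamp was already
on when the row was last written; for rows committed before that, the function
returns NULL):

```sql
-- Commit time of the last transaction that rewrote each table's pg_class row;
-- an approximation of "last DDL change", not a true creation date
SELECT relname, pg_xact_commit_timestamp(xmin) AS last_catalog_change
FROM pg_class
WHERE relkind = 'r';
```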





On Thu, Dec 5, 2019 at 12:10 PM Chloe Dives 
wrote:

> Having moved to PostgreSQL from Oracle a few years ago I have been
> generally very impressed by Postgres, but there are a few things that I
> still miss. One of those is being able to see the created and last modified
> dates for database objects.
>
>
>
> Is this something that has been considered for implementation?
>
>
>
> Thanks in advance,
>
> Chloe
>
>
>
>
>


Re: Date created for tables

2019-12-05 Thread Adrian Klaver

On 12/5/19 9:10 AM, Chloe Dives wrote:
> Having moved to PostgreSQL from Oracle a few years ago I have been
> generally very impressed by Postgres, but there are a few things that I
> still miss. One of those is being able to see the created and last
> modified dates for database objects.
>
> Is this something that has been considered for implementation?


An alternative?:

https://www.postgresql.org/message-id/ABA5562F-56A9-4AB1-95D6-398215015DBD%40gmail.com



Thanks in advance,

Chloe




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Date created for tables

2019-12-05 Thread Ron




On 12/5/19 1:01 PM, Tom Lane wrote:
> Chloe Dives  writes:
>> Having moved to PostgreSQL from Oracle a few years ago I have been
>> generally very impressed by Postgres, but there are a few things that I
>> still miss. One of those is being able to see the created and last
>> modified dates for database objects.
>> Is this something that has been considered for implementation?
>
> It's been considered, and rejected, many times.  Aside from the overhead
> involved, there are too many different ideas of what such dates ought to
> mean (e.g., what should happen during dump/restore? does a failed
> transaction update last-modified? etc etc).  You can search the
> project's mailing list archives if you want to read the prior discussions.

All the other RDBMSs seem to have figured it out.

--
Angular momentum makes the world go 'round.




Re: Date created for tables

2019-12-05 Thread Michael Paquier
On Thu, Dec 05, 2019 at 07:12:22PM -0600, Ron wrote:
> On 12/5/19 1:01 PM, Tom Lane wrote:
>> It's been considered, and rejected, many times.  Aside from the overhead
>> involved, there are too many different ideas of what such dates ought to
>> mean (e.g., what should happen during dump/restore? does a failed
>> transaction update last-modified? etc etc).  You can search the
>> project's mailing list archives if you want to read the prior discussions.
> 
> All the other RDBMSs seem to have figured it out.

It does not necessarily mean that Postgres has to do it.  FWIW, you
can track that using an event trigger for CREATE TABLE or other objects
which inserts the following into a table of your choice for a given
database:
- The timestamp of the transaction.
- The object name.
- Its class ID, say pg_class::regclass for a table, etc.
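
A sketch of that event-trigger approach (all names are illustrative; assumes
PostgreSQL 11 or later for the EXECUTE FUNCTION syntax):

```sql
-- Log table: one row per DDL command, stamped with the transaction time
CREATE TABLE ddl_log (
    created_at    timestamptz NOT NULL DEFAULT transaction_timestamp(),
    object_name   text,
    object_type   text
);

-- Record every DDL command as it completes
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_log (object_name, object_type)
        VALUES (r.object_identity, r.object_type);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE FUNCTION log_ddl();
```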
--
Michael




Re: archiving question

2019-12-05 Thread Michael Paquier
On Thu, Dec 05, 2019 at 03:04:55PM +, Zwettler Markus (OIZ) wrote:
> What do you mean hear?
> 
> Afaik, Postgres runs the archive_command per log, means log by log by log.
> 
> How should we parallelize this?

You can, in theory, skip the archiving for a couple of segments and
then do the operation at once without the need to patch Postgres.
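
As a concrete illustration of the parallel, batched approach, pgbackrest
implements it as asynchronous archiving; a configuration fragment might look
like this (option names per pgbackrest's documentation, with an illustrative
spool path):

```ini
# pgbackrest.conf: asynchronous, parallel WAL archiving (sketch)
[global]
archive-async=y                  # queue segments and push them in batches
process-max=4                    # parallel archive-push processes
spool-path=/var/spool/pgbackrest # local queue for pending segments
```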
--
Michael




Re: Date created for tables

2019-12-05 Thread Ron

On 12/5/19 7:40 PM, Michael Paquier wrote:
> On Thu, Dec 05, 2019 at 07:12:22PM -0600, Ron wrote:
>> On 12/5/19 1:01 PM, Tom Lane wrote:
>>> It's been considered, and rejected, many times.  Aside from the overhead
>>> involved, there are too many different ideas of what such dates ought to
>>> mean (e.g., what should happen during dump/restore? does a failed
>>> transaction update last-modified? etc etc).  You can search the
>>> project's mailing list archives if you want to read the prior discussions.
>> All the other RDBMSs seem to have figured it out.
> It does not necessarily mean that Postgres has to do it.

That's not what I wrote.

--
Angular momentum makes the world go 'round.