Re: Must re-connect to see tables

2018-03-27 Thread Ron
If it worked in 9.3.22 and now fails in 9.3.22, then I'd look to see
whether something has changed in data.sql.


On 03/27/2018 06:22 AM, Blake McBride wrote:

Hi,

I have been using PostgreSQL for many years, but all of a sudden a db load
script I've been using no longer works.  What it does (essentially) is:


psql postgres postgres
drop database test;
create database test;
\c test
\i data.sql
\dt
\q


I tested this with a small data.sql and it worked fine. However, when I
use a real (much larger) data.sql, \dt replies "No relations"!  I found
that if I stick an extra '\c test' after the '\i data.sql', it works.


The data.sql is a PostgreSQL dump from another database. When the '\i 
data.sql' is done, the prompt still shows 'test#' so it's not like I've 
changed databases.


I only have one schema, 'public'.

Can't figure out why a simple example works but a large file requires an 
extra \c.


Sure appreciate any help.

Blake McBride



--
Angular momentum makes the world go 'round.



Re: [EXTERNAL]Re: pg_dump -Fd -j2 on standby in 9.6.6

2018-03-28 Thread Ron

On 03/28/2018 03:05 AM, Andreas Kretschmer wrote:
[snip]

> This e-mail message, including any attachments,

this is a public mailing list ...


The intended recipient is the public mailing list, no?

--
Angular momentum makes the world go 'round.



Re: Postgresql Split Brain: Which one is latest

2018-04-10 Thread Ron
You need to find out when the split happened, and whether each new master
has records since then.
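For example, something along these lines, run on each candidate master, can
help show which one saw the later writes (a minimal sketch: "some_app_table"
and "updated_at" are hypothetical names, and pg_last_committed_xact() only
works if track_commit_timestamp is enabled, which is off by default):

-- latest application-level write in a table of interest
SELECT max(updated_at) AS latest_app_write FROM some_app_table;

-- latest commit timestamp the server knows about
-- (requires track_commit_timestamp = on)
SELECT * FROM pg_last_committed_xact();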


On 04/10/2018 11:47 AM, Vikas Sharma wrote:
Thanks Adrian and Edison, I also think so. At the moment I have two masters:
as soon as the slave is promoted to master, it starts its own timeline, and
the application might have added data to either of them or both. The only
way to find the correct master now is to pick the instance with the most
data in its tables, which could still incur data loss. Correct me if I'm
wrong, please?


Thanks and Regards
Vikas

On Tue, Apr 10, 2018, 17:29 Adrian Klaver wrote:


On 04/10/2018 08:04 AM, Vikas Sharma wrote:
> Hi Adrian,
>
> This can be a good example: Application server e.g. tomcat having two
> entries to connect to databases, one for master and 2nd for Slave
> (ideally used when slave becomes master). If application is not able to
> connect to first, it will try to connect to 2nd.

So the application server had a way of seeing the new master(old slave),
in spite of the network glitch, that the original master database did not?

If so and it was distributing data between the two masters on an unknown
schedule, then as Edison pointed out in another post, you really have a
split brain issue. Each master would have its own view of the data and
latest update would really only be relevant for that master.

>
> Regards
> Vikas
>
> On 10 April 2018 at 15:26, Adrian Klaver  wrote:
>
>     On 04/10/2018 06:50 AM, Vikas Sharma wrote:
>
>         Hi,
>
>         We have postgresql 9.5 with streaming replication(Master-slave)
>         and automatic failover. Due to network glitch we are in
>         master-master situation for quite some time. Please, could you
>         advise best way to confirm which node is latest in terms of
>         updates to the postgres databases.
>
>
>     It might help to know how the two masters received data when they
>     where operating independently.
>
>
>         Regards
>         Vikas Sharma
>
>
>
>     --
>     Adrian Klaver
> adrian.kla...@aklaver.com 
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com 



--
Angular momentum makes the world go 'round.


Re: pg_basebackup restore a single table

2018-04-11 Thread Ron



On 04/11/2018 10:21 AM, Andreas Kretschmer wrote:



Am 11.04.2018 um 15:53 schrieb camarillo:

Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?.


No, but you can use a spare machine to restore the whole database
(point-in-time recovery) and restore the table from that using pg_dump or
copy.


Not everyone has a spare machine and a few TB lying around...

--
Angular momentum makes the world go 'round.



Re: Archiving Data to Another DB?

2018-04-11 Thread Ron



On 04/11/2018 11:15 AM, Don Seiler wrote:

Let's say I have two DBs: main (9.6.6) and archive (10.2).

I have a table in main where I want to archive data older than 60 days.
For various reasons, the table is not partitioned, so for now we must use
DELETE. The destination table in the archive DB is partitioned with the
new Pg10 partitioning.


My initial plan was to have a stored procedure on the archive DB use 
postgres_fdw to do an INSERT INTO / SELECT to select the data remotely and 
insert into the local archive table. It would then issue a single DELETE 
command to remotely remove the data from the main DB. However I found that 
doing this resulted in the main DB calling thousands (perhaps millions if 
it's one-per-row) of individual DELETE statements based on a ctid column. 
Aside from WAL behavior concerns, it is flooding my postgresql server logs 
since I log any DML.
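For reference, a minimal sketch of the approach just described, run on the
archive DB; "events", "events_archive" and the foreign table "events_remote"
are hypothetical names, with events_remote assumed to be a postgres_fdw
foreign table pointing at the main DB:

BEGIN;
-- pull the old rows across and store them locally
INSERT INTO events_archive
SELECT * FROM events_remote
WHERE  created < now() - interval '60 days';

-- this remote DELETE is what the main server ends up executing as
-- many individual ctid-based DELETEs, as described above
DELETE FROM events_remote
WHERE  created < now() - interval '60 days';
COMMIT;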


On top of that, I'm told that a remote DELETE wouldn't be transactional, 
so if I were to compare inserted rows vs deleted rows and found a 
mismatch, I couldn't just rollback the DELETE. I plan to verify this with 
a small test case later but for now I'll assume this to be true.


Right now I'm thinking of falling back to the far-less-elegant method of 
dumping the data to a flat file via COPY, running psql to connect to the 
archive DB remotely and running a COPY to load the data (or maybe 
transferring the flat file to the archive DB to load it there, offloading 
that part of the workload), then deleting the data from the main DB. I 
could capture the rows dumped in a control table and compare the rows 
deleted against that and then rollback the delete if necessary.


Like I said, not elegant, but I don't want to risk losing data that wasn't 
successfully archived to the archive DB. I'm very interested to hear what 
others might be doing for tasks like this.


It might not be elegant, but a COPY / DELETE / LOAD is granular, so you can 
restart at any point.
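A minimal sketch of that flow, with hypothetical host, database, table, and
column names, and with the cutoff pinned up front so the COPY and the DELETE
see exactly the same rows (assumes GNU date):

CUTOFF=$(date -d '60 days ago' +%F)
psql -h main_db -d prod -c "\copy (SELECT * FROM events WHERE created < '${CUTOFF}') TO '/tmp/events_${CUTOFF}.csv' CSV"
psql -h archive_db -d archive -c "\copy events_archive FROM '/tmp/events_${CUTOFF}.csv' CSV"
psql -h main_db -d prod -c "DELETE FROM events WHERE created < '${CUTOFF}'"

Each step can be re-run on its own if something fails partway through.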



--
Angular momentum makes the world go 'round.



Re: how to securely delete the storage freed when a table is dropped?

2018-04-13 Thread Ron



On 04/13/2018 12:48 PM, Jonathan Morgan wrote:
For a system with information stored in a PostgreSQL 9.5 database, in 
which data stored in a table that is deleted must be securely deleted 
(like shred does to files), and where the system is persistent even though 
any particular table likely won't be (so can't just shred the disks at 
"completion"), I'm trying to figure out my options for securely deleting 
the underlying data files when a table is dropped.


As background, I'm not a DBA, but I am an experienced implementor in many 
languages, contexts, and databases. I've looked online and haven't been 
able to find a way to ask PostgreSQL to do the equivalent of shredding its 
underlying files before releasing them to the OS when a table is DROPped. 
Is there a built-in way to ask PostgreSQL to do this? (I might just not 
have searched for the right thing - my apologies if I missed something)


A partial answer we're looking at is shredding the underlying data files 
for a given relation and its indexes manually before dropping the tables, 
but this isn't so elegant, and I'm not sure it is getting all the 
information from the tables that we need to delete.


We also are looking at strategies for shredding free space on our data 
disk - either running a utility to do that, or periodically replicating 
the data volume, swapping in the results of the copy, then shredding the 
entire volume that was the source so its "free" space is securely 
overwritten in the process.


Are we missing something? Are there other options we haven't found? If we 
have to clean up manually, are there other places we need to go to shred 
data than the relation files for a given table, and all its related 
indexes, in the database's folder? Any help or advice will be greatly 
appreciated.


I'd write a program that fills all free space on disk with a specific 
pattern.  You're probably using a logging filesystem, so that'll be far from 
perfect, though.
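A minimal sketch of that idea, assuming GNU coreutils and a hypothetical path
on the data volume; it simply appends a fixed pattern to a scratch file until
the filesystem fills, then removes it:

FILL_FILE=/var/lib/pgsql/.wipe_fill      # hypothetical path on the data volume
yes 'DEADBEEFDEADBEEF' | dd of="$FILL_FILE" bs=1M     # stops when the disk is full
sync
rm -f "$FILL_FILE"

As noted above, journaling filesystems (and SSD wear-leveling) mean this is
far from a guarantee.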


--
Angular momentum makes the world go 'round.



Re: how to securely delete the storage freed when a table is dropped?

2018-04-13 Thread Ron

After you drop a table, aren't the associated files dropped?

On 04/13/2018 02:29 PM, Ozz Nixon wrote:

There are free utilities that do government-level wipes. The process would be:
drop the table, shrink the old table space, then (if Linux based) dd-fill the
drive and use wipe, with 5x or 8x passes, to make sure the drive does not have
readable imprints on the platters.

Now what Jonathan mentions sounds like he wants to do the same to the physical
table. Never having dabbled in PSQL's storage and optimization algorithms, I
would first assume a script that does a row-by-row UPDATE of field1…fieldx with
different data patterns, using the existing field value length and the field
max length. Run the script at least 5 to 8 times, then drop the table. The
problem will be: does PSQL use a new page as you do this? If so, you are just
playing with yourself. Let alone, how does PSQL handle indexes - new pages, or
overwriting the existing page? And is any NPI (Non-Public-Info) data in the
index itself?

* So any PSQL core-engine guys reading?

O.


On Apr 13, 2018, at 3:03 PM, Ron  wrote:



On 04/13/2018 12:48 PM, Jonathan Morgan wrote:

For a system with information stored in a PostgreSQL 9.5 database, in which data stored 
in a table that is deleted must be securely deleted (like shred does to files), and where 
the system is persistent even though any particular table likely won't be (so can't just 
shred the disks at "completion"), I'm trying to figure out my options for 
securely deleting the underlying data files when a table is dropped.

As background, I'm not a DBA, but I am an experienced implementor in many 
languages, contexts, and databases. I've looked online and haven't been able to 
find a way to ask PostgreSQL to do the equivalent of shredding its underlying 
files before releasing them to the OS when a table is DROPped. Is there a 
built-in way to ask PostgreSQL to do this? (I might just not have searched for 
the right thing - my apologies if I missed something)

A partial answer we're looking at is shredding the underlying data files for a 
given relation and its indexes manually before dropping the tables, but this 
isn't so elegant, and I'm not sure it is getting all the information from the 
tables that we need to delete.

We also are looking at strategies for shredding free space on our data disk - either 
running a utility to do that, or periodically replicating the data volume, swapping in 
the results of the copy, then shredding the entire volume that was the source so its 
"free" space is securely overwritten in the process.

Are we missing something? Are there other options we haven't found? If we have 
to clean up manually, are there other places we need to go to shred data than 
the relation files for a given table, and all its related indexes, in the 
database's folder? Any help or advice will be greatly appreciated.

I'd write a program that fills all free space on disk with a specific pattern.  
You're probably using a logging filesystem, so that'll be far from perfect, 
though.

--
Angular momentum makes the world go 'round.



--
Angular momentum makes the world go 'round.



pg_dump to a remote server

2018-04-16 Thread Ron
We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump 
file will be more than 1TB, and there's not enough disk space on the current 
system for the dump file.


Thus, how can I send the pg_dump file directly to the new server while the 
pg_dump command is running?  NFS is one method, but are there others 
(netcat, rsync)?  Since it's within the same company, encryption is not 
required.
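For what it's worth, the netcat option could look roughly like this
(hypothetical host name and port; flag syntax differs between netcat
variants, and there is no encryption or integrity checking):

# on the new server: listen and write the incoming stream to disk
nc -l 9999 > /new/disk/backups/mydb.dump

# on the old server: dump straight into the network pipe
pg_dump -Fc -Z0 mydb | nc newserver 9999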


Or would it be better to install both 8.4 and 9.6 on the new server (can I 
even install 8.4 on RHEL 6.9?), rsync the live database across and then set 
up log shipping, and when it's time to cut over, do an in-place pg_upgrade?


(Because this is a batch system, we can apply the data input files to bring 
the new database up to "equality" with the 8.4 production system.)


Thanks

--
Angular momentum makes the world go 'round.



Re: pg_dump to a remote server

2018-04-16 Thread Ron

On 04/16/2018 07:47 PM, Gao Jack wrote:

-Original Message-
From: Ron 
Sent: Tuesday, April 17, 2018 7:59 AM
To: pgsql-general 
Subject: pg_dump to a remote server

We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump
file will be more than 1TB, and there's not enough disk space on the current
system for the dump file.

Thus, how can I send the pg_dump file directly to the new server while the
pg_dump command is running?  NFS is one method, but are there others
(netcat, rsync)?  Since it's within the same company, encryption is not
required.

Or would it be better to install both 8.4 and 9.6 on the new server (can I
even install 8.4 on RHEL 6.9?), rsync the live database across and then set
up log shipping, and when it's time to cut over, do an in-place pg_upgrade?

(Because this is a batch system, we can apply the data input files to bring
the new database up to "equality" with the 8.4 production system.)

Thanks

--
Angular momentum makes the world go 'round.

Hi

https://www.postgresql.org/docs/current/static/backup-dump.html#BACKUP-DUMP-RESTORE

...
...

The ability of pg_dump and psql to write to or read from pipes makes it 
possible to dump a database directly from one server to another, for example:

pg_dump -h host1 dbname | psql -h host2 dbname


But that assumes --format=plain which will send a whole lot of uncompressed 
text across the wire.
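One way to keep that single pipeline while compressing in transit (host names
hypothetical) would be something like:

pg_dump -h host1 dbname | gzip | ssh host2 'gunzip | psql dbname'

(or simply use ssh -C for transparent compression on the link).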


--
Angular momentum makes the world go 'round.



Re: pg_dump to a remote server

2018-04-16 Thread Ron



On 04/16/2018 07:18 PM, Adrian Klaver wrote:

On 04/16/2018 04:58 PM, Ron wrote:
We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump 
file will be more than 1TB, and there's not enough disk space on the 
current system for the dump file.


Thus, how can I send the pg_dump file directly to the new server while 
the pg_dump command is running?  NFS is one method, but are there others 
(netcat, rsync)?  Since it's within the same company, encryption is not 
required.


Maybe?:

pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'


That looks promising.  I could then "pg_restore -jX".
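That is, something like this on the receiving side once the custom-format
file has landed (names taken from the example above; a sketch, not a tested
command line):

pg_restore -j 4 -d test test_cat.out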

--
Angular momentum makes the world go 'round.



Re: pg_dump to a remote server

2018-04-17 Thread Ron

On 04/16/2018 11:07 PM, Adrian Klaver wrote:

On 04/16/2018 06:43 PM, Ron wrote:



On 04/16/2018 07:18 PM, Adrian Klaver wrote:

On 04/16/2018 04:58 PM, Ron wrote:
We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The 
dump file will be more than 1TB, and there's not enough disk space on 
the current system for the dump file.


Thus, how can I send the pg_dump file directly to the new server while 
the pg_dump command is running?  NFS is one method, but are there 
others (netcat, rsync)?  Since it's within the same company, encryption 
is not required.


Maybe?:

pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'


That looks promising.  I could then "pg_restore -jX".


More promising would be the suggestion from Michael Nolan:

https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com 



"Can you run pg_dump on the new server, connecting remotely to the current 
one?"


It eliminates two programs(ssh and cat) and a pipe.


Is that supported?
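For illustration, Michael's suggestion would look something like the line
below, run on the new server (host and database names hypothetical); a newer
pg_dump can generally dump from older server versions, with the exact
supported range spelled out in each release's pg_dump documentation:

# run on the new (9.6) server, connecting back to the old 8.4 server
pg_dump -h old-server -p 5432 -U postgres -Fc olddb > /local/disk/backups/olddb.dump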

--
Angular momentum makes the world go 'round.



A couple of pg_dump questions

2018-04-19 Thread Ron


$ pg_dump --host=farawaysrvr -Fc $REMOTEDB > /local/disk/backups/$REMOTEDB.dump

Is the data compressed on the remote server (thus minimizing traffic on the 
wire), or locally?  (I'd test this myself, but the company has really strict 
firewall rules in place.)


$ pg_dump --host=farawaysrvr -Fc -j4 $REMOTEDB > 
/local/disk/backups/$REMOTEDB.dump


Will parallel backups work if pg_dump is v9.6 and the remote system is v8.4?

Thanks

--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 03:55 PM, Vick Khera wrote:
On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:


Hello Guys,

Could someone throw light on the postgresql instance wide or database
wide encryption please? Is this possible in postgresql and been in use
in production?.


For anyone to offer a proper solution, you need to say what purpose your 
encryption will serve. Does the data need to be encrypted at rest? Does it 
need to be encrypted in memory? Does it need to be encrypted at the 
database level or at the application level? Do you need to be able to 
query the data? There are all sorts of scenarios and use cases, and you 
need to be more specific.


For me, using whole-disk encryption solved my need, which was to ensure 
that the data on disk cannot be read once removed from the server.


Someone really needs to explain that to me. My company-issued laptop has
WDE, and that's great for when the machine is shut down and I'm carrying it
from place to place, but when it's running, all the data is transparently
decrypted for every process that wants to read it, including malware and
industrial spies.


Thus, unless you move your DB server on a regular basis, I can't see the 
usefulness of WDE on a static machine.


For certain fields in one table, I use application level encryption so 
only the application itself can see the original data. Anyone else 
querying that table sees the encrypted blob, and it was not searchable.


--
Angular momentum makes the world go 'round.


Re: Postgresql database encryption

2018-04-20 Thread Ron



On 04/20/2018 06:11 PM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

On 04/20/2018 03:55 PM, Vick Khera wrote:

On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma 
Someone really needs to explain that to me. My company-issued laptop has
WDE, and that's great for when the machine is shut down and I'm carrying it
from place to place, but when it's running, all the data is transparently
decrypted for every process that wants to read the data, including malware,
industrial spies,

Thus, unless you move your DB server on a regular basis, I can't see the
usefulness of WDE on a static machine.

The typical concern (aka, attack vector) isn't around moving the DB
server on a regular basis or about someone breaking into your data
center and stealing your drives, it's making sure that disposal of
equipment doesn't result in valuable data being retained on the
drives when they leave the data center for replacement or disposal.


That makes some sense, but years of added CPU overhead to mitigate a problem 
that could be solved by writing zeros to the disk as a step in the decomm 
process seems more than a bit wasteful.
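For example, that decommissioning step could be as simple as the following
(assuming GNU coreutils; /dev/sdX is a placeholder for the drive actually
being retired):

shred -v -n 1 -z /dev/sdX        # one random pass, then a final pass of zeros
# or, zeros only:
dd if=/dev/zero of=/dev/sdX bs=4M status=progress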



--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 10:24 AM, Vikas Sharma wrote:

Hello Guys,

Could someone throw light on the postgresql instance wide or database wide 
encryption please? Is this possible in postgresql and been in use in 
production?.


What about encrypted backups?

--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Ron


Also, Percona (a MySQL fork) 5.7.

On 04/20/2018 07:31 PM, Ozz Nixon wrote:
PS. the following database servers do offer internal encryption on a 
page/block oriented read/write (for encrypted data at rest security 
requirements)


PremierSQL TDE
MariaDB 10.1.3+
*MySQL* 5.7.11+
Microsoft uses TDE
Oracle AdvSec uses TDE
DB2 v7.2 UDB
MongoDB uses AES-256
PostgreSQL does - but the key is public (dumb) 
https://www.postgresql.org/message-id/ca%2bcsw_tb3bk5i7if6inzfc3yyf%2b9hevnty51qfboeuk7ue_v%...@mail.gmail.com


Just because you do not see the reason for it, does not make the reason a 
bad idea.


On Fri, Apr 20, 2018 at 8:19 PM Ozz Nixon wrote:


Well, actually, since 2003 this has been a standard requirement from
the credit card industry. And it does make sense: even "while at rest",
the data still cannot be accessed.

Requirement 1. No NPI data should be displayed without controls - e.g.
reports, PDF, etc.
Requirement 2. Same data, must be secured during transmission -
fetching to client screen etc.
Requirement 3. NPI data should not be logged nor stored on a physical
device in non-encrypted mode.

There are more steps to this, but chalking it off as another half-assed
requirement is typical. Hashing is a useful one-way technique; however,
if the hash itself is captured, hashing becomes useless! When I worked
for the credit bureaus we ran encrypted drive arrays, DB/2 encryption,
SSL/TLS encryption over P2P VPN connections, and masked output fields
when the data would go to reports or screens on PCs outside our control.

Anyone with Linux can use LUKS encryption on an LVM partition to
achieve security where the database may not, or where logs or other
files may exist in which NPI might be seen. Oh yeah, NPI (Non-Public
Information): like your social, your bank account, your paycheck
information, etc., things that should not exist outside of controls...

PS. You cannot simply take a drive from one machine to another, when
doing proper RAID and LUKS encryption.

Ozz
15 years experience with federal data security requirements.

On Fri, Apr 20, 2018 at 7:55 PM Tim Cross  wrote:


Vikas Sharma  writes:

> Hello Guys,
>
> Could someone throw light on the postgresql instance wide or
database wide
> encryption please? Is this possible in postgresql and been in use in
> production?.
>
> This is a requirement in our production implementation.
>

This sounds like a lazy management requirement specified for 'security'
purposes by people with little understanding of either technology or
security. I suspect it comes from a conversation that went along the
lines of

"There has been lots in the news about cyber threats"

"Yes, we need our system to be secure"

"I know, lets make one of the requirements that everything must be
encrypted, that will stop them"

"Great idea, I'll add it as requirement 14".

This is a very poor requirement because it is not adequately
specified,
but more critically, because it is specifying a 'solution' rather than
articulating the requirement in a way which would allow those with the
necessary expertise to derive an appropriate solution - one which
may or
may not involve encryption or hashing of data and which may or may not
be at the database level.

What you really need to do is go back to your stakeholders and ask
them
a lot of questions to extract what the real requirement is. Try to
find
out what risk they are trying to mitigate with encryption. Once
this is
understood, then look at what the technology can do and work out the
design/implementation from there.

It is extremely unlikely you just want all the data in the database
encrypted. When you think about it, such an approach really
doesn't make
sense. In basic terms, if the data is encrypted, the database engine
will need to be able to decrypt it in order to operate (consider how a
where clause needs to be able to interpret actions etc). If the db can
read the data, the keys must be in the database. If the keys are
in the
database and your database is compromised, then your keys are
compromised. So provided you protect your database from
compromise, you
achieve the same level of security as you do with full data encryption
EXCEPT for access to the underlying data files outside of the database
system. For this, you will tend to use some sort of file system
encryption, which is typically managed at the operating system
level. Again, for the operating system to 

Re: Rationale for aversion to the central database?

2018-04-27 Thread Ron



On 04/27/2018 05:52 PM, g...@luxsci.net wrote:


On April 24, 2018 07:27:59 am PDT, "Sam Gendler"  wrote:
On Sun, Apr 8, 2018 at 15:37 g...@luxsci.net wrote:



On April 8, 2018 02:40:46 pm PDT, "Guyren Howe"  wrote:

One advantage to using logic and functions in  the db is that you can
fix things immediately without having to make new application builds.
That in itself is a huge advantage, IMO.

I doubt most of us would consider this any kind of advantage outside of 
the momentary temptation to do it when an app is completely broken and 
needs to be up in a hurry. Application changes, whether in the dB or in 
application logic, need to be tested, and they need to be revision 
controlled and released in a manner that can be easily rolled back in an 
automated manner. The fact that putting logic in the database can 
effectively allow developers to make unreleased changes to production apps 
is specifically one of the problems that I am trying to avoid when I keep 
most logic in the app instead of the dB. It’s a whole lot harder to make 
arbitrary manual changes to code in the app, whether interpreted or 
compiled, if it is running inside a container that cannot be updated. Even 
if you go in with a shell and update an interpreted file, the next time 
that container is launched the change will be lost, which is usually 
sufficient motivation to keep devs from doing that kind of thing.

I’ll put some things in the db, either for performance or because I want 
that logic to be built into the data and not be part of the application, 
but I choose those contexts carefully and I write them in as portable a 
manner as possible. And for those who say migrations don’t happen, I’ve 
certainly been through a few, usually as part of an acquisition or the 
like, but sometimes simply because another dB server better meets our 
needs after a time. And migrating stored procs can be really difficult. 
Such code usually has less complete unit and integration tests, which 
makes validating those changes more difficult, too.

But the biggest reason is that databases often have to scale up rather 
than out, so keeping as much logic in the application code allows my 
scaling requirements for the dB server to be as minimal as possible. Sure, 
there are workloads where pushing raw data across the wire will be more 
work than processing it in the dB, and in those cases, I may do that, but 
I consider it premature optimization to just assume that is necessary 
without hard evidence from production examples to suggest otherwise.

Finally, there’s the consistency argument. I want to find all of the logic 
in one place. Either entirely in the source code or entirely in the dB. 
Having to trace things from the code to the dB and back again can make it 
a whole lot harder to see, at a glance, what is happening in the code. 
Having logic in the dB also means it can be difficult or impossible to 
have two releases talking to the same schema version at the same time - so 
canary builds and rolling deployments can be difficult. Of course, schema 
changes can cause this problem, regardless of whether there are stored 
procs, but the more of your logic that lives in the db, the more likely it 
is that your releases will conflict over the db. So I’m more likely to be 
able to do a rolling release if I keep the db as a dumb data store and 
keep logic in the application code.


===

I could have worded that better but I think that we're coming at it from 
different directions. You think of your application as the "master" 
operator. I think of a PG db as the "master", not a slave. I believe that 
we shouldn't _have_ to use an external application for the database to be 
useful and coherent.  I like to think of external applications as 
subservient to the db and not the other way around. Yeah, I know, probably 
not a popular viewpoint.


Sorry, I don't really understand why it would be so hard to migrate, say 
pl/pgsql functions. You can maybe expect to write some likely convoluted 
application code, though. :) Reusable functions in the db that are solid 
also means that developers don't have to reinvent the wheel in whatever 
language and debugging also becomes simpler.


And it's not like the developers don't rewrite the code every time they 
migrate to the Latest and Greatest Language...



--
Angular momentum makes the world go 'round.


Re: CSVQL? CSV SQL? tab-separated table I/O? RENAME COLUMN

2018-05-02 Thread Ron

On 05/02/2018 04:49 PM, David G. Johnston wrote:
[snip]


- the microsoft patented CSV would be required for implementation. it
handles special data with commas and double-quotes in them


If true this seems like a show-stopper to anything PostgreSQL would implement


If MSFT really holds a patent on the CSV format, then Postgresql is already 
in a world of hurt.


--
Angular momentum makes the world go 'round.


pg_dump with compressible and non-compressible tables

2018-05-05 Thread Ron

Hi,

v9.6

We've got big databases where some of the tables are highly compressible, 
but some have many bytea fields containing PDF files.


When the data format is custom, directory or tar, how feasible would a
"--no-blob-compression" option be (where pg_dump tells the zlib library to
just Store tables with bytea columns, while compressing all other tables
at the specified -Z level)?


Thanks

--
Angular momentum makes the world go 'round.



Re: pg_dump with compressible and non-compressible tables

2018-05-05 Thread Ron

On 05/05/2018 12:13 PM, Adrian Klaver wrote:

On 05/05/2018 07:14 AM, Ron wrote:

Hi,

v9.6

We've got big databases where some of the tables are highly compressible, 
but some have many bytea fields containing PDF files.


Can you see a demonstrable difference?


Very much so.  The ASCII hex representations of the PDF files are 
compressible, but take a *long* time to compress. Uncompressed backups are 
50% faster.




When the data format is custom, directory or tar, how feasible would a 
"--no-blob-compression" option be (where pg_dump just tells the zlib 
library to just Store tables with bytea columns, while compressing all 
other tables at the specified -Z level)?


In pg_dump blob refers to large objects:

https://www.postgresql.org/docs/10/static/app-pgdump.html
"
-b
--blobs

    Include large objects in the dump. This is the default behavior except 
when --schema, --table, or --schema-only is specified. The -b switch is 
therefore only useful to add large objects to dumps where a specific 
schema or table has been requested. Note that blobs are considered data 
and therefore will be included when --data-only is used, but not when 
--schema-only is.

"

These are different critters then bytea.


Ok.  I need the data in my backups anyway, so excluding them is 100% 
contrary to what I need.


--
Angular momentum makes the world go 'round.


Re: Enhancement to psql command, feedback.

2018-05-09 Thread Ron

On 05/09/2018 02:59 AM, John McKown wrote:
I just wanted to throw this out to the users before I made a complete fool 
of myself by formally requesting it. But I would like what I hope would be 
a minor change (enhancement) to the psql command. If you look on this 
page, https://wiki.postgresql.org/wiki/Shared_Database_Hosting ,

you will see a number of example which look like:

psql -U postgres template1 -f - << EOT

REVOKE ALL ON DATABASE template1 FROM public;
REVOKE ALL ON SCHEMA public FROM public;
GRANT ALL ON SCHEMA public TO postgres;
CREATE LANGUAGE plpgsql;

EOT

To me this looks similar to a UNIX shell script.


Because it *is* a Unix shell script.  The "<< EOT" is part of a heredoc, 
which is designed to keep everything in one place instead of needing a 
second file for the SQL commands.


https://en.wikipedia.org/wiki/Here_document

(The concept is as old as computing.  Anyone who's worked on mainframes or 
proprietary minicomputers from DEC will instantly recognize it.)


--
Angular momentum makes the world go 'round.


Re: posgresql.log

2018-05-21 Thread Ron

On 05/21/2018 04:40 PM, Bartosz Dmytrak wrote:


Hi Gurus,

Looking into my postgresql.log on one of my test servers, I found a scary entry:

--2018-05-19 05:28:21-- http://207.148.79.161/post0514/post

Connecting to 207.148.79.161:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 1606648 (1.5M) [application/octet-stream]

Saving to: ‘/var/lib/postgresql/10/main/postgresq1’

0K .. .. .. .. ..  3% 71.0K 21s

    50K .. .. .. .. ..  6% 106K 17s

   100K .. .. .. .. ..  9% 213K 13s

   150K .. .. .. .. .. 12% 213K 11s


[snip]

1500K .. .. .. .. .. 98% 11.8M 0s

  1550K ..  100% 12.5M=2.6s

2018-05-19 05:28:25 (598 KB/s) - ‘/var/lib/postgresql/10/main/postgresq1’ 
saved [1606648/1606648]


The downloaded file is not postgresql but postgresq1 (with a digit one).

It was a pure pg installation without any contrib modules, addons, etc.,
installed on an Ubuntu box by the apt manager using these repos:


http://apt.postgresql.org/pub/repos/apt xenial-pgdg/main

http://apt.postgresql.org/pub/repos/apt xenial-pgdg

I have never seen such an entry on any of my other servers…

Could you be so kind as to explain to me what it is? I am afraid my postgres
has been hacked.




This looks like what happens when the adobe flash player package downloads 
the closed-source binary installer.  Thus, I wouldn't be surprised if the 
repository package isn't downloading the installation binaries from 
http://207.148.79.161/post0514/post.


--
Angular momentum makes the world go 'round.


Re: [GENERAL] Postgre compatible version with RHEL 7.5

2018-05-23 Thread Ron



On 05/23/2018 08:13 AM, Adrian Klaver wrote:

On 05/23/2018 03:59 AM, Deepti Sharma S wrote:

Hi David,

“9.6.6 is compatible but not supported” - what does this mean?

For details see:

https://www.postgresql.org/support/versioning/

Basically it is supported by the community, but keeping up to date with 
the latest minor release(9.6.9) is strongly advised. The bigger issue is 
that the repo is going to be at the latest release.


Not supported because only 9.6.9 is supported?

--
Angular momentum makes the world go 'round.



Re: computing z-scores

2018-05-24 Thread Ron

On 05/24/2018 10:15 AM, Martin Mueller wrote:

You construct a z-score for a set of values by subtracting the average from the 
value and dividing the result by the standard deviation. I know how to do this 
in a two-step procedure. First, I compute the average and standard deviation. 
In a second run I use the formula and apply it to each value.

Is there a way of doing this in a single-step procedure or can you chain the 
two parts together in one query?  This goes beyond my SQL competence.


What about this, using window functions so the average and standard deviation
are computed in the same pass as the z-score?

SELECT value,
       (value - AVG(value) OVER ()) / stddev(value) OVER () AS zvalue
FROM sometable
WHERE some conditions


--
Angular momentum makes the world go 'round.



Re: Question on disk contention

2018-05-31 Thread Ron

On 05/31/2018 08:52 AM, Melvin Davidson wrote:



On Thu, May 31, 2018 at 1:13 AM, Charles Clavadetscher  wrote:


Hi Melvin

As an answer to a previous post you wrote:

"Also, your main problem is that when you have two exact same queries
executing at the same time, they will cause contention in
the disk, and neither one will make much progress."

Could you elaborate a little more on the meaning of "contention in the
disk"?
What is it that happens?

Thank you and have a good day.
Regards
Charles



>Could you elaborate a little more on the meaning of "contention in the disk"?
>What is it that happens?

To simplify, you have two users/jobs, both wanting the exact same 
information. So the system instructs the disk to get
that information from the disk, which causes the disk head to "seek" to 
the position of the first eligible row and
continues positioning to other eligible rows. Now the job is not 
exclusive, so the system temporarily switches to the
other job, which causes the disk to go back to the first row and work from 
there. The switching back and forth continues,
so that instead of one job finishing quickly, they both have to take turns 
waiting for needed information. That takes

a lot longer,

Try this, Select a table that has a lot of rows, ideally 1M+. Then start a 
query with a WHERE clause and see how long
it takes. Then submit the same query from 5 separate connections 
simultaneously and see how long that takes.


Why isn't the OS caching the disk blocks, and why isn't Postgres using the 
cached data?



--
Angular momentum makes the world go 'round.


Re: Insert UUID GEN 4 Value

2018-05-31 Thread Ron

On 05/31/2018 07:39 PM, tango ward wrote:
On Thu, May 31, 2018 at 12:32 PM, tango ward wrote:



On Thu, May 31, 2018 at 12:18 PM, David G. Johnston  wrote:

On Wednesday, May 30, 2018, tango ward  wrote:

Okay I will try it.


When I tried it, I am getting an error: Invalid input syntax
for UUID: uuid_generate_v4(),


Avoid references to "it" and just show the code you tried to run.

David J.



I'm testing this code:

curr_pgsql.execute('''CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; ''')
curr_pgsql.execute('''
    INSERT INTO enrollmentinfo (
created, modified,
secure_id,
relationship,
tuition_bill,
cashier_name,
cashier_phone_number
)
    VALUES (current_timestamp, current_timestamp,
    uuid_generate_v4(), '', '', '',)
''')




Hi, this is the query that I am trying to run. I'll try your suggestions
when I arrive at the office.


Honestly, I'd try it in psql first.  That will eliminate all possible 
problems due to quoting errors, etc.
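Something along these lines, using the column list from the snippet above
(note the value list here supplies one value per listed column; the snippet
above has a trailing comma and only six values for seven columns):

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

INSERT INTO enrollmentinfo (created, modified, secure_id, relationship,
                            tuition_bill, cashier_name, cashier_phone_number)
VALUES (current_timestamp, current_timestamp, uuid_generate_v4(),
        '', '', '', '');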


--
Angular momentum makes the world go 'round.


Re: Whither 1:1?

2018-06-01 Thread Ron

On 06/01/2018 12:25 PM, Guyren Howe wrote:
On Jun 1, 2018, at 10:16, Olivier Gautherot wrote:


You will get a benefit in terms of space only if the optional fields in 
the second table exist in a reduced number of instances - and the second 
table is significantly wider. This can make a difference on big tables 
but this gain may be offset by the cost of the join. In this perspective, 
I don’t think that there is a clear benefit or drawback: it should be 
evaluated on a case-by-case basis.


It seems to me that people take time to *catch up with modern hardware 
reality. SSDs reduce seek time to virtually zero.* Surely, joins are now 
much, much cheaper. If so, I’m inclined to describe wide tables as a 
premature optimization.


Sure, SSDs are uber-wonderful, but a rack full of rotating media is still 
going to be a lot cheaper and have a lot more capacity than a rack full of 
SSDs, and that makes all the difference...


--
Angular momentum makes the world go 'round.


Re: Code of Conduct plan

2018-06-03 Thread Ron

On 06/03/2018 04:54 PM, Berend Tober wrote:

Tom Lane wrote:

Two years ago, there was considerable discussion about creating a
Code of Conduct for the Postgres community...

We are now asking for a final round of community comments...


I really like that this was included: "Any allegations that prove not to 
be substantiated...will be viewed as a serious community offense and a 
violation of this Code of Conduct."


Good attempt to prevent the CoC being used as vindictive weaponry.


But a futile attempt: "A lie can travel half way around the world while the 
truth is putting on its shoes."



--
Angular momentum makes the world go 'round.



Re: How to get postmaster shut down time in postgres?

2018-06-04 Thread Ron

On 06/04/2018 08:44 AM, pavan95 wrote:

Hi Adrian/Melvin,

Thanks for your prompt replies. Yeah, I'm aware of that way.

But my requirement is to get the server shutdown time whenever that event
occurs and insert into a table dynamically!! Is it possible?


You want to trap the shutdown action and write to an audit table just before 
it happens?


--
Angular momentum makes the world go 'round.



Re: Code of Conduct plan

2018-06-04 Thread Ron

On 06/03/2018 07:57 PM, Jonathan S. Katz wrote:
[snip]

Anyway, a big +1 to the effort of everyone who worked on the CoC for
the past several years. From feedback in other forums through the years,
I know it does make a difference to have a code of conduct in terms of
helping people to feel more welcome and knowing that there is an
avenue for them to voice feedback in the case of an unfortunate incident.


How will New Users know that the CoC exists, much less read it? Will there 
be a click-through "you must read and accept the CoC before being allowed to 
join a mailing list"?  What about the people already on the mailing list?



--
Angular momentum makes the world go 'round.



Re: Code of Conduct plan

2018-06-04 Thread Ron
If there's been so much Bad Behavior that's so Weakened the Community, then 
someone's done an excellent job of hiding that Bad Behavior.


On 06/04/2018 09:57 AM, Evan Macbeth wrote:
I just want to chime in and thank all those who worked on this Code of 
Conduct. It's well thought out, and I'm personally very glad to see it. I 
think this just makes our community and its work stronger. I strongly 
support it being put into effect.


Evan Macbeth

On Sun, Jun 3, 2018 at 2:29 PM, Tom Lane wrote:


Two years ago, there was considerable discussion about creating a
Code of Conduct for the Postgres community, as a result of which
the core team announced a plan to create an exploration committee
to draft a CoC [1].  That process has taken far longer than expected,
but the committee has not been idle.  They worked through many comments
and many drafts to produce a version that seems acceptable in the view
of the core team.  This final(?) draft can be found at

https://wiki.postgresql.org/wiki/Code_of_Conduct


We are now asking for a final round of community comments.
Please send any public comments to the pgsql-general list (only).
If you wish to make a private comment, you may send it to
c...@postgresql.org .

The initial membership of the CoC committee will be announced separately,
but shortly.

Unless there are substantial objections, or nontrivial changes as a result
of this round of comments, we anticipate making the CoC official as of
July 1 2018.

                        regards, tom lane

[1]
https://www.postgresql.org/message-id/56a8516b.8000...@agliodbs.com





--
Evan Macbeth - Director of Support - Crunchy Data
+1 443-421-0343 - evan.macb...@crunchydata.com 



--
Angular momentum makes the world go 'round.


Re: Pgagent is not reading pgpass file either in Windows or Linux.

2018-06-04 Thread Ron
I've noticed that .pgpass is case sensitive, so I'm not surprised that it
also wouldn't note the difference between 127.0.0.1 and localhost.


On 06/04/2018 05:31 PM, nageswara Bandla wrote:

I have figured out the issue with pgAgent both in Windows and Linux.

PgAgent seems to ignore pgpass.conf/.pgpass whenever it has 127.0.0.1
(127.0.0.1:5432:*:postgres:postgres); it throws an error:


*DEBUG: Creating DB connection: user=postgres port=5432 hostaddr=127.0.0.1 
dbname=linuxpostgresdb*


*WARNING: Couldn't create the primary connection (attempt 1): fe_sendauth: 
no password supplied*



The solution could be to update .pgpass to have
(localhost:5432:*:postgres:postgres), and then pgagent works fine without
issues.



I think pgagent is not in line with libpq.dll when passing the host address
parameter. I have raised this concern on the pgagent GitHub, noting exactly
where they need to change the code in order for pgagent to be in line with
the psql program.


https://github.com/postgres/pgagent/issues/14


On Fri, Jun 1, 2018 at 9:43 AM, nageswara Bandla > wrote:




On Thu, May 31, 2018 at 5:16 PM, George Neuner  wrote:

On Thu, 31 May 2018 15:40:21 -0500, nageswara Bandla  wrote:

>On Thu, May 31, 2018 at 12:57 PM, George Neuner  wrote:
>
>> It just occurred to me that you said PGPASSFILE was set to
>>
>>     %APPDATA%/postgresql/pgpass.conf
>>
>>
>> The problem may be that when LocalSystem expands %APPDATA%, it is
>> finding its own directory, which might be any of:
>>
>>    C:\Windows\ServiceProfiles\LocalService\appdata
>>    C:\Windows\System32\config\systemprofile\AppData
>>    C:\Windows\SysWOW64\config\systemprofile\AppData
>>
>> depending on your Windows version, policies (if any), and
whether the
>> executable is 32 or 64 bit.
>>
>>
>> I wouldn't try messing with any of these directories. Instead try
>> setting PGPASSFILE to the full path to your file.
>>
>>
>I have tried all of them, pgagent is not recognizing any of the above
>locations. In fact, I have tried both options
>
> #1. By defining PGPASSFILE to the above locations one after the
other.
> #2. By copying pgpass.conf to all the three locations by creating
>Roaming/postgresql directories.
>
>And also I have defined PGPASSFILE=C:\pgpass.conf; I think, this
should be
>accessible to any system account. This also not working.


One more stupid question and then I'm out of ideas ...


Have you rebooted after changing the environment variable?

Global environment changes normally don't take effect until the user
logs out/in again.  LocalSystem is not an interactive user - you have
to restart the system to let it see environment changes.  PITA.


Yes, I did. But no luck. I guess we have to live with this problem
for pgagent running as a Local System account.
We need to run the pgagent service as a "logon user account" and provide
user logon credentials for running the pgagent service.

In the Linux case, pgagent is not even reading .pgpass itself. The issue
here is that the logs (debug-level) are no help. They don't have
much information about which password file it is trying to read.




George






--
Angular momentum makes the world go 'round.


Re: Code of Conduct plan

2018-06-07 Thread Ron

On 06/07/2018 04:55 AM, Gavin Flower wrote:
[snip]
The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!


"You" are wearing a tee-shirt (or hoodie), blue jeans and Nikes, while 
eating a fast food hamburger, drinking a Coke, listening to rock and roll, 
emailing us over the Internet from your Mac, thinking all Men are created 
equal, and feeling glad that NZ isn't an English colony.


That kind of cultural dominance makes one think the US truly is exceptional.

--
Angular momentum makes the world go 'round.



Re: Code of Conduct plan

2018-06-07 Thread Ron




On 06/08/2018 12:09 AM, Gavin Flower wrote:

On 08/06/18 16:55, Ron wrote:

On 06/07/2018 04:55 AM, Gavin Flower wrote:
[snip]
The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!


"You" are wearing a tee-shirt (or hoodie), blue jeans and Nikes, while 
eating a fast food hamburger, drinking a Coke, listening to rock and 
roll, emailing us over the Internet from your Mac, thinking all Men are 
created equal, and feeling glad that NZ isn't an English colony.


That kind of cultural dominance makes one think the US truly is exceptional.

Only two of those things you said about me are currently true, and some 
are never true.


That's why I put "you" in quotes.



Perhaps accusing someone of being a Mac user should be banned by the CoC?


--
Angular momentum makes the world go 'round.



Re: Can you make a simple view non-updatable?

2018-06-08 Thread Ron



On 06/08/2018 04:17 AM, Ryan Murphy wrote:


maybe it is time to overhaul the security concept.


I could see how I could revoke permissions from, say, all users that 
aren't superusers to INSERT or UPDATE certain views.  However, if possible 
it would be nice to get an error message about the VIEW not being 
updatable, rather than a user access error, which could be misleading.


When I try to insert into a non-updatable VIEWs, I get this message:

ERROR:  cannot insert into view "test_view"
DETAIL:  Views containing GROUP BY are not automatically updatable.
HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT 
trigger or an unconditional ON INSERT DO INSTEAD rule.


It would be great to see something like this when trying to insert into a 
simple VIEW that I had made non-updatable:


ERROR:  cannot insert into view "test_view2"
DETAIL:  This view has manually been made non-updatable.


Something like CREATE READ ONLY VIEW test_view2 AS SELECT 

--
Angular momentum makes the world go 'round.


Re: What does Natvie Posgres mean?

2018-06-12 Thread Ron
This, to me, is the true meaning of "native PostgreSQL" (as opposed to
"stock PostgreSQL", which is uncustomized code).  However, if the job posting
was written by an HR flunky, it could mean anything.



On 06/12/2018 01:11 PM, Benjamin Scherrey wrote:
In my experience it refers to *development directly via SQL against the 
Postgres server* rather than via an ORM like Django or the like. A 
remarkably high percentage of applications backed by Postgres have been 
written by developers that have never actually seen or written SQL code 
directly. It's all generated (often quite naively) by the object 
relational mapper. Requesting "native" developers means that they want you 
to understand how the DB actually behaves and to be able to generate 
optimal SQL code and proper DDLs that fit the application domain correctly.


  - - Ben Scherrey

On Wed, Jun 13, 2018, 12:59 AM David G. Johnston  wrote:


On Tuesday, June 12, 2018, bto...@computer.org  wrote:


When someone, e.g., as appeared in a recent and some older
pgsql-jobs messages, says "Native Postgres", what do you suppose
that means?

Does it mean something different than just "PostgreSQL"?


Likely it means the open source product built directly from the source
code published here (or packages derived there-from).  As opposed to
say AWS RDS or EnterpriseDB or various other forks of the product
available in the wild.

David J.



--
Angular momentum makes the world go 'round.


Re: PostgreSQL Volume Question

2018-06-19 Thread Ron

On 06/15/2018 11:26 AM, Data Ace wrote:


Well, I think my question strayed somewhat from my intention because of my
poor understanding and questioning :(


Actually, I have 1TB of data and hardware specs sufficient to handle this
amount of data, but the problem is that it needs too many join operations
and the analysis process is going too slowly right now.


I've searched and found that the graph model fits nicely for network data,
like social data, in terms of query performance.




If your data is hierarchical, then storing it in a network database is
perfectly reasonable.  I'm not sure, though, that there are many network
databases for Linux.  Raima is the only one I can think of.


Should I change my DB (I mean my DB for analysis)? or do I need some other 
solutions or any extension?



Thanks



--
Angular momentum makes the world go 'round.


Re: using pg_basebackup for point in time recovery

2018-06-20 Thread Ron

On 06/21/2018 12:27 AM, Michael Paquier wrote:
[snip]

Attached is a patch which includes your suggestion.  What do you think?
As that's an improvement, only HEAD would get that clarification.


You've *got* to be kidding.

Fixing an ambiguously or poorly worded bit of *documentation* should 
obviously be pushed to all affected versions.


--
Angular momentum makes the world go 'round.


Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 07:48 AM, hmidi slim wrote:

Hi,
I need to use a conditional expression in my query, so I want to make a
query like this:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id

However, in the documentation I found that CASE returns a value, whereas
in my case I want to return a select statement.

How can I use a conditional expression in a SQL query?
Best Regards.
Best Regards.


The CASE clause is used to return one of many choices.  Based on this 
example, you need to do this:


select numberOfPremiumDays,
   product_id,
   price
   from product
where occupation_type_id = 1
  and  numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - 
('2018-11-01'::timestamp))




--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need to use CASE because, based on numberOfPremiumDays, there
are different types of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - 
('2018-11-01'::timestamp)) then

       premium_price
   else
   period_price
   end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Ron




On 07/04/2018 05:08 PM, Adrian Klaver wrote:

On 07/04/2018 03:03 PM, Ron wrote:

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need the use of case because based on the 
numberOfPremiumDays there are different type of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

        premium_price
    else
    period_price
    end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


The issue with the above is that table changes from product to product1 in 
the OP's desired behavior so the price switch alone will not work:(


Ah, didn't notice that.  Then... dynamic sql constructed by the programming 
language executing the query?
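A minimal sketch of that idea in Python with psycopg2 (table and column
names taken from the thread; the connection and the premium-days comparison
are assumptions):

import psycopg2

def fetch_prices(conn, number_of_premium_days, premium_period_days):
    # choose the table and price column in application code...
    if number_of_premium_days == premium_period_days:
        table, price_col = "product", "premium_price"
    else:
        table, price_col = "product1", "classic_price"
    # ...but keep the data value parameterized
    query = ("SELECT product_id, {col}, period_price "
             "FROM {tbl} WHERE occupation_type_id = %s").format(col=price_col, tbl=table)
    with conn.cursor() as cur:
        cur.execute(query, (1,))
        return cur.fetchall()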



--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:21 PM, Christopher Browne wrote:

I have built one that I call Mahout
(https://github.com/cbbrowne/mahout) which has the merit of involving
just two shell scripts, one of which is an auditing tool (pgcmp).

It implements a "little language" to indicate dependencies between the
SQL scripts that implement the DDL changes.

The notable thing that I have seen "downthread" that it does not
attempt to implement is "rollback scripts."  I find that "necessity"
to be a ruby-on-rails dogma that does not seem to fit what I see
people doing.


Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:39 PM, Ravi Krishna wrote:

Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


Liquibase provides that ability, but IMO rollback for an RDBMS is always a
bit tricky.  Certain DDL operations can take a long time if they involve
a table rewrite.  PG is actually better than others.


Yeah, that's true.  Sometimes I just dump the whole table, and reload if a 
rollback is necessary.




--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 04:10 PM, Christopher Browne wrote:
[snip]

ITIL surely does NOT specify the use of database rollback scripts as
THE SPECIFIED MECHANISM for a backout procedure.

In practice, we tend to take database snapshots using filesystem
tools, as that represents a backout procedure that will work regardless
of the complexity of an upgrade.

It is quite possible for an upgrade script to not be reversible.

After all, not all matrices are invertible; there are a surprisingly large
number of preconditions that are required for that in linear algebra.

And in databases, not all upgrades may be reversed via rollback scripts.


Does "rollback script" truly mean undoing what you just did in a 
transaction-like manner?


--
Angular momentum makes the world go 'round.



Improving pg_dump performance

2018-07-23 Thread Ron

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that 
needs to be migrated to a new data center and then restored to v9.6.9.


The database has many large tables full of bytea columns containing pdf 
images, and so the dump file is going to be more than 2x larger than the 
existing data/base...



The command is:
$ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump 2> 
${DATE}_${DB}.log


Using -Z0 because pdf files are already compressed.

Because of an intricate web of FK constraints and partitioned tables, the 
customer doesn't trust a set of "partitioned" backups using --table= and 
regular expressions (the names of those big tables all have the year in 
them), and so am stuck with a single-threaded backup.


Are there any config file elements that I can tweak (extra points for not 
having to restart postgres) to make it run faster, or is there deeper 
knowledge of how pg_restore works that I could use to convince them to let 
me do the partitioned backups?


Lastly, is there any way to not make the backups so large (maybe by using 
the --binary-upgrade option, even though the man page says, "in-place 
upgrades only")?


--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 02:32 AM, Andreas Kretschmer wrote:



Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that 
needs to be migrated to a new data center and then restored to v9.6.9. 


you can use the pg_dump from the newer version (9.6) to dump the old 
database, over the net. In this way you can also use parallel backups 
(directory format)


That DC circuit is too slow, and also used by lots of other production data.

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 08:27 AM, Andreas Kretschmer wrote:


Am 23.07.2018 um 15:06 schrieb Ron:

On 07/23/2018 02:32 AM, Andreas Kretschmer wrote:



Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database 
that needs to be migrated to a new data center and then restored to 
v9.6.9. 


you can use the pg_dump from the newer version (9.6) to dump the old 
database, over the net. In this way you can also use parallel backups 
(directory format)


That DC circuit is too slow, and also used by lots of other production data.



install the 9.6 also on the old server, 


Are there 9.6 packages for RHEL 5.10?


or use an other server in the same DC.


An interesting idea.  To clarify: it's possible to parallel backup a running 
8.4 cluster remotely from a 9.6 system?


--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 08:46 AM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

An interesting idea.  To clarify: it's possible to parallel backup a running
8.4 cluster remotely from a 9.6 system?

Yes, you can do a parallel backup, but you won't be able to get a
consistent snapshot.  You'll need to pause all changes to the database
while the pg_dump processes connect and start their transactions to
have the backup be consistent.


I can do that!!!
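
For the record, the "pause" I have in mind is roughly this, run as a superuser 
(database and role names are made up; 8.4's pg_stat_activity calls the pid 
column procpid, and superusers aren't subject to the connection limit, so the 
dump connections still get in):

ALTER DATABASE proddb CONNECTION LIMIT 0;   -- no new application connections
SELECT pg_terminate_backend(procpid)
  FROM pg_stat_activity
 WHERE datname = 'proddb' AND usename = 'appuser';

then ALTER DATABASE proddb CONNECTION LIMIT -1; once the dump workers have 
started their transactions.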

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron




On 07/23/2018 08:56 AM, Adrian Klaver wrote:

On 07/23/2018 06:47 AM, Ron wrote:

On 07/23/2018 08:46 AM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:
An interesting idea.  To clarify: it's possible to parallel backup a 
running

8.4 cluster remotely from a 9.6 system?

Yes, you can do a parallel backup, but you won't be able to get a
consistent snapshot.  You'll need to pause all changes to the database
while the pg_dump processes connect and start their transactions to
have the backup be consistent.


I can do that!!!


Assuming you can get this setup, have you tested some subset of your data 
on 9.6.9?:


1) Going from 8.4 --> 9.6 is jumping 7 major versions of Postgres. Do you 
know that the data/code will work in 9.6.9?


2) Does the transfer have time built in for fixing problems on the 9.6.9 end?

3) If the answer 2) is no, then is there a plan to deal with changes in 
the 8.4 database while working on the 9.6.9 database?


Yes, we've migrated CAT and Staging databases, and the application has been 
tested.


And this is a test conversion of the prod databases...

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 09:11 AM, Andres Freund wrote:

Hi,

On 2018-07-23 02:23:45 -0500, Ron wrote:

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
needs to be migrated to a new data center and then restored to v9.6.9.

Have you considered using pg_upgrade instead?


Yes, but:
1. can't find 9.6 packages on the RHEL 5.10 server.
2. The window is only 8 hours, and the data also has to be moved to a new DC 
in that window.


--
Angular momentum makes the world go 'round.



Speccing a remote backup server

2018-07-24 Thread Ron

Hi,

v9.6 backing up v8.4

Where does the gzip run (where the database lives, or the remote server 
where the pg_dump runs from)?  I ask this because I need to know how beefy 
to make the backup server.  (It'll just store backups for a version upgrade.)


--
Angular momentum makes the world go 'round.



Order in which tables are dumped

2018-07-25 Thread Ron

Hi,

v8.4 if it matters.

It looked like the tables were being backed up in alphanumeric order, but 
now I see that table "docformat" is being dumped *after* "doc_image".


Are there some other rules besides alphabetical sorting?

--
Angular momentum makes the world go 'round.


Re: Order in which tables are dumped

2018-07-25 Thread Ron

On 07/25/2018 10:28 AM, Tom Lane wrote:

Ron  writes:

It looked like the tables were being backed up in alphanumeric order, but
now I see that table "docformat" is being dumped *after* "doc_image".

Looks like standard C-locale (ASCII) sort order to me ...


I hate spreadsheets.  And gnu sort, and unicode...  :(

--
Angular momentum makes the world go 'round.



Re: Order in which tables are dumped

2018-07-25 Thread Ron

On 07/25/2018 10:43 AM, Vick Khera wrote:
On Wed, Jul 25, 2018 at 11:15 AM, Ron <mailto:ronljohnso...@gmail.com>> wrote:


Hi,

v8.4 if it matters.

It looked like the tables were being backed up in alphanumeric order,
but now I see that table "docformat" is being dumped *after* "doc_image".

Are there some other rules besides alphabetical sorting?


Is there some concern about the order? Lower case f comes after _ in ascii.


I'm tracking the progress of a very long backup, and the spreadsheet I'm using 
(and gnu sort, too, and IIRC Postgres' own ORDER BY) sorts based on the 
current locale (en_US), whereas pg_dump sorts based on the C locale.  Thus, 
things aren't in the order I expected.
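
One way to line the shell side up with pg_dump's ordering is to force the C 
locale (a sketch; tables.txt is a hypothetical list of table names):

LC_ALL=C sort tables.txt > tables_in_dump_order.txt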


The only time it could possibly matter is on restore when there are 
references for foreign keys, but on a restore those are all done after the 
data is restored.


--
Angular momentum makes the world go 'round.


Re: Design of a database table

2018-07-30 Thread Ron

On 07/30/2018 09:37 AM, hmidi slim wrote:
I'm trying to design a database table. First of all there are two 
alternatives:

1-) Divide the table into two tables and make a join.
2-) Design a single table.

1rst alternative:
Create table data_periods(
id serial primary key not null,
period daterange,
project_id integer
)

create table data_periods_info(
id serial primary key not null,
data_periods_id integer,
data_sub_periods daterange,
stock1 integer,
stock2 integer,
CONSTRAINT data_periods_allotment_id_fkey FOREIGN KEY (data_periods_id)
    REFERENCES data_periods (id) MATCH SIMPLE
    ON UPDATE NO ACTION
    ON DELETE NO ACTION
)


Are you absolutely 100% positive that there will NEVER be more than two 
stock numbers?  (People say "yes" to this kind of question all the time and 
then discover that they need more stock numbers when the business changes.)


--
Angular momentum makes the world go 'round.



Settings for fast restores

2018-07-31 Thread Ron

Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html

shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4 year old setting recommendations are still valid for 9.6?

Thanks

--
Angular momentum makes the world go 'round.



Re: Settings for fast restores

2018-08-01 Thread Ron

On 08/01/2018 09:11 AM, Vick Khera wrote:
On Wed, Aug 1, 2018 at 2:03 AM, Ron <mailto:ronljohnso...@gmail.com>> wrote:


Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html
<http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html>

shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4 year old setting recommendations are still valid
for 9.6?


They all look still valid to me. I personally also set fsync=off since I 
can always start over if the machine crashes and corrupts the data.


Right.  I didn't mention these, because they seem version-agnostic:

fsync = off
synchronous_commit = off
archive_mode = off
autovacuum = off
all activity logging settings disabled

--
Angular momentum makes the world go 'round.


ALTER TABLE .. SET STATISTICS

2018-08-04 Thread Ron



v9.6.9

For columns of type bytea which store image data (PDFs, JPGs, etc) would it 
speed up the ANALYZE process to SET STATISTICS = 0?


That way, default_statistics_target could be cranked higher -- giving better 
statistics for needed columns -- without polluting pg_statistic with 
unneeded data?
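
Concretely, something like this is what I have in mind (table and column names 
invented; 9.6 syntax):

ALTER TABLE doc_image ALTER COLUMN image_data SET STATISTICS 0;  -- ANALYZE skips this column
ALTER SYSTEM SET default_statistics_target = 500;                -- higher default for the rest
SELECT pg_reload_conf();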


Thanks

--
Angular momentum makes the world go 'round.



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Ron

On 08/14/2018 08:38 AM, pavan95 wrote:

Hi Adrian,

I tried to use
*"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days ago"
+%Y-%m-%d`_*.csv' WITH csv;"*

But it resulted in an error. How to issue such that it is understandable by
psql?

And I am completely unaware of python & psycopg2. Anything which suits my
requirement is enough!!ostgresql-archive.org/PostgreSQL-general-f1843780.html


Why not:
cat /tmp/abc/xyz/postgresql-`date --date="0 days ago" +%Y-%m-%d`_*.csv | \
   psql YOURDB -c "COPY postgres_log1 FROM STDIN WITH csv;"

--
Angular momentum makes the world go 'round.



Re: pg_basebackup failed to read a file

2018-08-14 Thread Ron




On 08/14/2018 11:14 AM, Tom Lane wrote:

Mike Cardwell  writes:

pg_basebackup: could not get write-ahead log end position from server:
ERROR:  could not open file "./postgresql.conf~": Permission denied
Now, I know what this error means. There was a root owned file at
"/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
version of our postgres config and was not readable by the postgres
user. I'll delete this file and try again. However, in the mean time: I
feel like it would be useful for pg_basebackup to check that it has
read access to all of the existing files in the source directory at the
start, before it begins its copy. However, in the mean time: I

That seems like a pretty expensive thing to do, if there are lots of
files ... and you'd still end up failing, so it's not moving the ball
very far.


Why is checking a bunch of file permissions anywhere close to being as 
expensive as transferring 1.5TB over a WAN link?


--
Angular momentum makes the world go 'round.



Re: [External] Multiple COPY on the same table

2018-08-20 Thread Ron

Maybe he just has a large file that needs to be loaded into a table...

On 08/20/2018 11:47 AM, Vijaykumar Jain wrote:

Hey Ravi,

What is the goal you are trying to achieve here.
To make pgdump/restore faster?
To make replication faster?
To make backup faster ?

Also, no matter how small you split the files, if network is your 
bottleneck then I am not sure you can attain n times the benefit by simply 
sending the files in parallel -- but yeah, maybe some benefit.
But then for parallel processing you also need to ensure your server has the 
relevant resources, or else it will just be a lot of context switching, I 
guess?

Pg dump has an option to dump in parallel
pg_basebackup is single-threaded, I read, but pgbackrest allows better 
parallel processing in backups.
There is also logical replication where you can selectively replicate your 
tables to avoid bandwidth issues.
I might have said a lot and nothing may be relevant, but you need to let 
us know the goal you want to achieve :)


Regards,
Vijay

*From:* Ravi Krishna 
*Sent:* Monday, August 20, 2018 8:24:35 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* [External] Multiple COPY on the same table
Can I split a large file into multiple files and then run copy using each 
file?  The table does not contain any serial or sequence column which may 
need serialization. Let us say I split a large file into 4 files.  Will the 
performance improve by close to 4x??

ps: Pls ignore my previous post which was without a subject (due to mistake)


--
Angular momentum makes the world go 'round.


pg_dump order of operation

2018-08-25 Thread Ron

Hi,

In v8.4, I noticed that the tables seemed to be dumped in alphabetical 
order.  Not so much, though, in a multithreaded 9.6 dump of an 8.4 database; 
there's no pattern that I can discern.


In what order does the 9.6 pg_dump dump tables?

Thanks

--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron

On 08/26/2018 10:24 AM, Tom Lane wrote:

Ron  writes:

In v8.4, I noticed that the tables seemed to be dumped in alphabetical
order.  Not so much, though, in a multithreaded 9.6 dump of an 8.4 database;
there's no pattern that I can discern.
In what order does the 9.6 pg_dump dump tables?

I don't believe the ordering rules have changed materially since 8.4;
it's intended to be by object kind, and within that by name, except
where dependencies force doing something else.


I thought that didn't matter, since FK and PK constraints were added (in the 
required order) after all data was loaded.



But what are you thinking constitutes the "dump order" in a parallel dump?


I don't understand your question.


--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron



On 08/26/2018 01:42 PM, Tom Lane wrote:

Ron  writes:

On 08/26/2018 10:24 AM, Tom Lane wrote:

Ron  writes:

In what order does the 9.6 pg_dump dump tables?

I don't believe the ordering rules have changed materially since 8.4;
it's intended to be by object kind, and within that by name, except
where dependencies force doing something else.

I thought that didn't matter, since FK and PK constraints were added (in the
required order) after all data was loaded.

But what are you thinking constitutes the "dump order" in a parallel dump?

I don't understand your question.

Perhaps I don't understand *your* question.  What concrete problem are you
having?


I want to track the progress of pg_dump so as to estimate completion time.

--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron

On 08/26/2018 02:44 PM, Tom Lane wrote:

Ron  writes:

On 08/26/2018 01:42 PM, Tom Lane wrote:

Perhaps I don't understand *your* question.  What concrete problem are you
having?

I want to track the progress of pg_dump so as to estimate completion time.

Well, if you don't use --jobs then you should get more or less the same
behavior as in 8.4.  If you do use that, then it's hardly surprising that
things are more complicated.


I'm not demanding that it be simple, but just asking what the pattern is.

--
Angular momentum makes the world go 'round.



dat names generated by pg_dump

2018-09-02 Thread Ron

Hi,

I can associate these dat names with their source tables through a bunch of 
bash and vim manual operations, but I was wondering if there's any automated 
method (maybe some SQL query of some catalog table; pg_class didn't seem to 
have the relevant data) of making the association.


If relevant, the source database is v8.4, but the backup was done by 9.6 on 
a separate server.


$ ls -1 CDSLBXW/*dat | head
CDSLBXW/8412.dat
CDSLBXW/8414.dat
CDSLBXW/8416.dat
CDSLBXW/8418.dat
CDSLBXW/8420.dat
CDSLBXW/8422.dat
CDSLBXW/8423.dat
CDSLBXW/8425.dat
CDSLBXW/8427.dat
CDSLBXW/8428.dat

Thanks

--
Angular momentum makes the world go 'round.



Re: dat names generated by pg_dump

2018-09-02 Thread Ron

On 09/02/2018 08:41 PM, Adrian Klaver wrote:

On 09/02/2018 05:40 PM, Ron wrote:

Hi,

I can associate these dat names with their source tables through a bunch 
of bash and vim manual operations, but I was wondering if there's any 
automated method (maybe some SQL query of some catalog table; pg_class 
didn't seem to have the relevant data) of making the association.


Some background would be helpful:

1) What is producing the *.dat files?


pg_dump, as described in the Subject.



2) What is their structure?


They're pg_dump files.



3) What do the numbers refer to?


That's what I'm asking the list.





If relevant, the source database is v8.4, but the backup was done by 9.6 
on a separate server.


$ ls -1 CDSLBXW/*dat | head
CDSLBXW/8412.dat
CDSLBXW/8414.dat
CDSLBXW/8416.dat
CDSLBXW/8418.dat
CDSLBXW/8420.dat
CDSLBXW/8422.dat
CDSLBXW/8423.dat
CDSLBXW/8425.dat
CDSLBXW/8427.dat
CDSLBXW/8428.dat

Thanks






--
Angular momentum makes the world go 'round.



Re: dat names generated by pg_dump

2018-09-02 Thread Ron

On 09/02/2018 09:26 PM, Tom Lane wrote:

Ron  writes:

I can associate these dat names with their source tables through a bunch of
bash and vim manual operations, but I was wondering if there's any automated
method (maybe some SQL query of some catalog table; pg_class didn't seem to
have the relevant data) of making the association.

Those numbers are the "dump object IDs" generated by pg_dump.  They don't
have any significance on the server side, and typically would vary from
one pg_dump run to another.  You have to look at the dump TOC (table of
contents) to figure out what corresponds to what.  For example,

$ pg_dump -Fd -f dumpd regression
$ ls -1 dumpd
6143.dat.gz
6144.dat.gz
6145.dat.gz
...
blob_3001.dat.gz
blobs.toc
toc.dat
$ pg_restore -l dumpd
;
; Archive created at 2018-09-02 22:14:48 EDT
...
6573; 2613 119655 BLOB - 119655 postgres


Thanks. That's exactly what I needed.

--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 03/09/2018 08:56 AM, David Steele wrote:
[snip]

About pgBarman, I like :
- be able restore on a remote server from the backup server

This a good feature, and one that has been requested for pgBackRest. You
can do this fairly trivially with ssh, however, so it generally hasn't
been a big deal for people.  Is there a particular reason you need this
feature?


(Sorry to dredge up this old thread.)

Do you just change the IP address of the "restore target"?


- use replication slots for backing up WAL on the backup server.

Another good feature.  We have not added it yet because pgBackRest was
originally written for very high-volume clusters (100K+ WAL per day) and
our parallel async feature answers that need much better.  We recommend
a replicated standby for more up-to-date data.


Every N minutes you copy the WAL files to the backup server?
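
For what it's worth, pgBackRest normally hooks archive_command and pushes each 
WAL segment as it's produced rather than working on a timer -- roughly this, 
with "main" standing in for whatever the stanza is called:

archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'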


--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 07:14 AM, Thomas Poty wrote:

> Do you just change the IP address of the "restore target"?
Do you expect a typical restore command?


I'm investigating barman and pgBackRest to replace our existing NetBackup 
system, so I don't know what you mean by "typical restore command".


Here are our typical use cases:

1. If my barman backup server has full backups, diffs and WALs for database 
server MAIN_PG_SERVER, which hosts databases D1, D2 and D3, how much work is 
it to do a PITR restore of D2 to a *different* Pg server?


2. Can I restore an older copy of database D2 to MAIN_PG_SERVER, *giving it 
a new name* (so that now there would be databases D1, D2, D3 *and D2_OLD*)? 
That's pretty trivial on SQL Server, and something I've had to do before so 
the operations staff can research a problem.


Thanks

--
Angular momentum makes the world go 'round.


Re: Barman versus pgBackRest

2018-09-04 Thread Ron


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single db, 
but doesn't indicate whether or not you can rename it.



On 09/04/2018 08:44 AM, Thomas Poty wrote:

Your problem looks like this one ;-)
https://groups.google.com/forum/#!topic/pgbarman/kXcEpSLhw8w 
<https://groups.google.com/forum/#%21topic/pgbarman/kXcEpSLhw8w>

answer may help

Physical backup/restore operates on a whole cluster...

Le mar. 4 sept. 2018 à 14:47, Ron <mailto:ronljohnso...@gmail.com>> a écrit :


On 09/04/2018 07:14 AM, Thomas Poty wrote:

> Do you just change the IP address of the "restore target"?
Do you expect a typical restore command?


I'm investigating barman and pgBackRest to replace our existing
NetBackup system, so I don't know what you mean by "typical restore
command".

Here are our typical use cases:

1. If my barman backup server has full backups, diffs and WALs for
database server MAIN_PG_SERVER, which hosts databases D1, D2 and D3,
how much work is it to do a PITR restore of D2 to a *different* Pg server?

2. Can I restore an older copy of database D2 to MAIN_PG_SERVER,
*giving it a new name* (so that now there would be databases D1, D2,
D3 *and D2_OLD*)?  That's pretty trivial on SQL Server, and something
I've had to do before so the operations staff can research a problem.

Thanks

-- 
Angular momentum makes the world go 'round.




--
Angular momentum makes the world go 'round.


Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 09:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:14 AM, Ron wrote:


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single db, 
but doesn't indicate whether or not you can rename it.


https://pgbackrest.org/configuration.html#section-restore/option-db-include



Which implies that you can't do it?

(Postgres backup/restore capabilities are quite limited, which is disappointing.)

--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 10:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:52 AM, Ron wrote:

On 09/04/2018 09:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:14 AM, Ron wrote:


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single 
db, but doesn't indicate whether or not you can rename it.


https://pgbackrest.org/configuration.html#section-restore/option-db-include



Which implies that you can't do it?


You can restore a single database and then issue a simple ALTER DATABASE 
command to change the DB name.




(Postgres backup/restore capabilities are quite limited, which is 
disappointing.)


Not sure I agree with that. If you want to restore and then rename a DB, 
rename it.


ALTER DATABASE foo RENAME TO bar;


But restoring an old "foo" overwrites the existing "foo".  On SQL Server 
databases, we occasionally need to restore an old foo backup "foo_old" along 
side production foo.



--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 10:51 AM, David Steele wrote:
[snip]

This will work, but I don't think it's what Ron is getting at.

To be clear, it is not possible to restore a database into an *existing*
cluster using pgBackRest selective restore.  This is a limitation of
PostgreSQL file-level backups.

To do what Ron wants you would need to restore it to a new cluster, then
use pg_dump to logically dump and restore it to whatever cluster you
want it in.  This still saves time since there is less to restore but is
obviously not ideal.


That's exactly what I'm referring to.

Presumably I could restore it to a new cluster on the same VM via initdb on 
a different port and PGDATA directory?
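
If that's the path, the last hop might look something like this (ports and 
database names are invented, and it assumes the scratch cluster has already 
been restored and started on 5433):

createdb -p 5432 d2_old
pg_dump -p 5433 d2 | psql -p 5432 -d d2_old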


--
Angular momentum makes the world go 'round.



Re: PostgreSQL: Copy from File missing data error

2018-09-04 Thread Ron

You might want to try pg_bulkload, and have it kick out malformed rows.

It's packaged for RHEL6 and above, plus various other distros.
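
If you'd rather find the bad rows before loading, a quick field count works 
too (a sketch; replace 26 with however many columns your schema expects):

awk -F'\t' 'NF != 26 {print NR": "NF" fields"}' countries.txt | head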

On 09/04/2018 01:13 PM, Holly Gibons wrote:

I'm using PostgreSQL 9.0 via pgAdmin III

I'm trying to build a PostgreSQL/PostGIS database using the Entire country 
files dataset, but I'm getting a missing data error.


I'm wondering if the copy command is affected by diacritics or I've not 
set the database up properly

Created a new database with UTF8 encoding

I built the table schema based on the given format 
 (but using type 
text for NM_MODIFY_DATE not varchar,  having these as dates didn't make a 
difference)


I used large text viewer to open the uncompressed countries.txt file and 
copied the top 5 rows into a test file


Using PostgreSQL COPY, this test file imported correctly, so I know my 
schema is correct:
copy my_table(List of columns) from '\\Path\\To\\TestFile.txt' WITH 
delimiter E'\t' csv HEADER;
However when I tried to ingest the larger  countries.txt (2.9GB) file I 
get an error "missing data" for column xyz at line 12345 (Last column with 
 data in that row, NM_MODIFY_DATE)


Using the large text viewer again, I located this line and copied it, 
together with the preceding and following lines, into my test file and 
tried the copy again, but got the same error.


I opened the test file in Excel to see if there are columns missing; 
although not every row has data in each column, they do all match.


the problem row has UFI -3373955 & UNI 6329083

I don't know if this is relevant, but looking at the database properties 
in pgAdmin, the 'Collation' & 'Character type' are both set as 
"English_United Kingdom, 1252".  I didn't set this, and when creating a new 
DB the options are "C", "English_United Kingdom, 1252" or "POSIX".


Could someone suggest what I'm doing wrong?
Thank you


--
Angular momentum makes the world go 'round.


Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Ron

On 09/05/2018 12:39 PM, Raghavendra Rao J S V wrote:

Hi All,

We are using a Postgres 9.2 database.

Please let me know the maximum number of WAL files in the pg_xlog directory.

What is the formula? I am seeing different formulas. Could you tell me which 
one decides the maximum number of WAL files in the pg_xlog directory for a 
Postgres 9.2 database, please?


If you're doing WAL file replication, and -- for whatever reason -- the 
standby system isn't applying them fast enough, there could be tens of 
thousands of files in pg_xlog.  (It would drain pretty quickly when you 
resolve the problem, though.)
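
If I remember the 9.2 docs correctly, the steady-state ceiling is roughly the 
larger of (2 + checkpoint_completion_target) * checkpoint_segments + 1 and 
checkpoint_segments + wal_keep_segments + 1 segment files -- but a failing 
archive_command or a lagging standby will blow right past that.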



--
Angular momentum makes the world go 'round.


Re: PostgreSQL intenal scheduler?

2018-09-05 Thread Ron



Maybe https://github.com/chanks/que is what you need.

On 09/05/2018 02:35 PM, Thiemo Kellner wrote:
I have seen pg_cron but it is not what I am looking for. It schedules 
tasks only by time. I am looking for a FIFO queue. Nor does pg_cron 
prevent simultaneous runs, I believe.


Quoting Thomas Kellerer :


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron







--
Angular momentum makes the world go 'round.



pgbackrest when data/base is symlinked to another volume

2018-09-06 Thread Ron

Hi,

Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog

(I'm not just defining data_directory because the DBAs are used to looking 
in $PGDATA and seeing all the relevant files.)


Thanks
--
Angular momentum makes the world go 'round.



Re: pgbackrest when data/base is symlinked to another volume

2018-09-07 Thread Ron

On 09/07/2018 05:22 PM, David Steele wrote:

Hi Ron,

On 9/6/18 11:21 PM, Ron wrote:


Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog


Yes, this will work.  Note that restore does not recreate symlinks by 
default so you'll need to specify --link-all to enable symlink creation.


See 
https://pgbackrest.org/configuration.html#section-restore/option-link-all 
for details.


Using symlinks in this way will make management of your clusters more 
difficult, mostly because systems need more provisioning before restores 
can be performed.  In general I'd recommend against it unless there are 
performance considerations.


Also, you might consider using log_directory to relocate log files rather 
than a symlink.  This will exclude log files from your backup which is 
usually preferable -- primary logs restored to a standby are out of 
context and can cause confusion.


Thanks for the tips.  I'll probably implement that on our new systems.

--
Angular momentum makes the world go 'round.



Re: pgbackrest when data/base is symlinked to another volume

2018-09-07 Thread Ron

On 09/07/2018 05:22 PM, David Steele wrote:

Hi Ron,

On 9/6/18 11:21 PM, Ron wrote:


Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog


Yes, this will work.  Note that restore does not recreate symlinks by 
default so you'll need to specify --link-all to enable symlink creation.


See 
https://pgbackrest.org/configuration.html#section-restore/option-link-all 
for details.


Using symlinks in this way will make management of your clusters more 
difficult, mostly because systems need more provisioning before restores 
can be performed.  In general I'd recommend against it unless there are 
performance considerations.


Now that I'm thinking more about what you wrote... "data" isn't on its own 
partition.  data/*base* has its own partition.


What's the recommended method for putting *base/* on a partition different 
from data/?  Or is that not recommended?



--
Angular momentum makes the world go 'round.


Volume partitioning (was Re: pgbackrest when data/base is symlinked to another volume)

2018-09-08 Thread Ron

On 09/08/2018 03:07 PM, David Steele wrote:

On 9/7/18 8:47 PM, Ron wrote:

On 09/07/2018 05:22 PM, David Steele wrote:


On 9/6/18 11:21 PM, Ron wrote:


Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog


Yes, this will work.  Note that restore does not recreate symlinks by 
default so you'll need to specify --link-all to enable symlink creation.


See 
https://pgbackrest.org/configuration.html#section-restore/option-link-all 
for details.


Using symlinks in this way will make management of your clusters more 
difficult, mostly because systems need more provisioning before restores 
can be performed.  In general I'd recommend against it unless there are 
performance considerations.


Now that I'm thinking more about what you wrote... "data" isn't on its 
own partition.  data/*base* has its own partition.


What's the recommended method for putting *base/* on a partition 
different from data/?  Or is that not recommended?


All the user data goes in base so there's really no need to separate it 
out of data.  Typically pg_wal and tablespaces are relocated onto 
different devices for performance (or to get more space).  If the 
partitions are on the same device then there's no performance benefit, 
just admin hassle.




Googled "postgresql disk partitioning" and "postgresql volume partitioning" 
without much success.


Is the best practice volume partitioning:
/Database/9.6/data
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where /var/lib/pgsql/9.6 (on RHEL6) is a symlink to /Database/9.6/data and 
PGDATA=/Database/9.6/data


*or *

/Database/9.6/data/base
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where PGDATA=/var/lib/pgsql/9.6/data and base, pg_log and pg_xlog are 
symlinks to the partitions?


Thanks

--
Angular momentum makes the world go 'round.


Re: PG8.3->10 migration data differences

2018-09-11 Thread Ron

Then fix your field-based data comparing mechanism.

On 09/11/2018 03:41 AM, Csaba Ragasits wrote:

Hello,

We would like to migrate from version 8.3 to 10. We have hundreds of 
databases with different structures. For that reason we're working on an 
automatic data-comparison process.


I've found the following storage settings:
- pg83: Date/time type storage:   64-bit integers
- pg10: Date/time type storage:   64-bit integers

When I running the following select from psql (Select '09/10/18 
07:10:25.110'::timestamp;)

The results are same:
-pg83: 2018-09-10 07:10:25.11
-pg10: 2018-09-10 07:10:25.11

When I select it from a table (Select v.entry_timestamp from t_vis v):
The results are different:
- pg83: 2015-08-28 21:25:07.70
- pg10: 2015-08-28 21:25:07.7

The field type:
entry_timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL,

The difference is always the trailing zero. The time values are the same, but 
our field-based data-comparison mechanism marks it as an error every time.


thx,
Csaba


--
Angular momentum makes the world go 'round.



Re: PG9.1 migration to PG9.6, dump/restore issues

2018-09-12 Thread Ron

On 09/12/2018 08:55 AM, Scot Kreienkamp wrote:


Hi Everyone,

I am working on a migration from PG9.1 to PG9.6.  Hoping some people can 
chime in on my plans as I am running into some restore issues.


We are upgrading to a new version of PG and migrating to new hardware with 
RHEL 7, so I am planning on doing a dump and restore to get moved to the 
new server.  My database is about 300 gigs, not huge but big enough that 
doing a single threaded dump with multi-threaded restore is going to take 
too much time for the window of opportunity I’ve been given.  I know I can 
use multi-threaded restore on PG9.6 using the custom or directory formats, 
but PG9.1 only supports single threaded dump.  To get around this I’m 
going to disable all database access to the PG9.1 databases, then use the 
PG9.6 tools to do a multi-threaded dump and then multi-threaded restore.


These are the commands I was using:

pg_dump -vj 4 -F d -h $OLDSERVER  $DATABASE -f $BACKUPPATH/DATABASE 
--no-synchronized-snapshots


created $DATABASE

pg_restore -evj 4 -d $DATABASE $BACKUPPATH/$DATABASE   --disable-triggers




This is almost exactly what I did when migrating from 8.4 to 9.6. As Adrian 
Klaver mentioned, you need to dump the globals and then run that script on 
the new database.


No need to disable triggers, since it's "relevant only when performing a 
data-only restore", and you aren't doing a data-only restore.  Besides, 
pg_restore adds all that metadata -- including PKs, FKs, indexes, etc. to 
the db *after* the data is loaded.



--
Angular momentum makes the world go 'round.


Re: Select rows when all all ids of its children records matches

2018-09-12 Thread Ron

Maybe this (the GROUP BY/HAVING is what makes it "all three tags" rather than
"any of them"):
select p.id, p.name
from posts p
     join posts_tags pt on pt.post_id = p.id
where pt.tag_id in (1, 2, 3)
group by p.id, p.name
having count(distinct pt.tag_id) = 3;


On 09/12/2018 10:23 AM, Arup Rakshit wrote:
I have a table posts(id, name), posts_tags(post_id, tag_id) and tags (id, 
name) ... I want to get all posts which has tag id 1, 2 and 3 for example. 
How should I do this? I tried ALL, but it didn’t work.


Those tag ids comes from UI by users, so I am looking for generic approach.


Thanks,

Arup Rakshit
a...@zeit.io 





--
Angular momentum makes the world go 'round.


Re: Convert interval to hours

2018-09-14 Thread Ron

On 09/14/2018 11:10 AM, Steven Lembark wrote:

On Fri, 14 Sep 2018 11:55:18 -0400
Peter Kleiner  wrote:

On Fri, Sep 14, 2018 at 11:51 AM David Gauthier
 wrote:

Hi:

In perl/DBI, I have code that's getting me an "age" which returns
something like... "-17 days -08:29:35".  How can I convert that to
a number of hours (as a float I would presume) ?

Suggest using one of the date modules. One issue is that not all
days are 86400s long: "leap second" is used to keep atomic clocks
in sync with sidereal time so that telescopes report consistent
values over time. Catch is that simply dividing by 3600 doesn't
always work if the times fall across the wrong days.


Can you give us a hard example of when this won't work?

select extract(epoch from '-17 days -08:29:35'::interval)/3600 as hours;

hours
---
 -416.49305556
(1 row)


--
Angular momentum makes the world go 'round.



Re: How to maintain the csv log files in pg_log directory only for past 30 days

2018-09-28 Thread Ron

On 09/28/2018 12:03 AM, Raghavendra Rao J S V wrote:


Hi All,

Log files are generated in *csv* format in the *pg_log* directory of our 
PostgreSQL installation. Every day we get one log file. We would like to 
keep at most 30 days. Which setting do we need to modify in 
“postgresql.conf” in order to recycle the log files after 30 days?


Does it have to be in postgresql.conf?  A cron job which runs a few minutes 
after midnight works just fine.


Compresses yesterday's log file and deletes files older than 30 days:
#!/bin/bash
DIR=/var/lib/pgsql/data/pg_log
cd $DIR
PREVDT=$(date -d "-1 day" +"%F")
bzip2 -9 postgresql-${PREVDT}.log
# -print -delete avoids "rm: missing operand" when nothing is old enough yet
find "$DIR" -name 'postgresql-*log*' -mtime +30 -print -delete
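
If it really has to stay inside postgresql.conf, the closest thing is 
recycling by day-of-month, which keeps roughly a month of logs rather than 
exactly 30 days:

log_filename = 'postgresql-%d.log'   # %d = day of month; csvlog swaps .log for .csv
log_truncate_on_rotation = on
log_rotation_age = 1d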



--
Angular momentum makes the world go 'round.


Re: COPY from a remote machine in Datastage

2018-10-05 Thread Ron

On 10/05/2018 09:18 AM, Ravi Krishna wrote:


Hello, if you need to use COPY command from remote machine and you use 
some libpq bindings (aka ruby pg gem for example), you can use functions 
associated with COPY command 
(https://www.postgresql.org/docs/10/static/libpq-copy.html). They should 
be provided by bindings of postgres library you're using.


If you can share more info, at least how do you access postgres (via Ruby 
gem for example or from client's console), I can try to be more descriptive.


We're not writing any Ruby/Python code.  We are using Datastage. Datastage 
has built-in features for most of this. For example, to bulk load data from 
csv files into Oracle/DB2 etc., it has a BULK loader feature. However, DS 
has no direct support for PG and we are piggybacking on ODBC, where there is 
no BULK loader.


The only recourse for us is to type in SQL as DS allows user code.


Can you install the postgres client software (psql) on the client machine 
and then have Datastage spawn "psql -c 'COPY ...'"?
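
If the CSV files live on the Datastage box, psql's client-side \copy keeps you 
from having to ship them to the DB server first -- a sketch, with host, 
database, table and path all made up:

psql -h pghost -d targetdb -c "\copy staging_table FROM '/data/feed/file1.csv' WITH (FORMAT csv, HEADER)"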


--
Angular momentum makes the world go 'round.


Re: Please let me know which configuration setting we need to modify to speedup the pg_dump backup.

2018-10-10 Thread Ron

On 10/10/2018 09:32 PM, Raghavendra Rao J S V wrote:

Hi All,

pg_dump is taking more time. Please let me know which configuration 
settings we need to modify to speed up the pg_dump backup. We are using 
version 9.2 on a CentOS box.


Is it taking "more time" because your database is bigger?


--
Angular momentum makes the world go 'round.


Re: something weird happened - can select by column value although column value exist

2018-10-11 Thread Ron

On 10/11/2018 03:17 PM, Dmitry O Litvintsev wrote:

Hi,

Today the following happened:

Found this error in my production log:

< 2018-10-11 13:31:52.587 CDT >ERROR:  insert or update on table "file" violates foreign 
key constraint "$1"
< 2018-10-11 13:31:52.587 CDT >DETAIL:  Key (volume)=(155303) is not present in table 
"volume".
< 2018-10-11 13:31:52.587 CDT >STATEMENT:
 INSERT INTO file 
(sanity_crc,uid,pnfs_id,crc,deleted,cache_mod_time,drive,volume,sanity_size,cache_status,gid,location_cookie,cache_location,original_library,bfid,pnfs_path,size)
 VALUES (
 4002229874,0,'',256655919,'u','2018-10-11 13:31:52','',(SELECT 
id FROM volume where label='A'),65536,'B',0,'C','D','E','F','',197243) 
RETURNING *

file table references volume table on file.volume = volume.id and file.volume 
is FK to volume.id. I doctored the query for privacy replacing string values 
with 'A', 'B'. ...

(Queries similar to the one quoted above are executed by an application and 
run thousands of times every day, and have for years.)

So, the problem:

SELECT id FROM volume where label='A';
id

  155303
(1 row)

BUT:

select * from volume where id = 155303;
  ...
(0 rows)

?!

id is a sequence:

id| integer | not null default 
nextval(('volume_seq'::text)::regclass)


This entry id = 155303 has existed for some time and has a lot of existing file 
entries holding
FK reference to volume id = 155303

I "fixed" the issue just by:

update volume set id  = 155303 where label='A';

BUT It did not work  right away. Meaning I did this once:

update volume set id  = 155303 where label='A';

no effect.

I did it again, I also did it;

update volume set id = (select id from volume where  label='A');

and then again

update volume set id  = 155303 where label='A';

eventually it worked. Now,


select count(*) from volume where label='A';
  count
---
  1
(1 row)


What is this? Version 9.3.9, running on Linux RH6.


Index corruption?  Maybe rebuild the FK.


--
Angular momentum makes the world go 'round.



Re: Vacuum and freeing dead rows

2019-07-05 Thread Ron

On 7/5/19 3:16 AM, Simon T wrote:

Hi,

I have a very heavily updated table in a Postgres 9.6.10 database with
lots of disk bloat. Every row is updated about once a minute, and
little to no inserts. Approx 18k rows total. The table has bloated
from ~1700 KB to about 6 GB over a few weeks time. I'm trying to
understand why vacuum hasn't made dead rows available for re-use.


[snip]

And in case it is relevant:

appdb=# SELECT pid, datname, usename, state, backend_xmin
appdb-# FROM pg_stat_activity
appdb-# WHERE backend_xmin IS NOT NULL
appdb-# ORDER BY age(backend_xmin) DESC;
   pid | datname | usename | state | backend_xmin
---+---+--+-+--
  10921 | appdb | app | idle in transaction | 3501305052


"idle IN TRANSACTION" is never good.  Transactions should always be as short 
as possible.



  10919 | appdb | app | idle in transaction | 3501305052
  10916 | appdb | app | idle in transaction | 3501305052
  27935 | appdb | app | idle in transaction | 3501305052
  24500 | appdb | postgres | active | 3501305052
  10914 | appdb | app | active | 3501305052
  20671 | appdb | postgres | active | 3501305052
  11817 | appdb | app | active | 3501305052
   1988 | appdb | app | active | 3501305052
  15041 | appdb | postgres | active | 3501305052
   9916 | appdb | postgres | active | 3501305052
  10912 | appdb | app | idle in transaction | 3501305052
  10909 | appdb | app | idle in transaction | 3501305052
(13 rows)


Add backend_start to that query.  I'd kill any idle transactions that are more 
than 30 minutes old.  (Of course, since they're "idle IN TRANSACTION", you'd 
lose whatever work they had pending.)
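
Something along these lines (the 30-minute threshold is arbitrary):

SELECT pid, usename, backend_start, xact_start,
       pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE state = 'idle in transaction'
   AND xact_start < now() - interval '30 minutes';

On 9.6 you can also set idle_in_transaction_session_timeout and let the server 
do it for you.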


--
Angular momentum makes the world go 'round.


Re: Measuring the Query Optimizer Effect: Turning off the QO?

2019-07-07 Thread Ron

On 7/7/19 6:49 PM, Tom Mercha wrote:

On 08/07/2019 01:46, Rob Sargent wrote:

On Jul 7, 2019, at 5:22 PM, Tom Mercha  wrote:

Hi All

As we know, a query goes through number of stages before it is executed.
One of these stages is query optimization (QO).

There are various parameters to try and influence optimizer decisions
and costs. But I wanted to measure the effect of such a stage by turning
it off completely and I can't find such a parameter which explicitly
does that. Then I could execute a query to get the effect of "QO active
and "QO inactive" and compare.

Obviously, I know well what the results would generally look like but I
am just interested in measuring the differences for various types of
queries. I am also aware that this is a simple comparison - there are
more interesting comparisons to perform with QO tweaks, but right now I
am interested in something basic.

So how would one shut down QO? Or at least, obtaining the guarantee of
generating the worst plan possible, ideally without touching many
parameters?

Best,
Tom

Drop all indices?


Sorry, maybe my question wasn't clear enough.

A query can be rewritten in various ways by applying rules and costs of
relational algebra operators, as well as their parallelisation. I am
talking about turning off this query optimization, so I am already
assuming that indexes aren't present.


It seems as though you're asking what the "First Approximation" plan is, 
before it tries to get Too Clever.  However, I don't think there's a 
separation between Query *Planner* and Query *Optimizer*. 
https://www.postgresql.org/docs/9.6/planner-optimizer.html


--
Angular momentum makes the world go 'round.


Re: pg_dump and search_path

2019-07-09 Thread Ron

On 7/9/19 2:22 AM, Laurenz Albe wrote:

On Mon, 2019-07-08 at 23:54 -0700, Igal @ Lucee.org wrote:

I have a custom search_path:

# show search_path;
search_path
--
  "staging, transient, pg_catalog"
(1 row)

I ran `pg_dump --schema-only` and the only reference in the output to
search_path is:

   SELECT pg_catalog.set_config('search_path', '', false);

Then one of my functions which does not reference the full name of a
table with its schema fails with "relation [rel-name] does not exist".

Is that a bug?  I have seen some old posts about this issue but am not
sure if there is a ticket or why it still is an issue.


Looks like this might be by design.  I will follow the links at
https://www.postgresql.org/message-id/flat/MWHPR14MB160079A6D9DC64A2F60E9004C0D00%40MWHPR14MB1600.namprd14.prod.outlook.com
and ask more questions if I have them.

I might need to add the schema name to the table in my function.

Right.

Allowing object lookup along the search_path during pg_restore opens
doors to abuse, because it can make a superuser inadvertently execute
code crafted by an attacker.


Then should search_path be set at the end of pg_restore?

--
Angular momentum makes the world go 'round.




Re: how to execute pgsql2shp exe in sql

2019-07-14 Thread Ron

On 7/14/19 10:54 PM, sunpeng wrote:

Could I use the following in psql or in a sql :
"D:\PostgreSQL\9.6\bin\pgsql2shp "



This *should* work, though you might have to fiddle with the quotes:

yourdb=> \! "D:\PostgreSQL\9.6\bin\pgsql2shp "


--
Angular momentum makes the world go 'round.


Re: Why no CREATE TEMP MATERIALIZED VIEW ?

2019-07-16 Thread Ron

On 7/16/19 11:56 AM, David G. Johnston wrote:
On Tue, Jul 16, 2019 at 9:29 AM Ivan Voras > wrote:


Out of curiosity, since there's CREATE TEMP VIEW, any particular
reason there's no CREATE TEMP MATERIALIZED VIEW?

Seems like it could be similar to a temp table.


Probably a lack (absence) of use cases resulted in people deciding (or 
defaulting) to not spend any effort in that area. Incremental maintenance 
and refresh seem considerably less useful when only the current session 
can see the table. Temp views and temp tables seem to provide sufficient 
options in the session lifetime space.


How different is a "*temp* materialized view" from a regular view?


--
Angular momentum makes the world go 'round.


Re: Queries on QMF to POSTGRE

2019-07-22 Thread Ron

On 7/23/19 12:30 AM, Oleksandr Shulgin wrote:

-hackers
+pgsql general 

On Sun, Jul 21, 2019 at 7:33 PM JVM . > wrote:



I’m looking to convert QMF queries, QMF forms and QMF procedures to
PostgreSQL. Will it support all of them?

If yes please help us with the sample example. Or any Documentation.


What would help anyone willing to help you is providing documentation 
or a definition of QMF and some examples of those.


OP might be referring to 
https://en.wikipedia.org/wiki/IBM_Query_Management_Facility



--
Angular momentum makes the world go 'round.


Re: Hardware for writing/updating 12,000,000 rows per hour

2019-07-26 Thread Ron

On 7/26/19 2:56 PM, Arya F wrote:
Would it be possible to achieve 12,000,000 writes/updates on a single 
server? If so what kind of hardware should I be looking for?


That's only 3,333 modifications/second.  How big are your records?

--
Angular momentum makes the world go 'round.




Re: How do I create a Backup Operator account ?

2019-07-30 Thread Ron

On 7/30/19 12:58 AM, Luca Ferrari wrote:

On Tue, Jul 30, 2019 at 2:50 AM Marcos Aurelio Nobre
 wrote:

I was wondering if it was possible to configure the pgAdmin4 menus so that only 
the Backup & Restore items are available for a particular login. But I'm not sure 
how to associate a database login account with a specific menu setting of a client tool.

I think that hiding features from a GUI is a poor idea to protect your
data (from what?).
If you are talking about a single database (or a restricted set), you
can provide a role with fewer privileges, like only SELECT, and use
that account to do the backup. But you need all the grants on another
account to restore the backup.

Anyway, I think you should rethink your aim: what's the point of
having a restricted user who can back up ALL the data?


Luca, it is common in some large enterprise environments to have Operations 
staff who can run backups without being able to do anything else.  For 
example, SQL Server has a per-database fixed role named db_backupoperator.
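
On the Postgres side, the rough equivalent is a read-only role that pg_dump 
can run as -- a sketch (role, database and schema names are made up; repeat 
the grants per schema):

CREATE ROLE backup_op LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE appdb TO backup_op;
GRANT USAGE ON SCHEMA public TO backup_op;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_op;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup_op;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_op;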


--
Angular momentum makes the world go 'round.




Re: adding more space to the existing server

2019-07-31 Thread Ron

On 7/31/19 5:21 PM, Julie Nishimura wrote:

Hello postgres folks,

We're tossing around the idea of upgrading a replicated postgres cluster 
(37 dbs) by breaking the replication, adding different size (larger) data 
disks to the hot-spare, then turning replication back on, letting it fully 
populate, then breaking replication, making the standby the primary, 
upgrade the disks on the other system, bring it back up, replicate 
backwards until fully replicated then failing-back to the original 
primary. Is this feasible?


Our current size is 22 tb, and it is 97% full
(PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit)



Thank you for your suggestions


Can we correctly infer that you aren't using LVM?  (Or in there no more room 
on the rack/controller for new drives?)


--
Angular momentum makes the world go 'round.


Re: PGAdmin4.11.1 on Ubuntu 18.04

2019-08-01 Thread Ron
I think the real question is "how do I make pgadmin4 use the locally 
installed psycopg instead of the system version?"


On 8/1/19 10:32 AM, Tony Shelver wrote:
The article at stackoverflow is here 



On Thu, 1 Aug 2019 at 17:28, Tony Shelver > wrote:


Thanks for the replies: the version is PG11.4.

As for needing to upgrade:  I just want to fix the problem.  This was
a working environment until I updated pgAdmin4.

The fix suggested on Stackoverflow was to move to psycopg2 2.8, but it
doesn't seem to be possible.

If there is another fix that anyone is aware of, I would love to know.

I tried to uninstall pgAdmin4, then install psycopg2 2.8 via pip, then
reinstall pgAdmin4 again using the Synaptic / Ubuntu package manager,
but it just installs psycopg2 2.7 over the top.

On Thu, 1 Aug 2019 at 16:50, Adrian Klaver mailto:adrian.kla...@aklaver.com>> wrote:

On 8/1/19 7:39 AM, Tony Shelver wrote:
> I am getting an 'able oid' when querying any tables using the
Query tool.
> When using the view / edit data option, everything works fine.

Should have asked in previous post:

What Postgres version(s)?

>
> I came across this
> <<<
>
> The problem is due to python3-psycopg2. The latest pgadmin4 version
> requires psycopg2-2.8. But if you're on Debian/Ubuntu stable, apt
> installed v2.7. So you need to update it with pip :
>
> sudo pip3 install -U psycopg2

>
>
> The problem is that the upgrade to psycopg 2.8 doesn't work, as
> it's
> installed as part of the dtsutils package along with pgadmin4.
>
>
>
> Any ideas?
>
>


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Angular momentum makes the world go 'round.


Re: Compression In Postgresql 9.6

2019-08-05 Thread Ron

On 8/5/19 1:30 AM, Shital A wrote:

Hello,

Need inputs on below:

We are working on setting up a new, highly transactional (100k TPS) OLTP 
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL 7.6. 
The Postgres version is 9.6 and not the latest because of the specs of the 
blockchain component.


There is a requirement for data compression on DB level. Please provide 
your inputs on how this can be best achieved.


Column-level compression?  Because some data just doesn't compress well.



Checked the built-in TOAST; it compresses the data only when a row exceeds 
the ~2 KB threshold? If the data values are small then, even if there are a 
billion records, they won't be compressed -- this is what I understood.


Are there any suggestions for transparently compressing older data 
irrespective of row size?


Are your tables partitioned?

--
Angular momentum makes the world go 'round.




  1   2   3   4   5   6   7   8   9   10   >