Uninstall postgres

2019-09-05 Thread Sonam Sharma
I have installed postgres with the source code option using configure
--prefix and then make install. Can someone please help in uninstalling
this. How to uninstall the postgres now. The installation was done as
postgres user.


Re: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <
sonams1...@gmail.com> wrote:

> I have installed postgres with the source code option using configure
> --prefix and then make install. Can someone please help in uninstalling
> this. How to uninstall the postgres now. The installation was done as
> postgres user.

make uninstall

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Uninstall postgres

2019-09-05 Thread Sonam Sharma
It's saying gmake *** No rule to make Target uninstall.

On Thu, Sep 5, 2019, 2:27 PM Andreas Joseph Krogh 
wrote:

> On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <
> sonams1...@gmail.com> wrote:
>
> I have installed postgres with the source code option using configure
> --prefix and then make install. Can someone please help in uninstalling
> this. How to uninstall the postgres now. The installation was done as
> postgres user.
>
>
> make uninstall
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> 
>
>


Re: Uninstall postgres

2019-09-05 Thread Andreas Joseph Krogh
On Thursday 5 September 2019 at 11:01:25, Sonam Sharma <
sonams1...@gmail.com> wrote:

> It's saying gmake *** No rule to make Target uninstall.
>
> On Thu, Sep 5, 2019, 2:27 PM Andreas Joseph Krogh <andr...@visena.com> wrote:
>> On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <sonams1...@gmail.com> wrote:
>>> I have installed postgres with the source code option using configure
>>> --prefix and then make install. Can someone please help in uninstalling
>>> this. How to uninstall the postgres now. The installation was done as
>>> postgres user.
>>
>> make uninstall

Strange – works for me:

[andreak@spaceballs-one] ~/dev/postgresql (REL_12_STABLE)
 $ make uninstall
 make -C doc uninstall
 make[1]: Entering directory '/home/andreak/dev/postgresql/doc'
 make -C src uninstall
 make[2]: Entering directory '/home/andreak/dev/postgresql/doc/src'
 make -C sgml uninstall
 ...

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com

Re: Uninstall postgres

2019-09-05 Thread Sonam Sharma
It's done. I have done it from path 😃.
Thanks

On Thu, Sep 5, 2019, 2:27 PM Andreas Joseph Krogh 
wrote:

> On Thursday 5 September 2019 at 10:53:01, Sonam Sharma <
> sonams1...@gmail.com> wrote:
>
> I have installed postgres with the source code option using configure
> --prefix and then make install. Can someone please help in uninstalling
> this. How to uninstall the postgres now. The installation was done as
> postgres user.
>
>
> make uninstall
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> 
>
>
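For anyone else hitting the "No rule to make target" error in this thread:
make uninstall has to be run from the top of the same configured source tree
that was used for make install; from any other directory there is no Makefile
with an uninstall rule, which is exactly the gmake error above. A minimal
sketch, with illustrative paths and prefix (not taken from the thread):

```shell
# Hypothetical paths; adjust to your own build tree and prefix.
cd ~/src/postgresql               # the original, configured source tree
./configure --prefix=/opt/pgsql   # only if the tree was cleaned; use the SAME prefix
make uninstall                    # removes the files installed under the prefix

# From any other directory the target simply does not exist:
#   gmake: *** No rule to make target 'uninstall'.  Stop.
```

Note that make uninstall removes only the installed program files; a data
directory created by initdb has to be deleted separately.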


Re: floating point output

2019-09-05 Thread Luca Ferrari
On Thu, Sep 5, 2019 at 6:14 AM Adrian Klaver  wrote:
> Some examples would help explain your concerns.

I guess the problem is with extra_float_digits. If greater than zero,
the string representation of a real is at its minimum, that is, only
the significant digits are there:

restoredb=# set extra_float_digits to 1;
SET
restoredb=# select 2.001230::real;
 float4
---------
 2.00123
(1 row)


If lower than zero, the output switches to a "precise" mode, that is,
extra_float_digits is subtracted from the number of digits available
for a number:

restoredb=# set extra_float_digits to -2;
SET
restoredb=# select 2.001230::real;
 float4
---------
  2.001
(1 row)

restoredb=# set extra_float_digits to -3;
SET
restoredb=# select 2.001230::real;
 float4
---------
  2
(1 row)


However, the behavior is more obscure to me when extra_float_digits is
greater than 1:

restoredb=# set extra_float_digits to 1;
SET
restoredb=# select 2.1::real;
 float4
---------
 2.1
(1 row)

restoredb=# set extra_float_digits to 2;
SET
restoredb=# select 2.1::real;
  float4
---------
 2.099
(1 row)


>
> Bottom line, if you want precision use numeric.

Yeah, totally agree. I would also add that numeric is a little better documented.

Luca




Rebuild pg_toast from scratch?

2019-09-05 Thread Moreno Andreo

Hi,
    I have an issue with a Windows 10 PC with Postgres 9.1 x86.
Running some SELECTs we got "ERROR: could not open file 
"base/48121/784576": No such file or directory"


I then ran

select n.nspname AS tableschema,
       c.relname AS tablename
from pg_class c
inner join pg_namespace n on (c.relnamespace = n.oid)
where c.relfilenode = 784576;

and discovered that the involved table is pg_toast_49713.

Now I need to recreate an empty copy of this table (data can be 
recovered later), how can I achieve this?


Thanks in advance
Moreno.-







Re: floating point output

2019-09-05 Thread Rob Sargent



> On Sep 4, 2019, at 9:14 PM, Adrian Klaver  wrote:
> 
>> On 9/4/19 5:23 PM, Rob Sargent wrote:
>> I've found the description of floating point types (here 
>> ), 
>> but I'm looking for the rationale of the output format, particularly with 
>> respect to total digits presented (variable in a single select's output) and 
>> the dropping of a trailing zero (to some implying a loss of precision).  Is 
>> the code my only guide here?
> 
> Some examples would help explain your concerns.
> 
> Bottom line, if you want precision use numeric.
> 
> 
> 
When at my desk I'll give an example. Actual precision is not the issue. Purely
cosmetic.



Re: Rebuild pg_toast from scratch?

2019-09-05 Thread Tom Lane
Moreno Andreo  writes:
>      I have an issue with a Windows 10 PC with Postgres 9.1 x86.

You realize 9.1 is long out of support ...

> Running some SELECTs we got "ERROR: could not open file 
> "base/48121/784576": No such file or directory"

Ugh :-(

> Now I need to recreate an empty copy of this table (data can be 
> recovered later), how can I achieve this?

TRUNCATE ought to be enough.

regards, tom lane




Cannot restore dump when using IS DISTINCT FROM on a HSTORE column

2019-09-05 Thread Lele Gaifax
Hi all,

I'm hitting a problem very similar to the one described here[1]: one of my
databases have the following trigger

   CREATE TRIGGER trg_dn_customer_contents_950_reset_usable
     BEFORE UPDATE
     ON dn.customer_contents
     FOR EACH ROW
     WHEN (OLD.usable IS NOT NULL AND OLD.usable = NEW.usable
           AND (OLD.customer_content_category_id IS DISTINCT FROM NEW.customer_content_category_id
                OR OLD.title IS DISTINCT FROM NEW.title
                OR OLD.summary IS DISTINCT FROM NEW.summary
                OR OLD.description IS DISTINCT FROM NEW.description
                OR OLD.active IS DISTINCT FROM NEW.active
                OR OLD.languages IS DISTINCT FROM NEW.languages
                OR OLD.address_id IS DISTINCT FROM NEW.address_id
                OR OLD.schedule IS DISTINCT FROM NEW.schedule
                OR OLD.price IS DISTINCT FROM NEW.price))
     EXECUTE FUNCTION dn.reset_customer_content_usable()

where several of those columns are HSTOREs. Trying to restore a dump I get the
same error: "ERROR:  operator does not exist: public.hstore = public.hstore".
The source and target PG versions are the same, 11.5.

I followed the link[2] and read the related thread: as it is more than one
year old, I wonder if there is any news on this, or alternatively if there is
a recommended workaround: as that is the only place where I'm using IS
DISTINCT FROM against an HSTORE field, I could easily replace those
expressions with the more verbose equivalent like

   (OLD.x IS NULL AND NEW.x IS NOT NULL)
   OR
   (OLD.x IS NOT NULL AND NEW.x IS NULL)
   OR
   (OLD.x <> NEW.x)

lacking a better approach.

What would you suggest?

Thanks in advance,
ciao, lele.

[1] 
https://www.postgresql-archive.org/BUG-15695-Failure-to-restore-a-dump-ERROR-operator-does-not-exist-public-hstore-public-hstore-td6077272.html
[2] 
https://www.postgresql.org/message-id/flat/ffefc172-a487-aa87-a0e7-472bf29735c8%40gmail.com
-- 
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
l...@metapensiero.it  | -- Fortunato Depero, 1929.





Re: Cannot restore dump when using IS DISTINCT FROM on a HSTORE column

2019-09-05 Thread Adrian Klaver

On 9/5/19 7:16 AM, Lele Gaifax wrote:

> Hi all,
>
> I'm hitting a problem very similar to the one described here[1]: one of my
> databases have the following trigger [...] where several of those columns
> are HSTOREs. Trying to restore a dump I get the same error: "ERROR:
> operator does not exist: public.hstore = public.hstore". The source and
> target PG versions are the same, 11.5.
>
> [...] as that is the only place where I'm using IS DISTINCT FROM against
> an HSTORE field, I could easily replace those expressions with the more
> verbose equivalent [...]
>
> What would you suggest?


I don't know if progress has been made on this or not.

Are you able to use a plain text dump?

If so you might try changing:

SELECT pg_catalog.set_config('search_path', '', false);

to something that covers the path where you installed hstore.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Cannot restore dump when using IS DISTINCT FROM on a HSTORE column

2019-09-05 Thread Tom Lane
Lele Gaifax  writes:
> I'm hitting a problem very similar to the one described here[1]

Yeah, nothing's been done about that yet :-(

> where several of those columns are HSTOREs. Trying to restore a dump I get the
> same error: "ERROR:  operator does not exist: public.hstore = public.hstore".
> The source and target PG versions are the same, 11.5.
> I followed the link[2] and read the related thread: as it is more that one
> year old, I wonder if there is any news on this, or alternatively if there is
> a recommended workaround:

Probably the simplest solution is to override pg_dump's forcing of the
search_path.  It's going to look like this:

SELECT pg_catalog.set_config('search_path', '', false);

If you have the dump in a text file you could just edit it and remove
that line (it's only about a dozen lines in, typically).  Otherwise
you could delete it with sed or the like, roughly

pg_restore dumpfile | sed s'/^SELECT pg_catalog.set_config('search_path', '', 
false);//' | psql targetdb

There's been some discussion of providing a simpler way to do this,
but nothing's been done about that either.

Disclaimer: in principle, doing this leaves you open to SQL-injection-like
attacks during the restore, if some malicious user has had access to
either the original source database or your restore target DB.  That's
why we put in the search_path restriction.  But it doesn't help to be
secure if you can't get your work done ...

regards, tom lane
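As pasted, the sed command won't survive shell quoting (the pattern itself
contains single quotes inside a single-quoted script). A working variant of
the same idea, using "." in the pattern to stand in for the quote characters
("dumpfile" and "targetdb" are placeholders):

```shell
# Strip the forced search_path line while piping the dump into the target DB.
pg_restore dumpfile \
  | sed '/^SELECT pg_catalog.set_config(.search_path., .., false);/d' \
  | psql targetdb
```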




Re: Cannot restore dump when using IS DISTINCT FROM on a HSTORE column

2019-09-05 Thread Adrian Klaver

On 9/5/19 7:35 AM, Tom Lane wrote:

> Lele Gaifax  writes:
>> I'm hitting a problem very similar to the one described here[1]
>
> [...]
>
> pg_restore dumpfile | sed s'/^SELECT pg_catalog.set_config('search_path', '',
> false);//' | psql targetdb
>
> There's been some discussion of providing a simpler way to do this,
> but nothing's been done about that either.

My 2 cents would be for a --unsecure-path option to pg_dump/restore that
would drop the pg_catalog.set_config('search_path', '', false).






--
Adrian Klaver
adrian.kla...@aklaver.com




postgresql: Help with hstore extension

2019-09-05 Thread Lu, Dan
Hello,

We recently downloaded the "postgresql-10.7.tar.bz2" file from 
https://ftp.postgresql.org/pub/source/v10.7.

We configured our installation as follows:

./configure --prefix=/hostname/pg/PostgreSQL-10.7 --with-systemd

We created new database and things are working fine.

We are now trying to create the "hstore" extension and ran into the error below.

opm=# CREATE EXTENSION hstore;
ERROR:  could not open extension control file 
"/pgdbadevbal801/pg/PostgreSQL-10.7/share/postgresql/extension/hstore.control": 
No such file or directory

I am very new to PostgreSQL work and greatly appreciate your expertise in 
guiding me through this error.

Is there a way we can add that extension into this installation we did like the 
"with-systemd" option above?

Dan



IMPORTANT: The information contained in this email and/or its attachments is 
confidential. If you are not the intended recipient, please notify the sender 
immediately by reply and immediately delete this message and all its 
attachments. Any review, use, reproduction, disclosure or dissemination of this 
message or any attachment by an unintended recipient is strictly prohibited. 
Neither this message nor any attachment is intended as or should be construed 
as an offer, solicitation or recommendation to buy or sell any security or 
other financial instrument. Neither the sender, his or her employer nor any of 
their respective affiliates makes any warranties as to the completeness or 
accuracy of any of the information contained herein or that this message or any 
of its attachments is free of viruses.


Re: postgresql: Help with hstore extension

2019-09-05 Thread Tom Lane
"Lu, Dan"  writes:
> We are now trying to create "hstore" extension and ran into the error below.

> opm=# CREATE EXTENSION hstore;
> ERROR:  could not open extension control file 
> "/pgdbadevbal801/pg/PostgreSQL-10.7/share/postgresql/extension/hstore.control":
>  No such file or directory

You need to build/install the contrib/hstore module, or possibly you'd
just want to install all of contrib.  "make all" and "make install"
at the top level don't touch the contrib subtree.

If you already wiped your build tree, you'll have to recreate it ---
be careful to give the same configure arguments as you used before.

regards, tom lane
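Concretely, contrib is built from the same configured source tree. A sketch,
assuming the tree still carries the original configure options (the prefix in
the thread is just an example):

```shell
# From the top of the configured PostgreSQL source tree:
make -C contrib/hstore            # build just the hstore module
make -C contrib/hstore install    # installs hstore.control under the prefix

# ...or build and install every contrib module at once:
make -C contrib
make -C contrib install
```

After that, CREATE EXTENSION hstore; should find the control file.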




Re: Rebuild pg_toast from scratch?

2019-09-05 Thread Moreno Andreo

Hi Tom and thanks for your time,

On 05/09/19 15:53, Tom Lane wrote:
> Moreno Andreo  writes:
>> I have an issue with a Windows 10 PC with Postgres 9.1 x86.
>
> You realize 9.1 is long out of support ...

Absolutely :-)
I'm about to migrate it to 11

>> Now I need to recreate an empty copy of this table (data can be
>> recovered later), how can I achieve this?
>
> TRUNCATE ought to be enough.

truncate pg_toast_49713;
ERROR: relation "pg_toast_49713" does not exist

:-\
... or do I need to truncate the "master" table (the table blobs are in)?

Thanks





Re: Cannot restore dump when using IS DISTINCT FROM on a HSTORE column

2019-09-05 Thread Lele Gaifax
Tom Lane  writes:

> If you have the dump in a text file you could just edit it and remove
> that line (it's only about a dozen lines in, typically).  Otherwise
> you could delete it with sed or the like, roughly
>
> pg_restore dumpfile | sed s'/^SELECT pg_catalog.set_config('search_path', '', 
> false);//' | psql targetdb

Great, I can easily do that! I missed the "rewrite-dump-as-sql-script"
functionality of pg_restore.

Thanks a lot, also to Adrian!

bye, lele.
-- 
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
l...@metapensiero.it  | -- Fortunato Depero, 1929.





RE: postgresql: Help with hstore extension

2019-09-05 Thread Lu, Dan
Thanks for the information.

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Thursday, September 05, 2019 12:06 PM
To: Lu, Dan 
Cc: pgsql-gene...@postgresql.org; Kelly, Kevin 
Subject: Re: postgresql: Help with hstore extension

"Lu, Dan"  writes:
> We are now trying to create "hstore" extension and ran into the error below.

> opm=# CREATE EXTENSION hstore;
> ERROR:  could not open extension control file
> "/pgdbadevbal801/pg/PostgreSQL-10.7/share/postgresql/extension/hstore.
> control": No such file or directory

You need to build/install the contrib/hstore module, or possibly you'd just 
want to install all of contrib.  "make all" and "make install"
at the top level don't touch the contrib subtree.

If you already wiped your build tree, you'll have to recreate it --- be careful 
to give the same configure arguments as you used before.

regards, tom lane







Re: Rebuild pg_toast from scratch?

2019-09-05 Thread Tom Lane
Moreno Andreo  writes:
> On 05/09/19 15:53, Tom Lane wrote:
>> TRUNCATE ought to be enough.

> ... or do I need to truncate the "master" table (the table blobs are in)?

Yeah, that one.

regards, tom lane




Environment Variable for --data-checksum during initdb

2019-09-05 Thread Ray Cote
I'm trying to remember what environment variable can be set --data-checksum
before running postgresql-setup initdb. I know I've seen it but it escapes
my recall at the moment.

The PostgreSQL documentation has a nice page of environment variables for
when PostgreSQL is running, but I'm not finding the variable to set during
the initdb phase.

Any help to jog my memory greatly appreciated.
--Ray
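If the wrapper in question is the RHEL/CentOS postgresql-setup script, the
variable is PGSETUP_INITDB_OPTIONS, which passes extra flags through to
initdb (check your distribution's copy of the script to confirm; note the
actual initdb flag is --data-checksums, plural):

```shell
# One-shot environment assignment for the packaging wrapper:
PGSETUP_INITDB_OPTIONS="--data-checksums" postgresql-setup initdb

# Equivalent when running initdb directly (data directory is illustrative):
initdb --data-checksums -D /var/lib/pgsql/data
```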


Question from someone who is not trained in computer sciences

2019-09-05 Thread Judith Lacoste
Hi,

I think PostgreSQL is the solution for my needs, but I am not a 
programmer/coder.  If I can confirm PostgreSQL does what I need, I will have to 
hire someone to assist, I am willing to give the effort to learn myself but it 
may be difficult, my specialities are biology and microscopy.  Or perhaps the 
use of PostgreSQL is restricted to people highly trained in computer sciences? 


 

I have been looking around a lot through the PostgreSQL website, searching the 
archives, and I even contacted PostgreSQL people locally but I still don’t have 
a clear answer to my first question.  So I am posting it here with the hope to 
move on with PostgreSQL, or abandon the project. 


 

I plan to install the database on a server in the office. Me and my four 
colleagues will occasionally connect to this database when we are working in 
other locations (usually hospitals or universities). In such remote locations, 
we often do not have internet/network, yet we still need to access the 
database.  Currently, we use a system where a copy of the database lives on 
each of our laptops.  We can access all the information in the database despite 
being offline.  This local copy of the database is synchronized with the server 
once network becomes available again.  


My question is whether or not such a setup is possible with PostgreSQL?

Why am I interested in PostgreSQL?  First, my work has made me aware of how 
precious open source tools are.  Our main tools for data analysis are open 
source.  Commercial equivalents are black boxes which we try to avoid in the 
name of science reproducibility and transparency.  Secondly, the commercial 
software we are currently using is apparently based on PostgreSQL, so I am 
hoping that using PostgreSQL will make migration less painful. 

Thank you in advance,

Judith

Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Ben Chobot
On Sep 5, 2019, at 2:00 PM, Judith Lacoste  wrote:
> 
> Hi,
> 
> I plan to install the database on a server in the office. Me and my four 
> colleagues will occasionally connect to this database when we are working in 
> other locations (usually hospitals or universities). In such remote 
> locations, we often do not have internet/network, yet we still need to access 
> the database.  Currently, we use a system where a copy of the database lives 
> on each of our laptops.  We can access all the information in the database 
> despite being offline.  This local copy of the database is synchronized with 
> the server once network becomes available again.  
> 
> 
> My question is whether or not such a setup is possible with PostgreSQL?

This ranges from very easy to technically-possible-but-very-difficult, 
depending upon what you and your colleagues do with you local copies of the 
data. If your database schema and activities are such that your local edits 
will trample over each other, reconciling those changes automatically when you 
return to your office might be a challenge. If, however, you and your 
colleagues each change different rows, or even better make no changes on your 
local copies, then this becomes much easier.

While it isn't the simplest tool to set up, if you're planning to make edits 
to your local databases then Bucardo is a replication package that can give you 
what you want. (Really any multi-master replication tool should work, but 
Bucardo has an advantage in that it was designed with unreliable connectivity 
in mind.) 

If you're planning to have your local databases be read-only, then virtually 
any asynchronous replication strategy for postgres will work. (This means 
almost all of them.)



Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Peter Geoghegan
On Thu, Sep 5, 2019 at 2:00 PM Judith Lacoste  wrote:
> I think PostgreSQL is the solution for my needs, but I am not a 
> programmer/coder.

I don't think that it's restricted to people that are computer
scientists. At least, I certainly hope it isn't. SQL was originally
supposed to be something that is usable by domain experts/analysts,
rather than by computer people (that was at a time when the divide was
far larger than it is today).

> I plan to install the database on a server in the office. Me and my four 
> colleagues will occasionally connect to this database when we are working in 
> other locations (usually hospitals or universities). In such remote 
> locations, we often do not have internet/network, yet we still need to access 
> the database.  Currently, we use a system where a copy of the database lives 
> on each of our laptops.  We can access all the information in the database 
> despite being offline.  This local copy of the database is synchronized with 
> the server once network becomes available again.
>
> My question is whether or not such set up is possible with PostgreSQL?

Since you're a biologist, you may like to play around with the Mouse
Genome database using PostgreSQL:

http://www.informatics.jax.org/downloads/database_backups/

Any supported version of PostgreSQL will work. You'll need to use
pg_restore to restore the databases. Something like this will do it:

  pg_restore -d mgd  /path/to/mgd.postgres.dump

(I'm not sure what operating system you'll use -- something similar to
this invocation ought to work on Windows through cmd.exe, though.)

From there, you can play around with the database using a GUI tool
such as pgAdmin. I sometimes use this database to test certain things,
since it's the only example of a living, breathing PostgreSQL database
that you can just download that I am aware of. Its schema is probably
not an exemplar of good design, but it does seem reasonably well
thought out. I'm not a domain expert, though, so I can't really be
sure how good it is.

The nice thing about this approach is that you can figure it out using
a "top down" approach, by first understanding how the database is used
in practical terms, and then filling in the details of how the
application that it backs uses the database.

Last I checked, restoring this database will take about 30GB of disk
space on top of the dump file itself.

-- 
Peter Geoghegan




Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Basques, Bob (CI-StPaul)
Hi Judith,

Might be more than you are looking for, but... we've done something like this 
with a portable mapping system installed on a Raspberry Pi.  Really!! It works 
with Postgres and a web server to serve out a replicated, open-software-stack 
based mapping interface.

Our focus was on using the approach to take large amounts of constantly 
changing mapping data into the field without the dependence on a network.  
Since the RPis are so small, it’s easy to take them into the field as well as 
sync them in the office.

We used a whole service infrastructure with a postgres backend, web server and 
the GeoMoose product to display maps.  Our next iteration is to build some 
editing tools, but since the software stack is the same as our in-office 
version, the development works for both.

You can do some interesting tricks at the database level in order to pull 
things together from various edit users.  More info on request for this.  :c)

Here is our project from earlier this year:

https://github.com/klassenjs/rpi-workshop
https://github.com/klassenjs/rpi-workshop/blob/master/Workshop.org

We put on a Workshop with a slightly older version of the Raspberry Pi.  The 
newest version is even faster.  It can be operated in the field, in your pocket 
with a battery running all day, or plugged into a car charger, or both, for 
non-stop operation.  Just turn it on in the morning, and off at night.  We did 
some special database syncing processes with ours, but I’ll leave that as an 
exercise for you.  Basically you can use the RPi as a mobile server stack, 
complete with web forms and database connectivity.  You can also use them in a 
team approach where more than one wifi connection can be made to the RPi, 
because in the end, it's just a web server advertising over its own broadcast 
wifi.

Glad to answer any questions.

bobb




On Sep 5, 2019, at 4:00 PM, Judith Lacoste <jlaco...@miacellavie.com> wrote:


Hi,

I think PostgreSQL is the solution for my needs, but I am not a
programmer/coder. [...]

My question is whether or not such a setup is possible with PostgreSQL?

[...]

Thank you in advance,

Judith



Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Ron

On 9/5/19 4:00 PM, Judith Lacoste wrote:

> Hi,
>
> I think PostgreSQL is the solution for my needs, but I am not a 
> programmer/coder.  If I can confirm PostgreSQL does what I need, I will 
> have to hire someone to assist, I am willing to give the effort to learn 
> myself but it may be difficult, my specialities are biology and 
> microscopy.  Or perhaps the use of PostgreSQL is restricted to people 
> highly trained in computer sciences?



*You* the end-user won't be using Postgres; you'll be using *the 
application*.  The important question is whether Postgres has the features 
you need for your application.


> I have been looking around a lot through the PostgreSQL website, searching 
> the archives, and I even contacted PostgreSQL people locally but I still 
> don’t have a clear answer to my first question.  So I am posting it here 
> with the hope to move on with PostgreSQL, or abandon the project.




> I plan to install the database on a server in the office. Me and my four 
> colleagues will occasionally connect to this database when we are working 
> in other locations (usually hospitals or universities). In such remote 
> locations, we often do not have internet/network, yet we still need to 
> access the database.  Currently, we use a system where a copy of the 
> database lives on each of our laptops.  We can access all the information 
> in the database despite being offline.  This local copy of the database is 
> synchronized with the server once network becomes available again.
>
> My question is whether or not such a setup is possible with PostgreSQL?


Are you asking if you can do with PostgreSQL what you currently do with the 
existing database?



> Why am I interested in PostgreSQL?  First, my work has made me aware of 
> how precious open source tools are.  Our main tools for data analysis are 
> open source.  Commercial equivalents are black boxes which we try to avoid 
> in the name of science reproducibility and transparency.  Secondly, the 
> commercial software we are currently using is apparently based on 
> PostgreSQL, so I am hoping that using PostgreSQL will make migration less 
> painful.


If you're already using a Postgres-based system, then the presumable answer 
is "yes, it'll work".  However, Postgres can be modified by closed-source 
code, so the amount of work needed by the programmer you hire *might* be 
extreme.  We don't know because we don't know what your application does.



--
Angular momentum makes the world go 'round.


pg_restore issues with intarray

2019-09-05 Thread Kevin Brannen
I think I need some help to understand what’s going on here, because I can’t 
figure it out and Google isn’t helping…

This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few months, or so the 
plan is.) Pg code came from the community and we compiled it with no changes. 
This is on CentOS 6.7, though I think the OS doesn’t matter.

I’ve found that when we do a pg_restore, that sometimes we get “errors”. I 
quote that because it turns out they’re really only warnings we can ignore, but 
when you check the return code ($?) after pg_restore, you get a non-zero value.

We’re calling pg_restore like:

$PGPATH/pg_restore -jobs=$NCPU --dbname=x .

FWIW, the backup was created with:

$PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU --file=$EXP --dbname=x

The issue I’m seeing is:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3605; 2753 18784 OPERATOR 
FAMILY gin__int_ops postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"gin__int_ops" for access method "gin" already exists
Command was: CREATE OPERATOR FAMILY gin__int_ops USING gin;

pg_restore: [archiver (db)] Error from TOC entry 3606; 2753 18806 OPERATOR 
FAMILY gist__int_ops postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"gist__int_ops" for access method "gist" already exists
Command was: CREATE OPERATOR FAMILY gist__int_ops USING gist;

pg_restore: [archiver (db)] Error from TOC entry 3607; 2753 18829 OPERATOR 
FAMILY gist__intbig_ops postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family 
"gist__intbig_ops" for access method "gist" already exists
Command was: CREATE OPERATOR FAMILY gist__intbig_ops USING gist;

Those operators come from the intarray extension. Looking into the toc.dat, I 
can see entries like:

^@3079^@^E^@^@^@16441^@^H^@^@^@intarray^@   
^@^@^@EXTENSION^@^B^@^@^@^@<^@^@^@CREATE EXTENSION IF NOT EXISTS intarray WITH 
SCHEMA common;

^A^A^@^@^@^@^F^@^@^@public^A^A^@^@^@^@^G^@^@^@nmsroot^@^E^@^@^@false^@^B^@^@^@15^@^A^@^@^@3^A^A^@^@^@^@^@^@^@^@^@^U^N^@^@^@^@^@^@^@^@^D^@^@^@2753^@^E^@^@^@18784^@^L^@^@^@gin__int_ops^@^O^@^@^@OPERATOR
 FAMILY^@^B^@^@^@^@/^@^@^@CREATE OPERATOR FAMILY gin__int_ops USING gin;
…same sort of thing for the other 2…

Those seem reasonable to me. It seems the problem is with the CREATE OPERATOR 
FAMILY, as there is no "IF NOT EXISTS" for it. Considering we created a new DB 
for the restore and went with --create to pg_restore, there should be nothing 
in the DB to cause a conflict, yet somehow it already exists!

Interestingly, this doesn’t happen all the time. It seems that once we can get 
a restore in, it never happens again in any subsequent restores. My theory is 
that the databases (or really, installs) where this happens started at an 
earlier version that was pg_upgrade’d (probably from 9.5 or even from 9.3). But 
I can take the shell script that runs this, turn off checking the return code 
from pg_restore, and we’ve found no issues with the DB (hence I can treat them 
like warnings). Of course, the downside there is that if I always ignore the 
return code from pg_restore, how do I catch real errors? 😊
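One hedged way to keep catching real failures while tolerating these specific warnings is to capture pg_restore's stderr and filter out the known "already exists" lines before deciding whether the run failed. A minimal sketch; the file name, function name, and sample log are illustrative, not from the thread:

```shell
#!/bin/sh
# Sketch: decide whether a pg_restore stderr log holds "real" errors,
# i.e. anything besides the known "already exists" warnings.

has_real_errors() {
    # Print ERROR lines that are not "already exists"; the pipeline
    # exits 0 (success) if any such line was found.
    grep 'ERROR:' "$1" | grep -v 'already exists'
}

# Demo on a fabricated log resembling the output in the thread:
cat > /tmp/restore.err <<'EOF'
pg_restore: [archiver (db)] could not execute query: ERROR:  operator family "gin__int_ops" for access method "gin" already exists
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "foo" does not exist
EOF

if has_real_errors /tmp/restore.err; then
    echo "real errors found"
else
    echo "only ignorable warnings"
fi
```

In the real script, stderr would come from `pg_restore ... 2> /tmp/restore.err`, and the exit status would only be honored when the filter finds something.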

If it matters, here’s the version we’re dealing with for intarray:

# \dx
                         List of installed extensions
   Name   | Version | Schema |                 Description
----------+---------+--------+---------------------------------------------
 intarray | 1.2     | common | functions, operators, and index support for
                             | 1-D arrays of integers

If there is a logical reason for this, I’d appreciate an explanation so I can 
deal with it properly. If more research is needed, I can do that and pointers 
as to what to look for would be great. I have a VM and a backup I can restore & 
test as many times as needed.

Last minute discovery and thought. It just occurred to me to look in template1 
and intarray is in there. Is it possible that gives me a pre-existing extension 
& operators and therefore the source of the conflict? If so, is the solution as 
simple as deleting that extension from template1? In fact, should I delete all 
the extensions from template1 that it shows as being in the public schema?

Thanks!
Kevin


This e-mail transmission, and any documents, files or previous e-mail messages 
attached to it, may contain confidential information. If you are not the 
intended recipient, or a person responsible for delivering it to the intended 
recipient, you are hereby notified that any disclosure, distribution, review, 
copy or use of any of the information contained in or attached to this message 
is STRICTLY PROHIBITED. If you have received this transmission in error, please 
immediately notify us by reply e-mail, and destroy the original transmission 
and its attachments without reading them or saving them to disk. Thank you.

Re: Environment Variable for --data-checksum during initdb

2019-09-05 Thread Adrian Klaver

On 9/5/19 12:25 PM, Ray Cote wrote:
I'm trying to remember what environment variable can be set to get 
--data-checksums before running postgresql-setup initdb. I know I've seen 
it, but it escapes my recall at the moment.


AFAIK it does not exist, and I do not see anything here:

https://doxygen.postgresql.org/initdb_8c.html
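If "postgresql-setup initdb" refers to the Red Hat/Fedora wrapper script (an assumption; core initdb itself has no such variable), that wrapper reads PGSETUP_INITDB_OPTIONS, through which --data-checksums can be passed. A sketch:

```shell
#!/bin/sh
# Assumption: "postgresql-setup initdb" is the Red Hat/Fedora wrapper
# script. That wrapper reads PGSETUP_INITDB_OPTIONS; core initdb has no
# environment variable for this and needs --data-checksums (or -k) on
# its own command line.
PGSETUP_INITDB_OPTIONS="--data-checksums"
export PGSETUP_INITDB_OPTIONS

# postgresql-setup initdb   # would now pass --data-checksums to initdb
echo "PGSETUP_INITDB_OPTIONS=$PGSETUP_INITDB_OPTIONS"
```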




The PostgreSQL documentation has a nice page of environment variables 
for when PostgreSQL is running, but I'm not finding the variable to set 
during the initdb phase.


Any help to jog my memory greatly appreciated.
--Ray




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Adrian Klaver

On 9/5/19 2:00 PM, Judith Lacoste wrote:

Hi,

I think PostgreSQL is the solution for my needs, but I am not a 
programmer/coder.  If I can confirm PostgreSQL does what I need, I will 
have to hire someone to assist, I am willing to give the effort to learn 
myself but it may be difficult, my specialities are biology and 
microscopy.  Or perhaps the use of PostgreSQL is restricted to people 
highly trained in computer sciences? 


No, I am a biologist and I learned Postgres/database management. It is 
about organizing things, and that is a commonality with biology.



I have been looking around a lot through the PostgreSQL website, 
searching the archives, and I even contacted PostgreSQL people locally 
but I still don’t have a clear answer to my first question.  So I am 
posting it here with the hope to move on with PostgreSQL, or abandon the 
project. 


This would be the list to talk to.


I plan to install the database on a server in the office. Me and my four 
colleagues will occasionally connect to this database when we are 
working in other locations (usually hospitals or universities). In such 
remote locations, we often do not have internet/network, yet we still 
need to access the database.  Currently, we use a system where a copy of 
the database lives on each of our laptops.  We can access all the 
information in the database despite being offline.  This local copy of 
the database is synchronized with the server once network becomes 
available again. 
My question is whether or not such a set up is possible with PostgreSQL?


The set up is possible, though how you would implement it would depend 
on several factors:


1) What OS and versions are you using?

2) Are you working directly with the database or through an application?

3) What programming languages are you using?

There is also the option of using SQLite (https://sqlite.org/index.html) 
for your 'local' databases and then syncing them to Postgres.





Why am I interested in PostgreSQL?  First, my work has made me aware of 
how precious open source tools are.  Our main tools for data analysis 
are open source.  Commercial equivalents are black boxes which we try to 
avoid in the name of science reproducibility and transparency. 
  Secondly, the commercial software we are currently using is apparently 
based on PostgreSQL, so I am hoping that using PostgreSQL will make 
migration less painful.


Thank you in advance,

Judith



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_restore issues with intarray

2019-09-05 Thread Adrian Klaver

On 9/5/19 2:57 PM, Kevin Brannen wrote:
I think I need some help to understand what’s going here because I can’t 
figure it out and google isn’t helping…


This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few months, or 
so the plan is.) Pg code came from the community and we compiled it with 
no changes. This is on Centos 6.7, though I think the OS doesn’t matter.


I’ve found that when we do a pg_restore, that sometimes we get “errors”. 
I quote that because it turns out they’re really only warnings we can 
ignore, but when you check the return code ($?) after pg_restore, you 
get a non-zero value.


We’re calling pg_restore like:

$PGPATH/pg_restore -jobs=$NCPU --dbname=x .

FWIW, the backup was created with:

$PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU --file=$EXP 
--dbname=x


The options you are adding, --clean and --create, only have meaning for 
plain-text dumps. If you want those actions to occur on the restore, 
then add them to the pg_restore line. Though if you are going to create 
a new database, it will inherit objects from template1 (as you found 
below), assuming you have not set WITH TEMPLATE to something else.
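For a directory-format dump, moving those flags to the restore side might look like the sketch below. PGPATH, NCPU, and the dump directory are illustrative; with --create, --dbname names an existing database to connect to first, and the database to drop/recreate is taken from the dump itself:

```shell
#!/bin/sh
# Sketch: drop/recreate handled by pg_restore itself for a
# directory-format dump. Values below are placeholders.
PGPATH=/usr/local/pgsql/bin
NCPU=4
DUMPDIR=/path/to/dumpdir

# --if-exists (with --clean) avoids errors when the target is absent.
cmd="$PGPATH/pg_restore --clean --if-exists --create --jobs=$NCPU --dbname=template1 $DUMPDIR"
echo "$cmd"
```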




The issue I’m seeing is:

pg_restore: [archiver (db)] Error while PROCESSING TOC:

pg_restore: [archiver (db)] Error from TOC entry 3605; 2753 18784 
OPERATOR FAMILY gin__int_ops postgres


pg_restore: [archiver (db)] could not execute query: ERROR:  operator 
family "gin__int_ops" for access method "gin" already exists


     Command was: CREATE OPERATOR FAMILY gin__int_ops USING gin;

pg_restore: [archiver (db)] Error from TOC entry 3606; 2753 18806 
OPERATOR FAMILY gist__int_ops postgres


pg_restore: [archiver (db)] could not execute query: ERROR:  operator 
family "gist__int_ops" for access method "gist" already exists


     Command was: CREATE OPERATOR FAMILY gist__int_ops USING gist;

pg_restore: [archiver (db)] Error from TOC entry 3607; 2753 18829 
OPERATOR FAMILY gist__intbig_ops postgres


pg_restore: [archiver (db)] could not execute query: ERROR:  operator 
family "gist__intbig_ops" for access method "gist" already exists


     Command was: CREATE OPERATOR FAMILY gist__intbig_ops USING gist;

Those operators come from the *intarray* extension. Looking into the 
toc.dat, I can see entries like:


^@3079^@^E^@^@^@16441^@^H^@^@^@intarray^@   
^@^@^@EXTENSION^@^B^@^@^@^@<^@^@^@CREATE EXTENSION IF NOT EXISTS 
intarray WITH SCHEMA common;


^A^A^@^@^@^@^F^@^@^@public^A^A^@^@^@^@^G^@^@^@nmsroot^@^E^@^@^@false^@^B^@^@^@15^@^A^@^@^@3^A^A^@^@^@^@^@^@^@^@^@^U^N^@^@^@^@^@^@^@^@^D^@^@^@2753^@^E^@^@^@18784^@^L^@^@^@gin__int_ops^@^O^@^@^@OPERATOR 
FAMILY^@^B^@^@^@^@/^@^@^@CREATE OPERATOR FAMILY gin__int_ops USING gin;


…same sort of thing for the other 2…

Those seem reasonable to me. It seems the problem is with the CREATE 
OPERATOR as there is not “if not exists” for it. Considering we created 
a new DB for the restore and went with --create to pg_restore, there 
should be nothing in the DB to create a conflict, yet somehow it already 
exists!


Interestingly, this doesn’t happen all the time. It seems that once we 
can get a restore in, that it never happens again in any subsequent 
restores. My theory for this is that the databases (or really installs) 
where this happens started are an earlier version that was pg_upgrade’d 
(probably from 9.5 or even from 9.3). But I can take the shell script 
that runs this, turn off checking for the return code from pg_restore 
and we’ve found no issues with the DB (hence I can treat them like 
warnings). Of course, the downside there is that if I always ignore the 
return code from pg_restore, how do I catch real errors? 😊


If it matters, here’s the version we’re dealing with for intarray:

# \dx

   List of installed extensions

     Name    | Version |   Schema   |
Description


+-++

intarray   | 1.2 | common | functions, operators, and 
index support for 1-D arrays of integers


If there is a logical reason for this, I’d appreciate an explanation so 
I can deal with it properly. If more research is needed, I can do that 
and pointers as to what to look for would be great. I have a VM and a 
backup I can restore & test as many times as needed.


Last minute discovery and thought. It just occurred to me to look in 
template1 and intarray is in there. Is it possible that gives me a 
pre-existing extension & operators and therefore the source of the 
conflict? If so, is the solution as simple as deleting that extension 
from template1? In fact, should I delete all the extensions from 
template1 that it shows as being in the public schema?


Thanks!

Kevin


RE: pg_restore issues with intarray

2019-09-05 Thread Kevin Brannen
> From: Adrian Klaver 
>
> On 9/5/19 2:57 PM, Kevin Brannen wrote:
> > I think I need some help to understand what’s going here because I
> > can’t figure it out and google isn’t helping.
> >
> > This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few months,
> > or so the plan is.) Pg code came from the community and we compiled it
> > with no changes. This is on Centos 6.7, though I think the OS doesn’t 
> > matter.
> >
> > We’re calling pg_restore like:
> >
> > $PGPATH/pg_restore -jobs=$NCPU --dbname=x .
> >
> > FWIW, the backup was created with:
> >
> > $PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU --file=$EXP
> > --dbname=x
>
> The options you are adding for --clean, --create only have meaning for plain 
> text dumps. If you want those actions to occur on the restore then add them 
> to the pg_restore line. Though if you are going to create a new database it 
> will inherit objects from template1(as you found below), assuming you have 
> not set WITH TEMPLATE to something else.
>

Good point that I'm not doing plain text dumps.

Are you saying that my problem is that I need "--clean" on the pg_restore?
I can try that. The fact that this only happens on a few DBs and not all still
mystifies me. See below on the template...

> >
> > The issue I’m seeing is:
> >
> > pg_restore: [archiver (db)] Error while PROCESSING TOC:
> >
> > pg_restore: [archiver (db)] Error from TOC entry 3605; 2753 18784
> > OPERATOR FAMILY gin__int_ops postgres
> >
> > pg_restore: [archiver (db)] could not execute query: ERROR:  operator
> > family "gin__int_ops" for access method "gin" already exists
> >
> >  Command was: CREATE OPERATOR FAMILY gin__int_ops USING gin;
> >
...
> >
> > If there is a logical reason for this, I’d appreciate an explanation
> > so I can deal with it properly. If more research is needed, I can do
> > that and pointers as to what to look for would be great. I have a VM
> > and a backup I can restore & test as many times as needed.
> >
> > Last minute discovery and thought. It just occurred to me to look in
> > template1 and intarray is in there. Is it possible that gives me a
> > pre-existing extension & operators and therefore the source of the
> > conflict? If so, is the solution as simple as deleting that extension
> > from template1? In fact, should I delete all the extensions from
> > template1 that it shows as being in the public schema?

I've had time since I wrote the post to drop the intarray extension from
template1 and try the restore. Sadly, same errors happen, so it's not
template1 that's doing this to me, or at least not in a way I understand.

Kevin


Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Sam Gendler
If I was in a hurry to implement this, and I had a userbase that wasn't
very experienced with managing relational databases, I'd write some code to
automatically and periodically build a docker image with the latest data in
it (however often is sufficient to meet your needs), and then I'd set up a
1-line scheduled command on the laptops that would pull the latest docker
image to the user's laptop. Then I'd give them a script that runs the
docker container locally, and give them a client that knows how to connect
to it.  Assuming it is a read-only db when you aren't connected, I could
automate all of that in just a few hours in most environments, and the
changes that would be required on the individual laptops would be minimal.

If you need to be able to write to the db when disconnected, and pull those
writes into the central db instance when connected, that's a tougher
problem to solve which is more suited to some of the earlier suggestions.
But if you only need to read when remote and just want something that
works, is easy to put together, and can likely be built by an outside
consultant for minimal expense and even less ongoing support and
maintenance, I would just pay someone to read that first paragraph and set
it up for me and call it good. Any of the suggested solutions is going to
require a fair amount of administrative competence to really put together,
so going for one that shouldn't require much maintenance to keep
synchronized is your best bet.

On Thu, Sep 5, 2019 at 3:43 PM Adrian Klaver 
wrote:

> On 9/5/19 2:00 PM, Judith Lacoste wrote:
> > Hi,
> >
> > I think PostgreSQL is the solution for my needs, but I am not a
> > programmer/coder.  If I can confirm PostgreSQL does what I need, I will
> > have to hire someone to assist, I am willing to give the effort to learn
> > myself but it may be difficult, my specialities are biology and
> > microscopy.  Or perhaps the use of PostgreSQL is restricted to people
> > highly trained in computer sciences?
>
> No, I am biologist and I learned Postgres/database management. It is
> about organizing things and that is a commonality with biology.
>
>
> > I have been looking around a lot through the PostgreSQL website,
> > searching the archives, and I even contacted PostgreSQL people locally
> > but I still don’t have a clear answer to my first question.  So I am
> > posting it here with the hope to move on with PostgreSQL, or abandon the
> > project.
>
> This would be the list to talk to.
>
>
> > I plan to install the database on a server in the office. Me and my four
> > colleagues will occasionally connect to this database when we are
> > working in other locations (usually hospitals or universities). In such
> > remote locations, we often do not have internet/network, yet we still
> > need to access the database.  Currently, we use a system where a copy of
> > the database lives on each of our laptops.  We can access all the
> > information in the database despite being offline.  This local copy of
> > the database is synchronized with the server once network becomes
> > available again.
> > question is whether or not such set up is possible with PostgreSQL?
>
> The set up is possible, though how you would implement it would depend
> on several factors:
>
> 1) What OS and versions are you using?
>
> 2) Are you working directly with the database or through an application?
>
> 3) What programming languages are you using?
>
> There is also the option of using Sqlite(https://sqlite.org/index.html)
> for your 'local' databases and then syncing them to Postgres.
>
>
> >
> > Why am I interested in PostrgreSQL?  First, my work has made me aware of
> > how precious open source tools are.  Our main tools for data analysis
> > are open source.  Commercial equivalents are black boxes which we try to
> > avoid in the name of science reproducibility and transparency.
> >   Secondly, the commercial software we are currently using is apparently
> > based on PostgreSQL, so I am hoping that using PostgreSQL will make
> > migration less painful.
> >
> > Thank you in advance,
> >
> > Judith
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


Re: pg_restore issues with intarray

2019-09-05 Thread Adrian Klaver

On 9/5/19 4:06 PM, Kevin Brannen wrote:

From: Adrian Klaver 

On 9/5/19 2:57 PM, Kevin Brannen wrote:

I think I need some help to understand what’s going here because I
can’t figure it out and google isn’t helping.

This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few months,
or so the plan is.) Pg code came from the community and we compiled it
with no changes. This is on Centos 6.7, though I think the OS doesn’t matter.

We’re calling pg_restore like:

$PGPATH/pg_restore -jobs=$NCPU --dbname=x .

FWIW, the backup was created with:

$PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU --file=$EXP
--dbname=x


The options you are adding for --clean, --create only have meaning for plain 
text dumps. If you want those actions to occur on the restore then add them to 
the pg_restore line. Though if you are going to create a new database it will 
inherit objects from template1(as you found below), assuming you have not set 
WITH TEMPLATE to something else.



Good point that I'm not doing plain text dumps.

Are you saying that my problem is that I need "--clean" on the pg_restore?


No, just that if you were expecting the clean to happen on the restore 
you would be disappointed.



I can try that. The fact that this only happens on a few DBs and not all still
mystifies me. See below on the template..


My guess is you were restoring into a database with preexisting objects 
because neither create nor clean was being done.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_restore issues with intarray

2019-09-05 Thread Adrian Klaver

On 9/5/19 4:24 PM, Adrian Klaver wrote:

On 9/5/19 4:06 PM, Kevin Brannen wrote:

From: Adrian Klaver 

On 9/5/19 2:57 PM, Kevin Brannen wrote:

I think I need some help to understand what’s going here because I
can’t figure it out and google isn’t helping.

This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few months,
or so the plan is.) Pg code came from the community and we compiled it
with no changes. This is on Centos 6.7, though I think the OS 
doesn’t matter.


We’re calling pg_restore like:

$PGPATH/pg_restore -jobs=$NCPU --dbname=x .

FWIW, the backup was created with:

$PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU --file=$EXP
--dbname=x


The options you are adding for --clean, --create only have meaning 
for plain text dumps. If you want those actions to occur on the 
restore then add them to the pg_restore line. Though if you are going 
to create a new database it will inherit objects from template1(as 
you found below), assuming you have not set WITH TEMPLATE to 
something else.




Good point that I'm not doing plain text dumps.

Are you saying that my problem is that I need "--clean" on the 
pg_restore?


No, just that if you were expecting the clean to happen on the restore 
you would be disappointed.


I can try that. The fact that this only happens on a few DBs and not 
all still

mystifies me. See below on the template..


My guess is you were restoring into a database with preexisting objects 
because neither create nor clean was being done.


Just be aware that --clean & --create drop the existing database before 
creating a new one.









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Question from someone who is not trained in computer sciences

2019-09-05 Thread Judith Lacoste
Hi,

I am very grateful for all your prompt replies, that’s fantastic since I got an 
answer to my question.  I am going to digest all of it, and then decide what to 
do next.  Once again, thanks a lot!

Best regards,

Judith


> On 2019-Sep-05, at 19:08, Sam Gendler  wrote:
> 
> If I was in a hurry to implement this, and I had a userbase that wasn't very 
> experienced with managing relational databases, I'd write some code to 
> automatically and periodically build a docker image with the latest data in 
> it (however often is sufficient to meet your needs), and then I'd set up a 
> 1-line scheduled command on the laptops that would pull the latest docker 
> image to the user's laptop. Then I'd give them a script that runs the docker 
> container locally, and give them a client that knows how to connect to it.  
> Assuming it is a read-only db when you aren't connected, I could automate all 
> of that in just a few hours in most environments, and the changes that would 
> be required on the individual laptops would be minimal.
> 
> If you need to be able to write to the db when disconnected, and pull those 
> writes into the central db instance when connected, that's a tougher problem 
> to solve which is more suited to some of the earlier suggestions.  But if you 
> only need to read when remote and just want something that works, is easy to 
> put together, and can likely be built by an outside consultant for minimal 
> expense and even less ongoing support and maintenance, I would just pay 
> someone to read that first paragraph and set it up for me and call it good. 
> Any of the suggested solutions is going to require a fair amount of 
> administrative competence to really put together, so going for one that 
> shouldn't require much maintenance to keep synchronized is your best bet. 
> 
> On Thu, Sep 5, 2019 at 3:43 PM Adrian Klaver  > wrote:
> On 9/5/19 2:00 PM, Judith Lacoste wrote:
> > Hi,
> > 
> > I think PostgreSQL is the solution for my needs, but I am not a 
> > programmer/coder.  If I can confirm PostgreSQL does what I need, I will 
> > have to hire someone to assist, I am willing to give the effort to learn 
> > myself but it may be difficult, my specialities are biology and 
> > microscopy.  Or perhaps the use of PostgreSQL is restricted to people 
> > highly trained in computer sciences? 
> 
> No, I am biologist and I learned Postgres/database management. It is 
> about organizing things and that is a commonality with biology.
> 
> 
> > I have been looking around a lot through the PostgreSQL website, 
> > searching the archives, and I even contacted PostgreSQL people locally 
> > but I still don’t have a clear answer to my first question.  So I am 
> > posting it here with the hope to move on with PostgreSQL, or abandon the 
> > project. 
> 
> This would be the list to talk to.
> 
> 
> > I plan to install the database on a server in the office. Me and my four 
> > colleagues will occasionally connect to this database when we are 
> > working in other locations (usually hospitals or universities). In such 
> > remote locations, we often do not have internet/network, yet we still 
> > need to access the database.  Currently, we use a system where a copy of 
> > the database lives on each of our laptops.  We can access all the 
> > information in the database despite being offline.  This local copy of 
> > the database is synchronized with the server once network becomes 
> > available again. 
> > question is whether or not such set up is possible with PostgreSQL?
> 
> The set up is possible, though how you would implement it would depend 
> on several factors:
> 
> 1) What OS and versions are you using?
> 
> 2) Are you working directly with the database or through an application?
> 
> 3) What programming languages are you using?
> 
> There is also the option of using Sqlite(https://sqlite.org/index.html 
> ) 
> for your 'local' databases and then syncing them to Postgres.
> 
> 
> > 
> > Why am I interested in PostrgreSQL?  First, my work has made me aware of 
> > how precious open source tools are.  Our main tools for data analysis 
> > are open source.  Commercial equivalents are black boxes which we try to 
> > avoid in the name of science reproducibility and transparency. 
> >   Secondly, the commercial software we are currently using is apparently 
> > based on PostgreSQL, so I am hoping that using PostgreSQL will make 
> > migration less painful.
> > 
> > Thank you in advance,
> > 
> > Judith
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 
> 
> 




RE: pg_restore issues with intarray

2019-09-05 Thread Kevin Brannen
> On 9/5/19 4:24 PM, Adrian Klaver wrote:
> > On 9/5/19 4:06 PM, Kevin Brannen wrote:
> >>> From: Adrian Klaver 
> >>>
> >>> On 9/5/19 2:57 PM, Kevin Brannen wrote:
>  I think I need some help to understand what’s going here because I
>  can’t figure it out and google isn’t helping.
> 
>  This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few
>  months, or so the plan is.) Pg code came from the community and we
>  compiled it with no changes. This is on Centos 6.7, though I think
>  the OS doesn’t matter.
> 
>  We’re calling pg_restore like:
> 
>  $PGPATH/pg_restore -jobs=$NCPU --dbname=x .
> 
>  FWIW, the backup was created with:
> 
>  $PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU
>  --file=$EXP --dbname=x
> >>>
> >>> The options you are adding for --clean, --create only have meaning
> >>> for plain text dumps. If you want those actions to occur on the
> >>> restore then add them to the pg_restore line. Though if you are
> >>> going to create a new database it will inherit objects from
> >>> template1(as you found below), assuming you have not set WITH
> >>> TEMPLATE to something else.
> >>>
> >>
> >> Good point that I'm not doing plain text dumps.
> >>
> >> Are you saying that my problem is that I need "--clean" on the
> >> pg_restore?

Not the issue, that made various things worse. :)

> No, just that if you were expecting the clean to happen on the
> restore you would be disappointed.

To be crystal clear, on restore I do this from a bash script:

# move old to the side in case we need this on failure
$PGPATH/psql -d template1 -c "DROP DATABASE IF EXISTS save$db;"
$PGPATH/psql -d template1 -c "ALTER DATABASE $db RENAME TO save$db;"
# restore
$PGPATH/createdb -e -O $dbowner -T template0 $db
$PGPATH/pg_restore $VERBOSE --jobs=$NCPU --dbname=$db .

So by using template0, I'm expecting nothing to be there and the restore
to put everything in there I need to get back to the point where the
backup/dump happened. This is why I'm surprised I'm getting this error.

It feels like the restore is adding the intarray extension, which does a
CREATE OPERATOR FAMILY on its own, then later the restore does CREATE OPERATOR
FAMILY again, causing the problem. Yet this doesn't happen on most of our
databases, just a few. It's maddening to me.

> >> I can try that. The fact that this only happens on a few DBs and not
> >> all still mystifies me. See below on the template..
>
> My guess is you where restoring into a database with preexisting
> objects because neither create or clean was being done.

Shouldn't be happening with that createdb command. Hmm, I wonder what
I'd see if I put a "psql" command with "\dx" after the createdb and before
the restore...

Nope, the only extension is plpgsql, so the problem is coming from the
restore. Maybe I gave a bad option to pg_dump, but pg_restore seems to be
the issue. It really makes me want to modify the toc.dat file and hack
out those CREATE OPERATOR FAMILY lines and see what happens.
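An alternative to hand-editing toc.dat: pg_restore can emit its TOC as an editable text list (--list) and restore only the entries left in a filtered copy (--use-list). A sketch demonstrating just the filtering step on a fabricated TOC list (entry numbers and object names are illustrative):

```shell
#!/bin/sh
# Sketch: skip the conflicting TOC entries without touching toc.dat.
# The real flow would be (paths illustrative):
#   pg_restore --list /path/to/dumpdir > restore.list
#   grep -v 'OPERATOR FAMILY' restore.list > restore.filtered
#   pg_restore --jobs=$NCPU --dbname=$db --use-list=restore.filtered /path/to/dumpdir

# Demo of the filtering step on a fabricated TOC list:
cat > /tmp/restore.list <<'EOF'
3605; 2753 18784 OPERATOR FAMILY common gin__int_ops postgres
3700; 1259 18900 TABLE public samples nmsroot
EOF
grep -v 'OPERATOR FAMILY' /tmp/restore.list > /tmp/restore.filtered
cat /tmp/restore.filtered
```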

K.


Re: pg_restore issues with intarray

2019-09-05 Thread Adrian Klaver

On 9/5/19 5:05 PM, Kevin Brannen wrote:

On 9/5/19 4:24 PM, Adrian Klaver wrote:

On 9/5/19 4:06 PM, Kevin Brannen wrote:

From: Adrian Klaver 

On 9/5/19 2:57 PM, Kevin Brannen wrote:

I think I need some help to understand what’s going on here because I
can’t figure it out and google isn’t helping.

This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few
months, or so the plan is.) Pg code came from the community and we
compiled it with no changes. This is on Centos 6.7, though I think
the OS doesn’t matter.

We’re calling pg_restore like:

$PGPATH/pg_restore --jobs=$NCPU --dbname=x .

FWIW, the backup was created with:

$PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU
--file=$EXP --dbname=x


The options you are adding for --clean, --create only have meaning
for plain text dumps. If you want those actions to occur on the
restore then add them to the pg_restore line. Though if you are
going to create a new database it will inherit objects from
template1(as you found below), assuming you have not set WITH
TEMPLATE to something else.



Good point that I'm not doing plain text dumps.

Are you saying that my problem is that I need "--clean" on the
pg_restore?


Not the issue, that made various things worse. :)


No, just that if you were expecting the clean to happen on the
restore you would be disappointed.


To be crystal clear, on restore I do this from a bash script:

# move old to the side in case we need this on failure
$PGPATH/psql -d template1 -c "DROP DATABASE IF EXISTS save$db;"
$PGPATH/psql -d template1 -c "ALTER DATABASE $db RENAME TO save$db;"
# restore
$PGPATH/createdb -e -O $dbowner -T template0 $db
$PGPATH/pg_restore $VERBOSE --jobs=$NCPU --dbname=$db .

So by using template0, I'm expecting nothing to be there and the restore
to put everything in there I need to get back to the point where the
backup/dump happened. This is why I'm surprised I'm getting this error.

It feels like the restore is adding the intarray extension, which does a
CREATE OPERATOR FAMILY on its own, then later the restore does CREATE OPERATOR
FAMILY again, causing the problem. Yet this doesn't happen on most of our
databases, just a few. It's maddening to me.


I can try that. The fact that this only happens on a few DBs and not
all still mystifies me. See below on the template..


My guess is you were restoring into a database with preexisting
objects because neither create nor clean was being done.


Shouldn't be happening with that createdb command. Hmm, I wonder what
I'd see if I put a "psql" command with "\dx" after the createdb and before
the restore...


What does \dx show in the database you are taking the dump from?



Nope, the only extension is plpgsql, so the problem is coming from the
restore. Maybe I gave a bad option to pg_dump, but pg_restore seems to be
the issue. It really makes me want to modify the toc.dat file and hack
out those CREATE OPERATOR FAMILY lines and see what happens.


What if you do a restore to a file only the schema e.g.:

pg_restore -s -f some_file.sql

This will create a plain text version of only the schema objects in 
some_file.sql instead of restoring to the database. It might help shed 
some light.
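
For example (the schema file contents below are invented for illustration;
with a real dump you'd point pg_restore at the dump directory, something like
`pg_restore -s -f schema.sql /path/to/dump/dir`), the smoking gun would be a
stray CREATE OPERATOR FAMILY sitting next to the CREATE EXTENSION:

```shell
# Hypothetical schema.sql standing in for the pg_restore -s output:
cat > schema.sql <<'EOF'
CREATE EXTENSION IF NOT EXISTS intarray WITH SCHEMA public;
CREATE OPERATOR FAMILY public.gin__int_ops USING gin;
EOF
# Both statements present together is the collision in the making,
# since CREATE EXTENSION already builds the operator family itself.
grep -nE 'CREATE (EXTENSION|OPERATOR FAMILY)' schema.sql
```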




K.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_restore issues with intarray

2019-09-05 Thread Jerry Sievers
Kevin Brannen  writes:

>> On 9/5/19 4:24 PM, Adrian Klaver wrote:
>> > On 9/5/19 4:06 PM, Kevin Brannen wrote:
>> >>> From: Adrian Klaver 
>> >>>
>> >>> On 9/5/19 2:57 PM, Kevin Brannen wrote:
>>  I think I need some help to understand what’s going on here because I
>>  can’t figure it out and google isn’t helping.
>> 
>>  This is for Pg 9.6.5. (Yes, we are upgrading to 12.x in a few
>>  months, or so the plan is.) Pg code came from the community and we
>>  compiled it with no changes. This is on Centos 6.7, though I think
>>  the OS doesn’t matter.
>> 
>>  We’re calling pg_restore like:
>> 
>>  $PGPATH/pg_restore --jobs=$NCPU --dbname=x .
>> 
>>  FWIW, the backup was created with:
>> 
>>  $PGPATH/pg_dump --clean --create --format=d --jobs=$NCPU
>>  --file=$EXP --dbname=x
>> >>>
>> >>> The options you are adding for --clean, --create only have meaning
>> >>> for plain text dumps. If you want those actions to occur on the
>> >>> restore then add them to the pg_restore line. Though if you are
>> >>> going to create a new database it will inherit objects from
>> >>> template1(as you found below), assuming you have not set WITH
>> >>> TEMPLATE to something else.
>> >>>
>> >>
>> >> Good point that I'm not doing plain text dumps.
>> >>
>> >> Are you saying that my problem is that I need "--clean" on the
>> >> pg_restore?
>
> Not the issue, that made various things worse. :)
>
>> No, just that if you were expecting the clean to happen on the
>> restore you would be disappointed.
>
> To be crystal clear, on restore I do this from a bash script:
>
> # move old to the side in case we need this on failure
> $PGPATH/psql -d template1 -c "DROP DATABASE IF EXISTS save$db;"
> $PGPATH/psql -d template1 -c "ALTER DATABASE $db RENAME TO save$db;"
> # restore
> $PGPATH/createdb -e -O $dbowner -T template0 $db
> $PGPATH/pg_restore $VERBOSE --jobs=$NCPU --dbname=$db .
>
> So by using template0, I'm expecting nothing to be there and the restore
> to put everything in there I need to get back to the point where the
> backup/dump happened. This is why I'm surprised I'm getting this error.
>
> It feels like the restore is adding the intarray extension, which does a
> CREATE OPERATOR FAMILY on its own, then later the restore does CREATE OPERATOR
> FAMILY again, causing the problem. Yet this doesn't happen on most of our
> databases, just a few. It's maddening to me.
>

I've seen this sort of problem before.

It was due to some legacy DBs where I work having a few missing
extension membership registrations.

pg_dump wants to include any such objects in the output, which may run
afoul of their having already been created by the extension load.
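
If that's the cause here, one way to check is to look at extension membership
directly on the source database. A sketch only (per the pg_depend catalog,
deptype 'e' marks an extension member):

```sql
-- Operator families and the extension, if any, that claims them:
SELECT opf.opfname, ext.extname
FROM   pg_opfamily opf
LEFT   JOIN pg_depend dep ON dep.classid = 'pg_opfamily'::regclass
                         AND dep.objid   = opf.oid
                         AND dep.deptype = 'e'
LEFT   JOIN pg_extension ext ON ext.oid = dep.refobjid;

-- A NULL extname next to e.g. gin__int_ops means pg_dump will emit its own
-- CREATE OPERATOR FAMILY, colliding with the one CREATE EXTENSION builds.
-- Re-registering the membership is along these lines:
--   ALTER EXTENSION intarray ADD OPERATOR FAMILY gin__int_ops USING gin;
```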

HTH



-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net




Seeking New Members for the Community Code of Conduct Committee

2019-09-05 Thread Stacey Haysler
This message is being sent from the Community Code of Conduct Committee, with 
the approval of the Core Team.
The PostgreSQL Community Code of Conduct Committee is approaching our first 
year anniversary. As part of the Community CoC policy, the Committee membership 
is to be refreshed on an annual basis.
We are seeking up to two (2) volunteers to serve on the Committee for the 
coming year, October 1, 2019 - September 30, 2020.
We are seeking people who reflect the diversity of the PostgreSQL community, 
with the goal to have members from multiple countries and varied demographics.
The time commitment for Committee involvement varies, based on internal 
administrative work and the number of active investigations. We estimate an 
average of 5 to 10 hours per month, but that could grow if the number of 
incident reports rises.
If you are interested, please complete the questionnaire below, and email your 
responses to the Committee at c...@postgresql.org  
no later than September 17, 2019, at 11:00 PM AOE. You can check your local 
time against AOE at https://time.is/Anywhere_on_Earth 

The questionnaire:
Your name:
Current employer: 
Current country of residence:
(We ask for employer and residence because one of the goals of the Committee is 
to have representation from a variety of geographical areas. We also want to 
avoid a concentration of members from one company.)
1. What interests you about being on the CoC Committee?
2. Have you been on another CoC Committee, or had a similar role at another 
organization? (Prior experience is not required, it's just helpful to know 
everyone's background.)
3. What else do you want to tell us about yourself that is helpful for us to 
know about your potential involvement with the CCoC Committee?
Please be sure to send your reply to the CCoC email listed above, and not as a 
reply to this message. Thank you.
Regards,
Stacey

Stacey Haysler
Chair
PostgreSQL Community Code of Conduct Committee