Failed to initialize 13.2-1 cluster on Windows

2021-03-31 Thread Martin
Hello, 


I'm trying to install 13.2-1 on Windows 10. Previous version was 10.5,
installed using the EnterpriseDB installer, without any problems. That
version was completely removed before attempting to install 13.2. 


The EnterpriseDB installer failed to initialize the cluster. The error says
that initdb can find the postgres executable, but it is of a different
version. I've read some threads that described permission problems, but
I couldn't solve it that way. What I've tried: 

- Installing everything by default (including all default folders) 


- Installing in different folders, on the C: and D: drives, outside of
Program Files. 

- Running the installer with the "Run as administrator" option 


- Tried executing "initdb -D path/to/data/folder" manually after the
installer failed. Same message. 


- Manually altered permissions on the installation folder, giving local users
full control over the whole folder (programs and data) 


- Tried the zip file without the installer. Uncompressed everything into a
folder outside Program Files and ran initdb by hand. Error: "initdb:
error: El programa «postgres» fue encontrado por
«C:\Apps\PostgreSQL\13\bin/initdb», pero no es de la misma versión que
initdb." BTW, I don't know why the messages are in Spanish. Although I'm
located in Argentina, my Windows and every program is installed in
English, and English is selected as the primary language. But, nevertheless,
that's unimportant. The translation is "the postgres program was found by
initdb, but it is not of the same version as initdb". 


- When executing postgres.exe directly, it says "FATAL: XX000: postgres:
no se pudo localizar el ejecutable postgres correspondiente. UBICACIÓN:
getInstallationPaths,
d:\pginstaller_13.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:1489".
That would translate as "could not locate corresponding postgres
executable". 


I've read somewhere a recommendation to create a separate user
"postgres". Although that's common practice on Linux, I've never done it
on previous Windows installations. Would that be something to try? 


I'm running out of ideas for what else to try. Any help will be much
appreciated. 

Regards, 


Martin

Re: Failed to initialize 13.2-1 cluster on Windows

2021-03-31 Thread Martin

Update: I've just tried version 12.6-1 and it works perfectly fine.
Clearly, there is some bug in the published 13.2-1 version. 


On 2021-03-31 16:35, Martin wrote:

Hello, 

I'm trying to install 13.2-1 on Windows 10. Previous version was 10.5, installed using the EnterpriseDB installer, without any problems. That version was completely removed before attempting to install 13.2. 

EnterpriseDB installer failed to initialize the cluster. The error says that initdb can find postgres executable but it's of a different version. I've read some threads that described permission problems, but I couldn't solve it. What I've tried: 

- Installing everything by default (including all default folders) 

- Installing on different folders, on C: and D: disks, outside of Program Files. 

- Installer is run with "Run as administrator" option 

- Tried to execute "initdb -D path/to/data/folder" manually after the installer failed. Same message 

- Manually altered permission on installation folder, giving local users full control on the whole folder (programs and data) 

- Tried the zip file without installer. Uncompressed everything on a folder outside program files and running initdb by hand. Error: "initdb: error: El programa «postgres» fue encontrado por «C:\Apps\PostgreSQL\13\bin/initdb», pero no es de la misma versión que initdb." BTW, I don't know why messages are in spanish. Although I'm located in Argentina, my windows and every program is installed in english, and english is selected as primary language. But, nevertheless, that's unimportant. Translation is "postgres program was found by initdb, but it's a different version". 

- When executing postgres.exe directly, it says "FATAL: XX000: postgres: no se pudo localizar el ejecutable postgres correspondiente. UBICACIÓN: getInstallationPaths, d:\pginstaller_13.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:1489". That would translate as "could not locate corresponding postgres executable". 

I've read somewhere a recommendation to create a separate user "postgres". Although that's common practice on Linux, I've never done it on previous Windows installations. Would that be something to try? 

I'm running out of ideas for what else to try. Any help will be much appreciated. 

Regards, 


Martin

Re: Enforce primary key on every table during dev?

2018-03-01 Thread Martin Moore
 

On 03/01/2018 11:47 AM, Daevor The Devoted wrote:

>I can't stand synthetic keys.  By their very nature, they're so purposelessly 
>arbitrary, and allow you to insert garbage into the table.

How does not having a ‘real’ PK allow you to insert garbage and a ‘real’ one 
prevent garbage? 

If there’s no natural PK, at least a synthetic one will allow very quick record 
identification if used.

 

 

Martin.

 



Re: What is wrong with my pgadmin?

2018-03-05 Thread Martin Moore
Using pgAdmin 4 2.0 on PG 9.6 and 10, I get a division-by-zero error.



On 05/03/2018, 14:22, "Adrian Klaver"  wrote:

On 03/05/2018 06:05 AM, Łukasz Jarych wrote:
> Hi Guys,
> 
> what is wrong with my pgAdmin?
> 
> I am creating wrong query:
> 
> select 1/0 but still this is executed!

Well it will be executed, but it should throw an error. Can you confirm 
that running select 1/0; in psql does indeed throw an error:

test=# select 1/0;
ERROR:  division by zero

Also:

1) What version of pgAdmin4 are you running?

2) What version of Postgres is it running against?

> 
> 
> 
> why?
> 
> Best wishes,
> Jacek
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com







a very primitive question about division

2018-03-07 Thread Martin Mueller
I have a very primitive question about division to which I can’t find an 
obvious answer in the documentation.

Given two values defined as integers, how do I divide one by the other and get 
an answer with two decimals, e.g. 3/4 = 0.75?

This is very simple in MySQL and seems to be oddly contorted in Postgres. I 
spent half an hour on the Web and with the documentation but couldn’t find a 
simple example.

Probably my stupidity.


Re: a very primitive question about division

2018-03-07 Thread Martin Mueller
Thanks.  So  round(before1550/colfreq::numeric, 2)  produces the desired result.

The explanations and examples of string functions in the Postgres documentation 
are a model of clarity. The explanations and examples of basic arithmetic 
operations  are not.  There is room for improvement there.

From: "David G. Johnston" 
Date: Wednesday, March 7, 2018 at 11:23 AM
To: Martin Mueller 
Cc: "pgsql-general@lists.postgresql.org" 
Subject: Re: a very primitive question about division

On Wed, Mar 7, 2018 at 10:21 AM, Martin Mueller 
<martinmuel...@northwestern.edu> wrote:
Given two values defined as integers, how do I divide one by the other and get 
an answer with two decimals, e.g. 3/4 = 0.75.

Cast one of them to numeric.

select 3/4::numeric

David J.



Re: a very primitive question about division

2018-03-07 Thread Martin Mueller
In Table 9.5  the division of 9 by 4 is indeed described clearly. But there is 
no example of 4/9 and the different ways of formatting it as a decimal fraction 
with different options for rounding or a percentage.  Two or three added 
examples would make life easier for folks who have not progressed much beyond 
8th grade math.
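For instance, a few worked examples of the kind I have in mind (what I
eventually pieced together myself, using 4 and 9 as the operands):

select round(4/9::numeric, 2);               -- 0.44, a plain two-decimal fraction
select to_char(4/9::numeric, 'FM0.00');      -- '0.44', formatted with to_char
select round(100 * 4/9::numeric, 1) || '%';  -- '44.4%', as a percentage
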

From: "David G. Johnston" 
Date: Wednesday, March 7, 2018 at 11:43 AM
To: Martin Mueller 
Cc: "pgsql-general@lists.postgresql.org" 
Subject: Re: a very primitive question about division

On Wed, Mar 7, 2018 at 10:30 AM, Martin Mueller 
<martinmuel...@northwestern.edu> wrote:
Thanks.  So  round(before1550/colfreq::numeric, 2)  produces the desired result.

The explanations and examples of string functions in the Postgres documentation 
are a model of clarity. The explanations and examples of basic arithmetic 
operations  are not.  There is room for improvement there.


Possibly.  Patches (or even just email suggestions - though the former are 
generally more apt to get applied) detailing specific improvements to make are 
welcome.

The description "division (integer division truncates the result)" seems 
reasonably clear - if you don't want the result truncated, and have two 
integers, you have to make one of the inputs a non-integer.  Of the various 
options I tend to choose numeric though others are possible.

David J.



Re: a very primitive question about division

2018-03-07 Thread Martin Mueller
I see the logic of that, but I also think that this is not how many users think 
when they want to know how to get results for 4/9.  Note that division is a 
great stumbling block in middle school.  They want to know how to divide four 
by nine and don’t want to think about division and formatting as separate 
operations to be looked up in separate chapters. Perhaps they should think that 
way, but they don’t, and as Adenauer, modern Germany’s first chancellor,  
observed: you have to take people as they come because there are no others.

At the least, Table 9.5 in section 9.3 could have an NB about the problem of 
decimals in division, together with an example of how to do 4/9 in different 
formats, and a reference to the fuller discussion.

From: "David G. Johnston" 
Date: Wednesday, March 7, 2018 at 12:11 PM
To: Martin Mueller 
Cc: "pgsql-general@lists.postgresql.org" 
Subject: Re: a very primitive question about division

On Wed, Mar 7, 2018 at 10:56 AM, Martin Mueller 
<martinmuel...@northwestern.edu> wrote:
But there is no example of 4/9 and the different ways of formatting it as a 
decimal fraction with different options for rounding or a percentage.

That would be the responsibility of the "Data Type Formatting Functions" 
chapter and the "to_char" function it describes.

David J.


Circle and box intersect

2018-03-16 Thread Martin Moore
PG10

Is there an operator to determine if a box and circle intersect? 
I can only see box && box, and I can use centre + radius and distance to handle
circle-to-circle.

Thanks.




RE: Circle and box intersect

2018-03-16 Thread Martin Moore
Cheers - I'd tried PostGIS on 9.6 but had install issues. It installed first time 
on 10 :)


-Original Message-
From: Andreas Kretschmer [mailto:andr...@a-kretschmer.de] 
Sent: 16 March, 2018 11:46 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: Circle and box intersect



On 16.03.2018 at 11:00, Martin Moore wrote:
> PG10
>
> Is there an operator to determine if a box and circle intersect?
> I can only see box && box and can use centre+rad and distance to 
> calculate circle:circle.
>
> Thanks.
>
>
please don't hijack other mail threads by replying and changing the subject; 
your mail still contains references to "SELECT .. FOR UPDATE: 
find out who locked a row".
Create a new mail for a new thread!


to your question: consider PostGIS and its function st_intersects()
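
A rough sketch of that approach, with the circle approximated as a buffered
point (the box coordinates and radius are just made-up values):

SELECT ST_Intersects(
    ST_MakeEnvelope(0, 0, 10, 10),        -- the box
    ST_Buffer(ST_MakePoint(12, 5), 3)     -- circle of radius 3 around (12,5)
);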

Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com






Problem with connection to host (wrong host)

2018-03-31 Thread Mike Martin
Hi
I am just setting up a postgres server, details
Host 192.168.0.3
pg_hba.conf
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             192.168.0.0/32          ident
# IPv6 local connections:
host    all             all             127.0.0.1/32            ident

host    all             all             ::1/128                 ident

postgresql.conf
listen_addresses = '*'

however when I try to connect from my laptop (ip 192.168.0.2) I get

psql -h 192.168.0.3 -U usevideo -W
Password for user usevideo:
psql: FATAL:  no pg_hba.conf entry for host "192.168.0.2", user "usevideo",
database "usevideo", SSL off

So I'm a bit confused: is psql ignoring the host parameter?
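
(Side thought: my reading of the docs is that a /32 mask matches only the
single address 192.168.0.0, so perhaps the entry needs to cover the whole
subnet, along the lines of the following, but I may be misreading it:

host    all             all             192.168.0.0/24          ident
)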

thanks


Regex on field value

2018-04-14 Thread Mike Martin
Is this possible, e.g.
substring(field, regex that includes another field's value)?
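Something like this is what I mean, assuming the other column holds (or can be
concatenated into) a POSIX pattern; fieldname, prefix_col and mytable are just
stand-ins:

select substring(fieldname from '^' || prefix_col || '[0-9]+') from mytable;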
Thanks


computing z-scores

2018-05-24 Thread Martin Mueller
You construct a z-score for a set of values by subtracting the average from the 
value and dividing the result by the standard deviation. I know how to do this 
in a two-step procedure. First, I compute the average and standard deviation. 
In a second run I use the formula and apply it to each value. 

Is there a way of doing this in a single-step procedure or can you chain the 
two parts together in one query?  This goes beyond my SQL competence. 
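
What I have in mind is something that collapses the two steps, perhaps with
window functions, along these lines (untested, and assuming a table
measurements(reading numeric)):

SELECT reading,
       (reading - avg(reading) OVER ()) / stddev_samp(reading) OVER () AS z_score
FROM measurements;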

Martin Mueller



Question about getting values from range of dates

2018-06-22 Thread Mike Martin
Hi
I am looking for suggestions about the best way to resolve this problem.

Scenario

I have entries for files in the database which lack specific values (XML files
not generated).
These values can be obtained from what are called DTC read files, where the
values are the same in the last DTC read file before the file's date and the
next DTC read file (by date).

This code works but looks horrendous, so would appreciate any ideas.

thanks

Mike

select max(a.recordingdate) ,max(b.recordingdate)
,a.registration,max(b.filename)
from create_tdms a
join (select registration,recordingdate,filename from create_tdms where
filename not like 'DTC%') b
on b.registration=a.registration
where b.recordingdate < a.recordingdate and a.filename like 'DTC%'
group by a.registration
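
One alternative shape I've been experimenting with is a LATERAL join per file,
roughly as below (same create_tdms columns assumed; not sure it's any faster):

select b.registration, b.recordingdate, b.filename,
       dtc.recordingdate as last_dtc_date
from create_tdms b
cross join lateral (
    select a.recordingdate
    from create_tdms a
    where a.registration = b.registration
      and a.filename like 'DTC%'
      and a.recordingdate < b.recordingdate
    order by a.recordingdate desc
    limit 1
) dtc
where b.filename not like 'DTC%';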


two instances of postgres on the same machine?

2018-07-15 Thread Martin Mueller
I made a stupid mistake with a backup and would be grateful for some advice on 
how to extricate myself.

I created a dumpfile from a 10.3 postgres installation on a Mac and tried to 
restore it on another Mac 1,000 miles away, but forgot that that machine runs 
Postgres 9.6.

The command

/Applications/Postgres.app/Contents/Versions/9.6/bin/psql -U postgres  
earlyprint2 

Re: two instances of postgres on the same machine?

2018-07-15 Thread Martin Mueller
Many thanks for your generous help, and on a Sunday afternoon as well. 

This is a wonderful listserv, and I am deeply indebted to it. 

MM
On 7/15/18, 2:17 PM, "Tom Lane"  wrote:

Martin Mueller  writes:
> I created a dumpfile from a 10.3 postgres installation on a Mac and tried 
to restore it on another Mac 1,000 miles away, but forgot that that machine 
runs Postgres 9.6.

OK ...

> The command
> /Applications/Postgres.app/Contents/Versions/9.6/bin/psql -U postgres  
earlyprint2  generated a long error list  like this:
> ERROR:  syntax error at or near "toc"
> LINE 1: toc.datspelling character varying(150) NOT NULL,
> ^

I gather from the ".tar" filename that you used -Ft dump format; if so,
you need to feed it to pg_restore not plain psql.  The errors you're
showing here look somewhat consistent with the theory that you got that
wrong, although other explanations are certainly possible.
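
Roughly (adjusting the paths and the dump file name to whatever you actually
have):

/Applications/Postgres.app/Contents/Versions/9.6/bin/pg_restore -U postgres -d earlyprint2 /path/to/dumpfile.tar

pg_restore will then report any constructs 9.6 genuinely can't handle, rather
than psql tripping over the tar format.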

> I can’t tell from the error list whether it’s  a matter of a corrupted
> backup file or (more probably) a result of 9.6 not reading  a 10.3 dump
> file.

While the latter is possible, I think that typically it would only happen
if the 10.x installation is using SQL DDL features that 9.6.x doesn't
have.  I would try doing what you're doing, but with the right process...

> If the latter, what would be the safest way of cleaning up this mess?
> There is a side of me that would prefer installing a 10.3 on the same
> machine in addition, but I’m not sure whether I can do this in the
> environment of the Postgres App. I access the data via Aqua Studio that
> looks for the server on port 5432, which seems to be a standard port.
> Is there a safe way of installing a second server via a different port?

You can certainly install multiple PG servers on one machine; most of the
developers have several different PG versions laying about.  What you need
for that is separate install directories, separate data directories, and
separate port numbers for each server.  This is easy to achieve if
building from source, but may be nigh impossible if you are installing
somebody else's packaging and they didn't make provisions for it.  I don't
know much about the Postgres App packaging so I can't offer any advice
there.  I also don't know how hard it is to get the Aqua client you
mention to talk to a nonstandard port number, though I'd guess that it's
possible.
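
As a sketch of the data-directory-plus-port part (assuming you can get 10.x
binaries onto the machine somehow):

initdb -D /somewhere/pg10-data
pg_ctl -D /somewhere/pg10-data -o "-p 5433" -l /somewhere/pg10.log start
psql -p 5433 -U postgres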

regards, tom lane




Re: Code of Conduct plan

2018-09-14 Thread Martin Mueller
I have followed this list for a couple of years, have benefited several times 
from quick and helpful advice,  and wonder whether all this code of conduct 
stuff is a solution in search of a problem. Or, if there is a problem now and 
then, whether an elaborate code does a better job than reminding offenders that 
they’ve crossed a line marked by common decency or common courtesy. I think a 
list manager should have the right to expel repeat offenders. I doubt whether 
‘proceduralizing’ offences against common decency or common courtesy makes it 
easier to police what is always a tricky boundary.

It is possible to spend a lot of time and energy designing a bureaucratic 
solution that in the end does little good.  My grandchildren were taught that 
“please and thank you sound so nice, manners are important, be polite”, sung 
to the tune of Frère Jacques. They don’t always remember it, but a longer poem 
wouldn’t help.


From: James Keener 
Date: Friday, September 14, 2018 at 7:52 AM
To: "pgsql-general@lists.postgresql.org" , 
Chris Travers , "i...@dataegret.com" 

Cc: Tom Lane , Stephen Frost , 
"pgsql-generallists.postgresql.org" , 
"pgsql-hack...@lists.postgresql.org" , 
"pgsql-advoc...@lists.postgresql.org" 
Subject: Re: Code of Conduct plan

I find a lot of neo-con/trumpian political stances moronic, short-sighted, and 
anti-intellectual and therefore consider them offensive, an affront on my way 
of life, and a stain on my country.

1) Can I report anyone holding such views and discussing them on a 3rd party 
forum?

2) Could I be reported for saying the above on a 3rd party forum?

Obviously the pg mailing list isn't a place for such discussion, but is being a 
member of this community a deal with the devil to give up my right to free 
speech elsewhere?

Jim
On September 14, 2018 6:10:47 AM EDT, Chris Travers  
wrote:

On Fri, Sep 14, 2018 at 11:45 AM Ilya Kosmodemiansky 
<i...@dataegret.com> wrote:
On Fri, Sep 14, 2018 at 10:31 AM, Chris Travers 
<chris.trav...@gmail.com> wrote:
> I really have to object to this addition:
> "This Code is meant to cover all interaction between community members,
> whether or not it takes place within 
> postgresql.org
>  infrastructure, so long
> as there is not another Code of Conduct that takes precedence (such as a
> conference's Code of Conduct)."
>
> That covers things like public twitter messages over live political
> controversies which might not be personally directed.   At least if one is
> going to go that route, one ought to *also* include a safe harbor for
> non-personally-directed discussions of philosophy, social issues, and
> politics.  Otherwise, I think this is asking for trouble.  See, for example,
> what happened with Opalgate and how this could be seen to encourage use of
> this to silence political controversies unrelated to PostgreSQL.

I think, this point has nothing to do with _correct_ discussions or
public tweets.

If one community member tweets publicly in a way which abuses
other community members, it is an obvious CoC violation. It is hard to
imagine a healthy community if someone interacts with others correctly
on the list or at a conference only because the CoC stops him from doing things
which he will do in a private capacity to the same people when the CoC
doesn't apply.

If someone reports a CoC violation just because another community member's
_correct_ public tweet or whatever expressed different
political/philosophical/religious views, that is a quite different
story. I suppose the CoC committee and/or Core team in this case should
explain to the reporter the purpose of the CoC rather than automatically
enforce it.

So first, I think what the clause is trying to do is address cases where 
harassment targeting a particular community member takes place outside the 
infrastructure and frankly ensuring that the code of conduct applies in these 
cases is important and something I agree with.

However, let's look at problem cases:

"I am enough of a Marxist to see gender as a qualitative relationship to 
biological reproduction and maybe economic production too."

I can totally imagine someone arguing that such a tweet might be abusive, and 
certainly not "correct."

Or consider:

"The effort to push GLBT rights on family-business economies is nothing more 
than an effort at corporate neocolonialism."

Which would make the problem more clear.  Whether or not a comment like that 
occurring outside postgresql.org infrastructure wou

Re: Code of Conduct plan

2018-09-14 Thread Martin Mueller


On 9/14/18, 12:50 PM, "Joshua D. Drake"  wrote:

On 09/14/2018 07:41 AM, James Keener wrote:
> > Community is people who joined it
>
> We're not a "community."

I do not think you are going to get very many people on board with that 
argument. As anyone who knows me will attest I am one of the most 
contrarian members of this community but I still agree that it is a 
community.

JD


As Bill Clinton said in another context, "it all depends on the meaning of 
'community'".  'Community' is a very tricky word with uncertain boundaries and 
variable degrees of belonging to it.  Moreover, it's reciprocal: 'you' and the 
'community' may have different ideas of whether or how you belong. Rules in 
communities are usually tacit. You might almost want to say that if you need to 
write rules you no longer have a community.  Writing community rules is a very 
difficult and probably hopeless endeavor.

For quite a while the word 'community' has been grossly overused and has often 
been invoked as a way of creating a sense of community where there is reason to 
doubt whether the thing is there in the first place. 

'Civil' and 'civility' are more modest words with more modest goals that are 
perhaps easier to capture in language. When it comes to a code of civil 
conduct, less is more. If you use more than the words of the ten commandments 
you almost certainly have gone too far. I have yet to see a posting on this 
list that would suggest an urgent need for trying to regulate what contributors 
say or how they say it.  





-- 
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *






Re: Code of Conduct plan

2018-09-15 Thread Martin Mueller
What counts as foul language has changed a great deal in the last two decades.  
You could always tie it to what is printable in the New York Times, but that 
too is changing. I could live with something like “Be considerate, and if you 
can’t be nice, be at least civil”.

From: Melvin Davidson 
Date: Saturday, September 15, 2018 at 11:12 AM
To: Tom Lane 
Cc: Bruce Momjian , Chris Travers , 
James Keener , Steve Litt , 
"pgsql-generallists.postgresql.org" 
Subject: Re: Code of Conduct plan

How about we just simplify the code of conduct to the following:
Any member in the various PostgreSQL lists is expected to maintain
respect to others and not use foul language. A variation from
the previous sentence shall be considered a violation of the CoC.

On Sat, Sep 15, 2018 at 11:51 AM Tom Lane 
<t...@sss.pgh.pa.us> wrote:
Bruce Momjian <br...@momjian.us> writes:
> There is a risk that if we adopt a CoC, and nothing happens, and the
> committee does nothing, that they will feel like a failure, and get
> involved when it was best they did nothing.  I think the CoC tries to
> address that, but nothing is perfect.

Yeah, a busybody CoC committee could do more harm than good.
The way the CoC tries to address that is that the committee can't
initiate action of its own accord: somebody has to bring it a complaint.

Of course, a member of the committee could go out and find a "problem"
and then file a complaint --- but then they'd have to recuse themselves
from dealing with that complaint, so there's an incentive not to.

regards, tom lane


--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Re: Code of Conduct plan

2018-09-15 Thread Martin Mueller
That is quite true: the very high quotient of helpful prose and very low 
quotient of inappropriate language is striking--much like the TEI list of which 
I long have been a member, and unlike the MySQL list, which has a non-trivial 
(though not serious)  boorish component. 

Which makes me say again "Where is the problem that needs solving?"

On 9/15/18, 11:32 AM, "Bruce Momjian"  wrote:

On Sat, Sep 15, 2018 at 04:24:38PM +, Martin Mueller wrote:
> What counts as foul language has changed a great deal in the last two 
decades. 
> You could always tie it to what is printable in the New York Times, but 
that
> too is changing. I could live with something like “Be considerate, and if 
you
> can’t be nice, be at least civil”.

I have to admit I am surprised how polite the language is here,
considering how crudely some other open source projects communicate.

-- 
  Bruce Momjian    http://momjian.us
  EnterpriseDB     http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Code of Conduct plan

2018-09-16 Thread Martin Mueller
As long as subscribers to the list or attendants at a conference do not violate 
explicit or implicit house rules, what business does Postgres have worrying 
about what they do or say elsewhere?  Some version of an 'all-of-life' clause 
may be appropriate to the Marines or  federal judges, but it strikes me as 
overreach for a technical listserv whose subject is a particular relational 
database. The overreach is dubious on both practical and theoretical grounds. 
"Stick to your knitting " or the KISS principle seem good advice in this 
context. 

On 9/16/18, 7:08 AM, "Stephen Cook"  wrote:

On 2018-09-16 00:00, Mark Kirkwood wrote:
> On 15/09/18 08:17, Tom Lane wrote:
>> Yeah, this.  The PG community is mostly nice people, AFAICT.  I'll be
>> astonished (and worried) if the CoC committee finds much to do.  We're
>> implementing this mostly to make newcomers to the project feel that
>> it's a safe space.
> 
> Agreed. However I think the all-of-life clause gives an open door to
> potential less than well intentioned new members joining up to extend a
> SJW agenda. So in fact the unintended consequence of this may be a
> *less* safe place for some existing members - unless all of their social
> media utterances are agreeable to the angry militant left.

This is my only concern, there are some very sensitive people out there
just looking for scandal / publicity. No reason to give them a larger
attack surface. Maybe that sounds paranoid but look around, there are
folks that want to spread the US culture war to every front, including
open source projects on the internet.

This sentence in the CoC should be worded to exclude things that are not
directed harassment when outside of the community spaces. For example,
some "incorrect opinion" on Twitter should have little bearing if it
wasn't meant as an "attack". Maybe for extreme cases there could be a
"hey you're making us look bad and scaring people away, chill with the
hate speech or leave" clause, but that should only apply if it is
someone whose name is publicly associated with Postgres and they are
saying really terrible things. I feel there is a big difference between
keeping it civil/safe in the lists and conferences, and making people
afraid to say anything controversial (in the USA) anywhere ever.

Maybe the way the committee is set up, it will handle this fairly. But
it's better to be explicit about it IMO, so as not to attract
professional complainers.


-- Stephen






metadata about creation and size of tables

2018-10-02 Thread Martin Mueller
I’ve looked at the documentation to find where to find some data that are very 
easy to find in MySQL: the creation and modification dates of a table and the size of 
particular tables.

Where do I find an answer to the question “which is the last table I created” 
or “when did I last modify this table?”  In the data directory, tables seem to 
have numbers, but there doesn’t seem to be an easy mapping of those numbers to 
the table names.

I thumbed through the documentation, but didn’t see any heading that was likely 
to have that information.  Is there somewhere a “table of tables” that lets 
you look up various metadata?
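
For sizes, the closest I've found so far is querying the catalogs directly,
something like the query below, but I still don't see anything for creation or
modification dates:

SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;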




Re: Barman issue

2018-10-10 Thread Martin Marques
On 05/10/18 at 06:50, Rijo Roy wrote:
> Hello Experts,
> 
> I have a server rhel 6.9 installed with barman 2.3 which is configured
> to take backup from Postgresql 10 installed in a remote host and the
> same is working.

You should upgrade to 2.4, although I can't see anything relevant to your
issue in the changelog. Anyway, upgrade.

> I am trying to do a poc on streaming backup and streaming wal method by
> making use of the pg_receivewal. I have already created a user with
> replication privilege the same is added in pg_hba. conf. And I am able
> to connect remotely using the user created.
> But when I am running barman check pg
> pg is the name given to the remote dB host. I am getting an error like this
> EXCEPTION: LooseVersion instance has no attribute 'version'

You should be getting more output than just that. Have you checked the
barman logs?

> Below I am pasting the conf files used:
> [pg] 
> conninfo
> streaming_conninfo
> streaming_archiver = on
> slot_name = barman
> backup_method = postgres 
> path_prefix = Postgresql bin path
> basebackups_directory
> errors_directory
> incoming_wals_directory
> streaming_wals_directory

Set log_level to DEBUG while trying to find the reason for that exception.

> ---—---
> I have already created the replication slots and started the
> pg_receivexlog remotely from the barman server. WALs are streaming and are
> coming to the incoming_wals_directory 

Did you start pg_receivexlog (which in PG10 is actually called
pg_receivewal) manually or using `barman receive-wal`? (also, `barman
cron` should automatically start streaming if you have everything
configured correctly)

Regards,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: how to cleanup archive based on datetime

2018-10-10 Thread Martin Marques
On 10/10/18 at 04:00, magodo wrote:
> 
> Hello,
> 
> I know there is a tool `pg_archivecleanup`, but it only accepts wal
> filename. What if I want to clean up all archived wal before a
> specified datetime?
> 
> Though I could check the ctime of each wal and figure out which to
> remove, it looks like not an elegant way to do this.
> 
> Also I tried to parse each wal and compare with the specified datetime
> in a binary-search way, but it is based on the fact that every COMMIT
> time in wal is sequential, but it looks like not the case..
> 
> Hence I wonder what is the best practice for this task?

Don't do that manually. Install `barman` and let it take care of
removing old WALs which are not needed anymore.

Regards,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Advice on logging strategy

2018-10-11 Thread Mike Martin
I have a question on logging strategy

I have logging set to
log_statement = 'all' on a network database, with logging set to CSV so I
can import it into a logging table.

However the database is populated via a nightly routine downloading data
via a REST API using prepared statements.

This results in enormous log files which take ages to import using copy,
because each execute statement is logged with the parameters chosen.

Is there any way around this?

I can't find any way to filter DML statements.

thanks


Re: Advice on logging strategy

2018-10-11 Thread Mike Martin
I suppose the ideal would be to log the prepared statement once, and detail
only on error, rather than once per execution.

On Thu, 11 Oct 2018 at 11:33, Rob Sargent  wrote:

>
>
> > On Oct 11, 2018, at 4:26 AM, Mike Martin  wrote:
> >
> > I have a question on logging strategy
> >
> > I have loggin set to
> > log_statement = 'all' on a network database with logging set to csv so I
> can import it to a logging table
> >
> > However the database is populated via a nightly routine downloading data
> via REST APIusing prepared statements
> >
> > This results in enormous log files which take ages to import using copy
> becuase each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I cant find any way to filter dml statements
> >
> > thanks
> >
> Do you want all the log lines in your logging table?
> There was a thread yesterday (10.Oct.2018) on COPY which mentioned the
> possibility of multiple processes COPYing to the same table.


Re: Advice on logging strategy

2018-10-12 Thread Mike Martin
Thanks!
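
The settings I'm planning to try are along these lines (untested here yet, and
the exact class list may need adjusting):

# postgresql.conf
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write'        # log INSERT/UPDATE/DELETE/TRUNCATE/COPY only
pgaudit.log_parameter = off  # keep bind parameters out of the log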

On Fri, 12 Oct 2018 at 14:33, David Steele  wrote:

> On 10/11/18 11:26 AM, Mike Martin wrote:
> >
> > This results in enormous log files which take ages to import using copy
> > becuase each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I cant find any way to filter dml statements
>
> pgAudit (https://github.com/pgaudit/pgaudit) gives you fine-grain
> control over what is logged by command type, table, or user as well as a
> lot more detail.
>
> --
> -David
> da...@pgmasters.net
>


Re: Pgbouncer discard all

2018-10-16 Thread Martin Marques
El 16/10/18 a las 12:12, Nicola Contu escribió:
>  2492534808 | dev    |   7355 | 1833427130 | pgbouncer       |         
>                | 10.151.2.145   |                 |       60570 |
> 2018-10-16 14:13:05.151015+00 |                               |
> 2018-10-16 15:10:40.309993+00 | 2018-10-16 15:10:40.310038+00 | Client
>   | ClientRead          | idle   |             |              | DISCARD ALL
> 
> They are idle actually.
> Will they count as client connection on the total amount set on the
> postgres.conf?

Yes, but that's the idea of having a pooler in between the app and the
server. Whenever the app needs a new connection it will reuse one from
the pool, instead of having to open a new connection.

pgbouncer will eventually close the connection if it stays in the pool
for longer than server_lifetime or server_idle_timeout.
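
For reference, those knobs live in pgbouncer.ini and look something like this
(the values shown are just the defaults):

[pgbouncer]
server_lifetime = 3600       ; close server connections older than this (seconds)
server_idle_timeout = 600    ; close server connections idle longer than this (seconds)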

Regards,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Drop a primary

2019-10-02 Thread Martin Mueller
I created a primary key with the following commands

Add id serial
Add primary key (id)

I cannot figure out from the documentation how to drop that column.
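
What I expected to work (the table name is just a placeholder here) is
something like:

ALTER TABLE mytable DROP COLUMN id;

which, as I understand it, should drop the primary key constraint along with
the column, but I wanted to check before trying it on real data.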




Finding out about the dates of table modification

2019-11-22 Thread Martin Mueller
 I've moved happily from MySQL to Postgres but miss one really good feature of 
MySQL: the table of tables that lets you use SQL queries to find out metadata 
about your tables. Looking at the table of tables and sorting it by last 
change lets you quickly find the most recently modified table, which is 
really useful if you have a bad memory, as I do, and can't remember the name of 
a particular table that I worked on last Wednesday. 

Are those capabilities hidden somewhere in Postgres?  There isn't an obvious 
section in the documentation. At least I can't find it. 
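
The nearest thing I've stumbled on is the statistics view, which at least hints
at recent activity, though it says nothing about creation time:

SELECT relname, last_vacuum, last_autovacuum, last_analyze, n_tup_ins, n_tup_upd
FROM pg_stat_user_tables
ORDER BY greatest(last_vacuum, last_analyze) DESC NULLS LAST;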

Martin Mueller
Professor emeritus of English and Classics
Northwestern University




Re: MS Access Frontend

2019-11-30 Thread Martin Mueller
Leaving aside the question of money, a frontend like Aqua Data Studio or the 
equivalent program by JetBrains would be a much better solution. I used to work 
with Access a lot and quite liked it. But Aqua Data Studio is not any more 
difficult to learn and gives you access to everything Postgres can do. 

I haven't used the JetBrains version, which is cheaper. 

On 11/30/19, 8:53 AM, "Adrian Klaver"  wrote:

On 11/30/19 3:15 AM, Tim Clarke wrote:
> On 29/11/2019 17:30, Adrian Klaver wrote:
>> On 11/29/19 9:23 AM, Jason L. Amerson wrote:
>>> I am trying to setup MS Access as a frontend so that it would be
>>> easier on my wife and children to interact with PostgreSQL. I looked
>>> online for some tutorials but the ones I found are out-date or only
>>> pick up after Access is connected to PostgreSQL. I was wondering if
>>> someone knew of some updated material that they could point me to or
>>> maybe walk me through it. I have used Access quite a bit years ago
>>> and things have changed since then. I know I must install the ODBC
>>> drivers, which I have already done. I have already setup the DSN and
>>> I clicked on test and it says everything is fine. I know that my next
>>> step has something to do with Linked Tables in Access, but I am not
>>> sure how to set it up. I guess that is where I start to need help.
>>> The client computers using the frontend will be running Windows 10
>>> and Office 365, both are updated to the latest versions.
>>
>> This might help:
>>
>> 
https://support.office.com/en-us/article/Manage-linked-tables-1d9346d6-953d-4f85-a9ce-4caec2262797
 
>>
>>
>>
>>>
>>> Thank you,
>>>
>>> Jason L. Amerson
>>>
>>
>>
> That will help you manage once you have created some linked tables, but
> to create them:

Which is covered in the above:


https://support.office.com/en-us/article/Manage-linked-tables-1d9346d6-953d-4f85-a9ce-4caec2262797#bkmk_add
 

> 
> 1) Click "External Data" from the top Access menu, then "New data
> source" on the left
> 
> 2) Pick "From other sources" at the bottom and as you correctly
> identified "ODBC Database".
> 
> 3) From the dialogue box, change the default of the two options to the
> bottom one "Linkby creating linked table"
> 
> 4) Next pick your created DSN from the "Select data source" dialog
> 
> 5) You should then see the list of tables etc from your Postgres database
> 
> Troubleshooting; make sure you have an ODBC 32 or 64 bit version
> matching the 32 or 64 bit MS Access installed.
> 
> If you use any security at the MS Access level you will need to set up a
> simultaneously shared central system.mdw file in some repository. Don't
> try to get more than 6-7 people using this at the same time, it breaks.
> 
> Good luck.
> 
> 
> --
> Tim Clarke
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
> 
> 
> 
> Telephone: Witham: +44(0)1376 503500 | London: +44 (0)20 3009 0853 | 
Frankfurt: +49 (0)69 7191 6000 | Hong Kong: +852 5803 1687 | Toronto: +1 647 
503 2848
> Web: 
https://www.manifest.co.uk/
 
> 
> 
> 
> Minerva Analytics Ltd - A Solactive Company
> 9 Freebournes Court | Newland Street | Witham | Essex | CM8 2BL | United 
Kingdom
> 
> 
> 
> Copyright: This e-mail may contain confidential or legally privileged 
information. If you are not the named addressee you must not use or disclose 
such information, instead please report it to 
ad...@minerva.info
> Legal: Minerva Analytics is the trading name of: Minerva Analytics Ltd: 
Registered in England Number 11260966 & The Manifest Voting Agency Ltd: 
Registered in England Number 2920820 Registered Office at above address. Please 
Click Here 
https://www.manifest.co.uk/legal/

Re: MS Access Frontend

2019-12-01 Thread Martin Gainty
Commenting to Jason: the majority of my work in the 90s was reading from one DB 
and writing to another DB
via an ODBC-to-JDBC driver:
https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers/using-openlinks-odbc-jdbc-bridge-to-access-jdbc-data-sources-via-tableau-windows-edition-89094aa33b7b

that said I think MS missed the boat on 2-phase-commits
https://en.wikipedia.org/wiki/Two-phase_commit_protocol

Also, two-factor DB authentication was not implemented by Access:
https://en.wikipedia.org/wiki/Database_security

Feel free to ping me offline if you're interested in implementing an ODBC-to-JDBC 
driver as an option.
Good Luck!


From: bret_stern 
Sent: Saturday, November 30, 2019 7:14 PM
To: pgsql-general@lists.postgresql.org 
Subject: Re: MS Access Frontend

My two cents. Access is awesome. Extremely fast prototyping environment.
Found on most PCs in businesses. Great reporting tools.
It is my go-to tool for prototyping DB structures, reports, great
import/export tools, and boatloads of people who are glad to share
vba code and solutions

My only bitch is the dumbing down of the application by Microsoft,
Adding bands, and hiding past tools...killing the performance with xml
bs..."there, I feel better now"


Libre Base does a lot of the same things, just in a different way.

If learning is the point of this discussion, then learn as much as
possible about each one.

I started using win32 back in odd-06 with ODBC and direct db-api calls,
but have steadily drifted towards RAD environments...although there's
great satisfaction and control the lower you go.


When the prototyping is over, I've been pushing Lazarus as my
multi-platform development environment.

It still pisses me off how abused Excel is, but people don't know the
difference.

Keep on rockin






On 11/30/2019 2:34 PM, Jason L. Amerson wrote:
> I am a Linux user too. I just bought my children Windows laptops so it would
> be easier for them to use. I had to buy me a Windows one too so that I can
> do the whole Microsoft Family thing and monitor them a little. But I think
> it is time to throw them into the deep end and see if they can swim.
>
> Jason L. Amerson
>
>
> -Original Message-
> From: Adrian Klaver 
> Sent: Saturday, November 30, 2019 01:52 PM
> To: Jason L. Amerson ; 'Martin Gainty'
> ; PostgreSQL 
> Subject: Re: MS Access Frontend
>
> On 11/30/19 11:04 AM, Jason L. Amerson wrote:
>> Thanks Martin. I have decided to go another route. I have nothing but
>> problems whenever I use Microsoft products. I personally think that
>> Microsoft was shit when it started, and it is still shit 35 years later.
>> So, I am just going to take Windows off my computers and put Linux on
>> them and just use a different client.
>
> I'm a Linux user and I applaud your move. Just be aware you will not find an
> Access replacement on Linux. You will find things that have subsets of its
> functionality, but not a drop in replacement.
>
>
>>
>> Jason L. Amerson
>>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>
>




Re: full text

2020-04-09 Thread Martin Gainty
I don't know how to do this with Elasticsearch.

According to programcreek, you can create an index from a JDBC 
source:
https://www.programcreek.com/java-api-examples/?code=SeldonIO/semantic-vectors-lucene-tools/semantic-vectors-lucene-tools-master/src/main/java/io/seldon/semvec/CreateLuceneIndexFromDb.java
It's up to you to determine the JDBC connection attributes for Postgres.

Buona Fortuna!

martini



From: Roberto Della Pasqua 
Sent: Wednesday, April 8, 2020 1:28 PM
To: pgsql-gene...@postgresql.org 
Subject: full text


Apologies, I'm a newbie with PostgreSQL.



I need the best-performing, good-quality full-text search. Is it possible to have 
the data stored in PostgreSQL and the index in Elasticsearch? Can they 
be kept in sync?



Thank you



Btw. Do you suggest another engine than elastic?



Roberto Della Pasqua

www.dellapasqua.com




Re: Oracle vs. PostgreSQL - a comment on Mysql

2020-06-03 Thread Martin Mueller
On the topic of what other databases do better: I much prefer Postgres to MySQL 
because it has better string functions and better, as well as very courteous, 
error messages. But MySQL has one feature that sometimes makes me want to 
return to it: it stores the most important metadata about tables in a MySQL table 
that can be queried as if it were just another table.  That is a really useful 
feature. It makes it very easy to find the table that you edited most 
recently, among a lot of other things.

Why doesn’t Postgres have that feature? Or is there a different and equally 
easy way of getting at these things that I am just missing?
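
(I know about information_schema.tables, which is easy enough to query, but it
carries no timestamps at all:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
ORDER BY table_schema, table_name;
)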

From: Andreas Joseph Krogh 
Date: Wednesday, June 3, 2020 at 12:54 PM
To: Chris Travers 
Cc: "pgsql-generallists.postgresql.org" 
Subject: Re: Oracle vs. PostgreSQL - a comment

On Wednesday 3 June 2020 at 20:07:24, Chris Travers 
<chris.trav...@gmail.com> wrote:
[...]

Regardless of what Oracle does, I agree this would be a huge step in the right 
direction for pg-DBAs.
I have absolutely no clue about how much work is required etc., but I think 
it's kind of strange that no companies have invested in making this happen.

I manage database clusters where the number of databases is a reason not to do 
logical replication based upgrades, where pg_upgrade is far preferred instead.

If this were to be the case, I would be very concerned that a bunch of things 
would have to change:
1.  Shared catalogs would have txid problems unless you stay with global txids 
and then how do local wal streams work there?
2.  Possibility that suddenly streaming replication has the possibility of 
different databases having different amounts of lag
3.  Problems with io management on WAL on high throughput systems (I have 
systems where a db cluster generates 10-20TB of WAL per day)

So I am not at all sure this would be a step in the right direction or worth 
the work.

I agree these are all technical issues, but nevertheless - "implementation 
details", which DBAs don't care about. What's important from a DBA's 
perspective is not whether WAL is cluster-wide or database-wide, but whether 
it's possible to manage backups/PITR/restores of individual databases in a more 
convenient matter, which other RDBMS-vendors seem to provide.

I love PG, have been using it professionally since 6.5, and our company depends 
on it, but there are things other RDBMS-vendors do better...

--
Andreas Joseph Krogh


Is it possible to use keywords (date units) in a function definition?

2020-06-08 Thread Martin Gainty
Hello,

I recently tried to write a wrapper function to calculate the difference 
between two dates, mainly as a convenience. I'd essentially be emulating 
EXTRACT(unit FROM date1 - date2), in various ways. I got a bit stuck on 
allowing specification of the unit: is this possible in function 
definitions? I'd like to be able to write something along the lines of:

CREATE OR REPLACE FUNCTION DATETIME_DIFF(end TIMESTAMP(3), start TIMESTAMP(3), 
datepart UNIT) RETURNS DOUBLE PRECISION AS $$
BEGIN
RETURN EXTRACT(datepart FROM end - start);
END; $$
LANGUAGE PLPGSQL;

One option would be to treat datepart as a string, but this doesn't work for my 
use case. (Background: I'm trying to refactor a bunch of SQL scripts to work on 
Google BigQuery and PostgreSQL by writing PostgreSQL functions to emulate 
BigQuery functions. Unfortunately BigQuery does not recognize the third 
argument if it is a string (i.e. 'HOUR' does not work but HOUR does)).

Any ideas? Is this even possible?
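
For reference, the string-based variant I had in mind looks roughly like this
(only lightly tested; parameter names changed because end and start clash with
keywords); it's the bare-keyword form that BigQuery wants which I can't
reproduce:

CREATE OR REPLACE FUNCTION datetime_diff(end_ts TIMESTAMP(3), start_ts TIMESTAMP(3), datepart text)
RETURNS DOUBLE PRECISION AS $$
BEGIN
    RETURN CASE upper(datepart)
        WHEN 'HOUR'   THEN EXTRACT(epoch FROM end_ts - start_ts) / 3600
        WHEN 'MINUTE' THEN EXTRACT(epoch FROM end_ts - start_ts) / 60
        WHEN 'SECOND' THEN EXTRACT(epoch FROM end_ts - start_ts)
    END;
END; $$
LANGUAGE plpgsql;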

-Alistair


Re: Certficates

2020-08-10 Thread Martin Gainty
Can't you use keytool?
Assume the server certificate is called server.crt
and the truststore file is called server.truststore.jks:

cd %JRE_HOME%/lib/security
keytool -import -alias %ALIAS% -file server.crt -keystore server.truststore.jks

m.

From: Shankar Bhaskaran 
Sent: Monday, August 10, 2020 1:53 PM
To: pgsql-general@lists.postgresql.org 
Subject: Certficates

Hi All,

This is a very basic question. I have to import the server.crt into the client-side 
Java truststore to connect using JDBC to a Postgres server secured by SSL.
But when I try psql from the same machine, it shows the connection as encrypted. 
How does psql import the server certificate?

 psql "postgresql://$POSTGRES_HOST:$PG_PORT/postgres" -U postgres
psql (9.6.18)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 
256, compression: off)
Type "help" for help.


Regards,
Shankar


Stored procedure with execute and returning clause

2020-08-23 Thread Mike Martin
Hi
I am having difficulty with a RETURNING clause in a stored procedure. This is
an (edited) example of where I am:

CREATE OR REPLACE PROCEDURE public.arrcopy1(
 dataarr anyarray,
 tblname text,
 cols text DEFAULT NULL::text,
 selstr text DEFAULT NULL::text,
 INOUT outarr text[] DEFAULT NULL
)
LANGUAGE 'plpgsql'
AS $BODY$

insstr:= INSERT INTO tagdata(fileid,tagname,tagvalue)  SELECT
arr[1]::integer,arr[2]::text,string_to_array(arr[3],E'\b') FROM
  (select array_agg(v order by rn) arr
  from unnest($1) with ordinality v(v,rn)
  group by (rn - 1) / array_length($1::text[],2)
  ) a
JOIN tagfile ON fileid=arr[1]::int RETURNING *::text[];

Then called as

EXECUTE insstr INTO outarr USING (dataarr) ;
$BODY$

This compiles as a proc

But I then get an error (this is in perl)

DBD::Pg::db selectall_arrayref failed: ERROR:  malformed array literal:
"3182753"
DETAIL:  Array value must start with "{" or dimension information

The procedure works perfectly without the INTO Clause on execute

If I change the returning clause to
RETURNING array[fileid]

it runs, but it only returns the first fileid, not all fileids inserted.


thanks


Return value of CREATE TABLE

2020-09-10 Thread Mike Martin
Is this possible?
Basically I want to manage temp table status for use in a procedure.

The procedure looks at pg_catalog to get information for processing.

So basically I would like to be able to know what namespace a temp table is
created in, so that I can constrain lookup.

example

CREATE TEMP TABLE tagdata (test int,test2 numeric(10,2));
SELECT relname,relpersistence ,relnamespace
,pa.atttypid,attname,attnum
,nspname
FROM pg_catalog.pg_class pc
JOIN pg_attribute pa ON pc.oid=pa.attrelid
JOIN pg_namespace pn ON pn.oid=relnamespace
WHERE relname = 'tagdata' AND attnum>0

Which returns (when it's run for the second time in different tabs in
pgAdmin):

relname    persistence  namespace   typeid  colname     colnum  schema
"tagdata"  "p"          "2200"      "23"    "fileid"    1       "public"
"tagdata"  "p"          "2200"      "25"    "tagname"   2       "public"
"tagdata"  "p"          "2200"      "1009"  "tagvalue"  3       "public"
"tagdata"  "t"          "483934"    "23"    "test"      1       "pg_temp_10"
"tagdata"  "t"          "538079"    "23"    "test"      1       "pg_temp_13"
"tagdata"  "t"          "538079"    "1700"  "test2"     2       "pg_temp_13"

So I would like some way of knowing exactly which schema the temp table has
been created in; I can't see anything obvious.

thanks

Mike


Re: Return value of CREATE TABLE

2020-09-10 Thread Mike Martin
Thanks, exactly what I was looking for.
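
For the record, the constraint I'll be adding to the catalog query is roughly:

SELECT relname, attname, attnum
FROM pg_catalog.pg_class pc
JOIN pg_attribute pa ON pc.oid = pa.attrelid
WHERE relname = 'tagdata'
  AND attnum > 0
  AND pc.relnamespace = pg_my_temp_schema();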

On Thu, 10 Sep 2020 at 13:16, Christoph Moench-Tegeder 
wrote:

> ## Mike Martin (redt...@gmail.com):
>
> > So basically I would like to be able to know what namespace a temp table
> is
> > created in, so that I can constrain lookup.
>
> pg_my_temp_schema() returns the OID of the session's temporary schema
> ("or 0 if none", according to the docs).
>
> Regards,
> Christoph
>
> --
> Spare Space
>


"Server versions prior to 8.0 do not support savepoints" when using JDBC ...

2020-11-29 Thread Martin Goodson

Hello.

I wonder if anyone can assist with this?

Some of my developers are reporting that they are getting the following 
message when attempting to connect to the database using JDBC:


Caused by: org.postgresql.util.PSQLException: Server versions prior to 
8.0 do not support savepoints


at

org.postgresql.jdbc3.AbstractJdbc3Connection.setSavepoint(AbstractJdbc3Connection.java:95)

Database version is 10.5 running under RHEL 7.4. The PostgreSQL JDBC 
driver version they're employing is postgresql-42.2.9.


The database itself has absolutely no issues creating savepoints (I 
checked it, just in case ...) so I'm assuming it must be some issue with 
the driver / how the driver has been configured.


As you can see I've not been given a lot to go on, unfortunately. Has 
anyone seen this before? Is this a common issue and my google-fu has 
failed me? :)


Many thanks.

Regards,

M.

--
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.




Re: "Server versions prior to 8.0 do not support savepoints" when using JDBC ...

2020-11-29 Thread Martin Goodson

On 29/11/2020 18:10, Adrian Klaver wrote:

On 11/29/20 10:06 AM, Martin Goodson wrote:

Hello.

I wonder if anyone can assist with this?

Some of my developers are reporting that they are getting the 
following message when attempting to connect to the database using jdbc:


Caused by: org.postgresql.util.PSQLException: Server versions prior to 
8.0 do not support savepoints


at

org.postgresql.jdbc3.AbstractJdbc3Connection.setSavepoint(AbstractJdbc3Connection.java:95) 



Database version is 10.5 running under RHEL 7.4. The postgreSQL jdbc 
driver version they're employing is postgresl-42.2.9.



How was the database installed and from where?


Hello, Adrian. The database was installed using an enterprisedb 
installer, to its own dedicated set of directories.



Do you have more then one instance of Postgres running on the machine?


We do not, no. It is the sole PostgreSQL database cluster running on 
that box.



What is the connection code being used?


An excellent question. I will chase up with them, and find out. I 
believe they're using tomcat, but don't know the exact code involved.


FWIW the connection is not local, the attempted connection is from a 
different server.


Regards,

Martin.

--
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.




Re: "Server versions prior to 8.0 do not support savepoints" when using JDBC ...

2020-11-29 Thread Martin Goodson

On 29/11/2020 18:28, Adrian Klaver wrote:

On 11/29/20 10:22 AM, Martin Goodson wrote:

On 29/11/2020 18:10, Adrian Klaver wrote:

On 11/29/20 10:06 AM, Martin Goodson wrote:

Hello.

I wonder if anyone can assist with this?

Some of my developers are reporting that they are getting the 
following message when attempting to connect to the database using 
jdbc:


Caused by: org.postgresql.util.PSQLException: Server versions prior 
to 8.0 do not support savepoints


at

org.postgresql.jdbc3.AbstractJdbc3Connection.setSavepoint(AbstractJdbc3Connection.java:95) 



Database version is 10.5 running under RHEL 7.4. The postgreSQL jdbc 
driver version they're employing is postgresl-42.2.9.



How was the database installed and from where?


Hello, Adrian. The database was installed using an enterprisedb 
installer, to its own dedicated set of directories.



Do you have more than one instance of Postgres running on the machine?


We do not, no. It is the sole PostgreSQL database cluster running on 
that box.



What is the connection code being used?


An excellent question. I will chase up with them, and find out. I 
believe they're using tomcat, but don't know the exact code involved.


FWIW the connection is not local, the attempted connection is from a 
different server.


And you know that this is actually the server being reached?

Do you see the attempted connection in the logs?


I do, yes. We log connections, and I can see connections incoming from 
their application servers.


At the moment we only log connections and erroring statements, but 
tomorrow I think I'll be temporarily enabling logging of all 
disconnections and all statements, to see what (if anything) they're 
actually executing on the database side before they see their error ...


Regards,

Martin.

-
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.
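
A minimal sketch of the temporary logging change described above, assuming
superuser access (log_disconnections only applies to sessions started after
the reload; remember to revert once the error has been captured):

  ALTER SYSTEM SET log_statement = 'all';
  ALTER SYSTEM SET log_disconnections = on;
  SELECT pg_reload_conf();

  -- Revert afterwards:
  -- ALTER SYSTEM RESET log_statement;
  -- ALTER SYSTEM RESET log_disconnections;
  -- SELECT pg_reload_conf();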




Re: "Server versions prior to 8.0 do not support savepoints" when using JDBC ...

2020-11-29 Thread Martin Goodson

On 29/11/2020 18:46, David G. Johnston wrote:

On Sun, Nov 29, 2020 at 11:06 AM Martin Goodson 
wrote:


Hello.

I wonder if anyone can assist with this?

Some of my developers are reporting that they are getting the following
message when attempting to connect to the database using jdbc:

Caused by: org.postgresql.util.PSQLException: Server versions prior to
8.0 do not support savepoints

at


org.postgresql.jdbc3.AbstractJdbc3Connection.setSavepoint(AbstractJdbc3Connection.java:95)

Database version is 10.5 running under RHEL 7.4. The postgreSQL jdbc
driver version they're employing is postgresl-42.2.9.



My best guess is that they are not actually running 42.2.9; they are
running an older version that was released before PostgreSQL v10 came out,
and the version detection code doesn't recognize the new version format
used since 10.

I didn't dig into this too deeply but I did download the zip for 42.2.9 and
those classes don't even exist at that point since the build was re-done
using pre-processors.

David J.

Oh. That's very interesting. Thank you, David. I will chase up with the 
developers tomorrow to confirm with them the version of the postgres 
jdbc driver they are using.


I'm no coder, it didn't occur to me to actually check with the source. 
Nice one :)


Many thanks.

Regards,

Martin.

--
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.




Re: "Server versions prior to 8.0 do not support savepoints" when using JDBC ...

2020-12-02 Thread Martin Goodson

On 02/12/2020 23:46, Dave Cramer wrote:

On Sun, 29 Nov 2020 at 14:09, Martin Goodson 
wrote:


On 29/11/2020 18:46, David G. Johnston wrote:

On Sun, Nov 29, 2020 at 11:06 AM Martin Goodson 
Hello.

I wonder if anyone can assist with this?

Some of my developers are reporting that they are getting the following
message when attempting to connect to the database using jdbc:

Caused by: org.postgresql.util.PSQLException: Server versions prior to
8.0 do not support savepoints

at




org.postgresql.jdbc3.AbstractJdbc3Connection.setSavepoint(AbstractJdbc3Connection.java:95)


Database version is 10.5 running under RHEL 7.4. The postgreSQL jdbc
driver version they're employing is postgresl-42.2.9.



My best guess is that they are not actually running 42.2.9, they are
running an older version that was released before PostgreSQL v10 came out
and the version detection code doesn't recognize the new version format
since 10.

I didn't dig into this too deeply but I did download the zip for 42.2.9

and

those classes don't even exist at that point since the build was re-done
using pre-processors.

David J.


Oh. That's very interesting. Thank you, David. I will chase up with the
developers tomorrow to confirm with them the version of the postgres
jdbc driver they are using.

I'm no coder, it didn't occur to me to actually check with the source.
Nice one :)

Many thanks.

Regards,

Martin.

--
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.



I'm guessing this is now resolved ?


Dave Cramer
www.postgres.rocks


Technically no ... but probably yes.

On Monday I followed up on the ticket and was informed that they had 
made some changes to their code and the problem was no longer occurring ...


No detail was provided on what had been tweaked, unfortunately.

I plan on closing the ticket after a week.

--
Martin Goodson

Whatever you've got planned, forget it. I'm the Doctor.
I'm 904 years old. I'm from the planet Gallifrey in the
constellation of Kasterborous. I am The Oncoming Storm,
the Bringer of Darkness and you are ... basically just a
rabbit, aren't you? OK, carry on, just a general ...
warning.




Re: PL/java

2020-12-07 Thread Martin Gainty
Nota Bene:
Almost all official interfaces to/from Oracle are coded in Java.
As far as I know, Google's implementations are 95% Python
(although you would need to understand JSON formatting for import/export
requirements).
For those reasons I tread lightly before disparaging either language.

Going Forward:
Try the PL/SQL approach that Rob/Joshua mentioned.
If you still have no joy after implementing PL/SQL,
then we can take a look at interfacing to Postgres ETL binaries with a JNI
interface, provided the JNI signatures exactly match each referenced function.

Un Saludo
m


From: Rob Sargent 
Sent: Monday, December 7, 2020 11:16 AM
To: pgsql-general@lists.postgresql.org 
Subject: Re: PL/java


On 12/7/20 8:25 AM, Adrian Klaver wrote:
> On 12/7/20 7:18 AM, Hemil Ruparel wrote:
>> I want to say this. I never liked any extension language. It's like
>> Java is not meant to interact with databases.
>
> The JDBC folks might disagree.
>
That interaction is strictly sql to db, data to app.  As it should be.




Should pgAdmin 3 be saved?

2021-02-06 Thread Gabriel Martin
Hello,

I know that since the launch of pgAdmin 4 in 2016, pgAdmin has been using a
web-based model, as announced as early as 2014. You only need to do a
little research on the Web to understand all the reasons behind this
choice, which I do not seek to question.

However, I wonder if it would be relevant to keep pgAdmin 3 alive. I'm not
so much talking about improving the interface of pgAdmin 3, but simply
about keeping the software functional in new systems. I tried to install it
in the latest versions of Linux Mint and Ubuntu, but the software keeps
misbehaving, which it didn't do in older versions of the same systems 5 or 6
years ago.

Do you think there's a possibility that people could fix pgAdmin 3, even if
it means making a fork or whatever? Because, although pgAdmin 4 and other
tools such as OmniDB and DBeaver are probably excellent, some people, for
various reasons, might want to continue using pgAdmin 3 in 2021. I'm not an
expert in the field, but I'm thinking that maybe someone here might be able
to save this nice little application software from disappearing by
investing a few days to maintain it.

Thank you for your time,

GM


Re: Should pgAdmin 3 be saved?

2021-02-07 Thread Gabriel Martin
Thanks for the answer, Tom. It's nice to get an answer so quickly ;)

Following your advice, I will probably post something about this on
the "pgadmin-hackers" list, a sub-list of the project lists dedicated
to "pgAdmin development and patches." I'm just going to wait a little
bit before, in order to let people react here and to avoid being on
multiple mailing lists at the same time.

I'm crossing my fingers that someone competent perceives the interest
of not letting pgAdmin 3 die completely. It would be great to keep the
good old PgAdmin App Version (aka PgAdmin 3) functional in parallel to
the newer Hosted Web Version of PgAdmin (aka PgAdmin 4).

regards, gm

On Sat, 6 Feb 2021 at 14:26, Tom Lane  wrote:
>
> Gabriel Martin  writes:
> > Do you think there's a possibility that people could fix pgAdmin 3, even if
> > it means making a fork or whatever?
>
> You'd probably be better advised to ask this on the pgadmin project lists.
>
> Certainly, anybody who cares to fork and maintain pgAdmin 3 could do so.
> It's a matter of finding people with the time, interest, and technical
> ability.
>
> regards, tom lane




Re: Encryption of Data Specific to a Tenant in PostgreSQL database | General Idea

2021-02-11 Thread Martin Ritchie
I have had good luck with security barrier views and performance. The main
thing the security_barrier option does is ensure that the view's own WHERE
conditions are applied before any user-supplied conditions, so rows cannot
leak through "leaky" functions.

The big consideration IMHO is how many tenants you are dealing with. For a
couple of tenants, it is best to give them separate databases. For more than
a couple but fewer than 50 or so, separate schemas work best. If there are
thousands of tenants (as in an internet application), then security barrier
views are usually the easiest option to manage.

Martin Ritchie
*Geotab*
Senior DBA
Direct +1 (519) 741-7660
Toll-free +1 (877) 436-8221
Visit www.geotab.com
Twitter <https://twitter.com/geotab> | Facebook
<https://www.facebook.com/Geotab> | YouTube
<https://www.youtube.com/user/MyGeotab> | LinkedIn
<https://www.linkedin.com/company/geotab/>


On Thu, Feb 11, 2021 at 9:33 PM Rob Sargent  wrote:

> DBA=~super user
> If your clients have dba privs you need separate (vertical)servers
>
>
>
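
For reference, a minimal sketch of the security-barrier-view approach (table,
view, role and setting names are all illustrative):

  CREATE TABLE tenant_data (
      tenant_id int  NOT NULL,
      payload   text
  );

  CREATE VIEW tenant_data_v WITH (security_barrier) AS
      SELECT tenant_id, payload
      FROM tenant_data
      WHERE tenant_id = current_setting('app.current_tenant')::int;

  -- The application role sees only the view, never the base table.
  GRANT SELECT ON tenant_data_v TO app_user;

  -- Each session identifies its tenant before querying:
  -- SET app.current_tenant = '42';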


Proposal to introduce a shuffle function to intarray extension

2022-07-15 Thread Martin Kalcher

Dear list,

I am dealing with an application that processes fairly large arrays of 
integers. It makes heavy use of the intarray extension, which works 
great in most cases. However, there are two requirements that cannot be 
addressed by the extension and are rather slow with plain SQL. Both can 
be met with shuffling:


- Taking n random members from an integer array
- Splitting an array into n chunks, where each member is assigned to a 
random chunk


Shuffling is currently implemented by unnesting the array, ordering the 
members by random() and aggregating them again.



  create table numbers (arr int[]);

  insert into numbers (arr)
  select array_agg(i)
  from generate_series(1, 400) i;


  select arr[1:3]::text || ' ... ' || arr[398:400]::text
  from (
select array_agg(n order by random()) arr
from (
  select unnest(arr) n from numbers
) plain
  ) shuffled;

  -
   {2717290,3093757,2426384} ... {3011871,1402540,1613647}

  Time: 2348.961 ms (00:02.349)


I wrote a small extension (see source code below) to see how much we can 
gain, when the shuffling is implemented in C and the results speak for 
themselves:



  select arr[1:3]::text || ' ... ' || arr[398:400]::text
  from (
select shuffle(arr) arr from numbers
  ) shuffled;

  
   {1313971,3593627,86630} ... {50764,430410,3901128}

  Time: 132.151 ms


I would like to see a function like this inside the intarray extension. 
Is there any way to get to this point? What is the process for dealing with 
such proposals?


Best regards,
Martin Kalcher


Source code of extension mentioned above:


#include "postgres.h"
#include "port.h"
#include "utils/array.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(shuffle);

void _shuffle(int32 *a, int len);

Datum
shuffle(PG_FUNCTION_ARGS)
{
  ArrayType  *a = PG_GETARG_ARRAYTYPE_P_COPY(0);

  int len;

  if (array_contains_nulls(a))
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
 errmsg("array must not contain nulls")));

  len = ArrayGetNItems(ARR_NDIM(a), ARR_DIMS(a));

  if (len > 1)
_shuffle((int32 *) ARR_DATA_PTR(a), len);

  PG_RETURN_POINTER(a);
}

/* In-place Fisher-Yates shuffle of an int32 array. */
void
_shuffle(int32 *a, int len) {
  int i, j;
  int32 tmp;

  for (i = len - 1; i > 0; i--) {
j = random() % (i + 1);
tmp = a[i];
a[i] = a[j];
a[j] = tmp;
  }
}
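
For completeness, the SQL declaration needed to expose the C function might
look roughly like this (the shared library name is an assumption):

  -- Assumed SQL glue for the C function above.
  CREATE FUNCTION shuffle(int[]) RETURNS int[]
      AS '$libdir/int_shuffle', 'shuffle'
      LANGUAGE C STRICT;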







Re: Proposal to introduce a shuffle function to intarray extension

2022-07-16 Thread Martin Kalcher

Am 16.07.22 um 18:53 schrieb Mladen Gogala:

On 7/15/22 04:36, Martin Kalcher wrote:

Dear list,

i am dealing with an application that processes fairly large arrays of
integers. It makes heavy use of the intarray extension, which works
great in most cases. However, there are two requirements that cannot
be addressed by the extension and are rather slow with plain SQL. Both
can be met with shuffling:

- Taking n random members from an integer array
- Splitting an array into n chunks, where each member is assigned to a
random chunk

Shuffling is currently implemented by unnesting the array, ordering
the members by random() and aggregating them again.



Martin, have you considered PL/Python and NumPy module?


Hey Mladen,

thank you for your advice. Unfortunately the performance of shuffling 
with NumPy is about the same as with SQL.


  create function numpy_shuffle(arr int[])
  returns int[]
  as $$
import numpy
numpy.random.shuffle(arr)
return arr
  $$ language 'plpython3u';

  select arr[1:3]::text || ' ... ' || arr[398:400]::text
  from (
select numpy_shuffle(arr) arr from numbers
  ) shuffled;

  ---
   {674026,3306457,1727170} ... {343875,3825484,1235246}

  Time: 2315.431 ms (00:02.315)

Am I doing something wrong?

Martin




"Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
We have a replication slot set up on a database in Postgres 12.8. For one
of the tables, when a row is created, we see a series of records come
through - an INSERT followed by a handful of UPDATEs. All of these messages
in the WAL files show all columns, except for the last UPDATE message,
which is missing one of the columns. (The column in question is a JSONB
field, and the data is not overly large - less than 1000 chars.)  We
think this is causing the data to come into our data lake (via Stitch) with
that column as NULL.

My understanding is that all INSERT and UPDATE messages written to the
replication logs are supposed to include all columns. But I can't find a
definitive answer on that.

So, my first question is: Is it normal / expected for UPDATE messages in
the replication logs to exclude any columns in the table?

And, of course, if that is unexpected behavior, I'd love to hear any
thoughts on what may cause it.

Thanks.

-Kevin


FYI.  I have this question posted also on StackOverflow:
https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message


Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
My apologies.  Read that in the mailing list instructions, and still
forgot.  Thanks for the reminder.

On Wed, Aug 3, 2022 at 11:12 AM Adrian Klaver 
wrote:

> On 8/3/22 08:09, Kevin Martin wrote:
>
> Please reply to list also
> Ccing list
>
> I don't have answers to below at the moment, just getting thread back to
> list so others who might have answers can see it.
>
> > Thanks for the reply, Adrian.
> >
> > We're looking at the messages in the replication slot using
> > pg_logical_slot_peek_changes in the source db.
> >
> > In those messages, we see some UPDATEs that do not include one of the
> > columns in the table.
> >
> > I'm not sure what statements are producing the updates to the table
> > from the application, if that is what you are asking.  Does the update
> > against the database have to include all columns in order for the
> > replication log to have them all?
> >
> > I thought that any UPDATE message in the replication logs is designed to
> > include all values for all columns.  Is that correct?
> >
> > The data is not showing up in the replica table.  In this case, though,
> > the replication slot is being queried by Stitch to produce a copy in
> > Snowflake.  That is probably somewhat irrelevant to the current
> > question, though, since we appear to be seeing missing data in the
> > replication slot messages on the source.
> >
> > I'm on the receiving side of this issue and am working with my DBA on
> > trying to figure it out, so I'm not fully versed in how all of this
> > works.  I can try to get more information if it helps.  I have seen the
> > output from the peek function, and there are clearly some UPDATE
> > messages that have the column / values in question and some that do not.
> >
> > On Wed, Aug 3, 2022 at 10:47 AM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 8/3/22 06:50, Kevin Martin wrote:
> >  > We have a replication slot set up on a database in Postgres 12.8.
> > For
> >  > one of the tables, when a row is created, we see a series of
> records
> >  > come through - an INSERT followed by a handful of UPDATEs. All of
> > these
> >  > messages in the WAL files show all columns, except for the last
> > UPDATE
> >  > message, which is missing one of the columns. (The column in
> > question is
> >  > a JSONB field, and the data is not overly large - less than 1000
> >  > chars.)  We think this is causing the data to come into our data
> > lake
> >  > (via Stitch) with that column as NULL.
> >
> > See the messages where and/or how?
> >
> > What is the UPDATE command that is being given on primary?
> >
> > Is the data showing up in the replica table?
> >
> >  >
> >  > My understanding is that all INSERT and UPDATE messages written
> > to the
> >  > replication logs are supposed to include all columns. But I can't
> > find a
> >  > definitive answer on that.
> >  >
> >  > So, my first question is: Is it normal / expected for UPDATE
> > messages in
> >  > the replication logs to exclude any columns in the table?
> >  >
> >  > And, of course, if that is unexpected behavior, I'd love to hear
> any
> >  > thoughts on what may cause it.
> >  >
> >  > Thanks.
> >  >
> >  > -Kevin
> >  >
> >  >
> >  > FYI.  I have this question posted also on StackOverflow:
> >  >
> >
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> >
> >
> >  >
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> > <
> https://stackoverflow.com/questions/73222107/missing-column-in-postgres-logical-replication-update-message
> >>
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: "Missing" column in Postgres logical replication update message

2022-08-03 Thread Kevin Martin
Thanks for the response, Tom.  I think that's exactly what's going on here.

On Wed, Aug 3, 2022 at 11:28 AM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 8/3/22 08:09, Kevin Martin wrote:
> >> I thought that any UPDATE message in the replication logs is designed
> to
> >> include all values for all columns.  Is that correct?
>
> Buried deep in the wire protocol specification is this bit:
>
> 
> The following message parts are shared by the above messages.
>
> TupleData
>
> Int16
> Number of columns.
>
> Next, one of the following submessages appears for each column (except
> generated columns):
>
> ...
>
> Byte1('u')
> Identifies unchanged TOASTed value (the actual value is not sent).
> 
>
> So that right there is two cases where we omit data for a column.
> I suspect the "unchanged toasted data" case is what matters for
> your purposes.  Maybe you've found some code that fails to implement
> that correctly?
>
> regards, tom lane
>
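
For reference, a minimal sketch that reproduces the unchanged-TOAST case Tom
describes, using the built-in test_decoding plugin (requires wal_level =
logical; table and slot names are illustrative):

  CREATE TABLE toast_demo (id int PRIMARY KEY, doc jsonb, counter int);
  -- Keep doc uncompressed and out of line, so updating counter leaves it
  -- as an unchanged TOAST pointer.
  ALTER TABLE toast_demo ALTER COLUMN doc SET STORAGE EXTERNAL;

  SELECT pg_create_logical_replication_slot('toast_demo_slot', 'test_decoding');

  INSERT INTO toast_demo
  VALUES (1, jsonb_build_object('filler', repeat('x', 10000)), 0);
  UPDATE toast_demo SET counter = counter + 1 WHERE id = 1;  -- doc untouched

  -- The UPDATE change lists doc as "unchanged-toast-datum" instead of its value.
  SELECT data FROM pg_logical_slot_peek_changes('toast_demo_slot', NULL, NULL);

  SELECT pg_drop_replication_slot('toast_demo_slot');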


Missing query plan for auto_explain.

2022-08-30 Thread Matheus Martin
Our Postgres recently started reporting considerably different
execution times for the same query. When executed from our JDBC
application the Postgres logs report an average execution time of 1500
ms but when the query is manually executed through `psql` it doesn't
take longer than 50 ms.

With a view to investigate discrepancies in the plan we enabled
`auto_explain` in `session_preload_libraries` with
`auto_explain.log_min_duration = '1s'`. All application servers were
bounced to ensure new connections were created and picked up the
changes. However this trouble query does not have an explain plan
printed, even when its execution time exceeds the threshold (other
queries do though).

Does anyone have ideas of why the explain plan is not being printed?

Sample log entry for trouble query executed from application:
```
Aug 26 09:11:33 db-931 postgres[8106]: [66-1] 2022-08-26 09:11:33 GMT
[8106]: [5-1] db=betwave,user=betwave_app_readonly_user LOG:
duration: 1423.481 ms  bind : /*@
org.sixty6.as66.QueryID:report.framework.userSearch */select
users.user_id, users.unique_identifier, user_profile.name,
user_profile.email, organisation.organisation_id,
organisation.description, user_realm.user_realm_id,
user_realm.description as col_8, user_realm.type,
user_realm_category.description as col_10, vip_schema.description as
col_11, vip_level.description as col_12, affiliate.description as
col_13, users.status, users.creation_date,
user_statistics.last_user_session_creation_date,
users.real_base_currency, users.fun_base_currency,
local_balance_account.balance
Aug 26 09:11:33 db-931 postgres[8106]: [66-2] #011from users users
Aug 26 09:11:33 db-931 postgres[8106]: [66-3] #011join user_profile user_profile
Aug 26 09:11:33 db-931 postgres[8106]: [66-4] #011on
user_profile.user_id = users.user_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-5] #011join user_realm user_realm
Aug 26 09:11:33 db-931 postgres[8106]: [66-6] #011on
user_realm.user_realm_id = users.user_realm_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-7] #011join organisation organisation
Aug 26 09:11:33 db-931 postgres[8106]: [66-8] #011on
organisation.organisation_id = user_realm.organisation_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-9] #011join
acl_allowed_organisation acl_allowed_organisation
Aug 26 09:11:33 db-931 postgres[8106]: [66-10] #011on
acl_allowed_organisation.organisation_id =
organisation.organisation_id and acl_allowed_organisation.permission =
$1 and acl_allowed_organisation.user_id = $2
Aug 26 09:11:33 db-931 postgres[8106]: [66-11] #011join affiliate affiliate
Aug 26 09:11:33 db-931 postgres[8106]: [66-12] #011on
affiliate.affiliate_id = users.affiliate_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-13] #011join
acl_allowed_affiliate acl_allowed_affiliate
Aug 26 09:11:33 db-931 postgres[8106]: [66-14] #011on
acl_allowed_affiliate.affiliate_id = affiliate.affiliate_id and
acl_allowed_affiliate.permission = $3 and
acl_allowed_affiliate.user_id = $4
Aug 26 09:11:33 db-931 postgres[8106]: [66-15] #011join
acl_allowed_user_realm_category acl_allowed_user_realm_category
Aug 26 09:11:33 db-931 postgres[8106]: [66-16] #011on
acl_allowed_user_realm_category.user_realm_category_id =
user_realm.user_realm_category_id and
acl_allowed_user_realm_category.permission = $5 and
acl_allowed_user_realm_category.user_id = $6
Aug 26 09:11:33 db-931 postgres[8106]: [66-17] #011join
user_statistics user_statistics
Aug 26 09:11:33 db-931 postgres[8106]: [66-18] #011on
user_statistics.user_id = users.user_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-19] #011join vip_level vip_level
Aug 26 09:11:33 db-931 postgres[8106]: [66-20] #011on
vip_level.vip_level_id = users.vip_level_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-21] #011join vip_schema vip_schema
Aug 26 09:11:33 db-931 postgres[8106]: [66-22] #011on
vip_schema.vip_schema_id = vip_level.vip_schema_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-23] #011join
user_realm_category user_realm_category
Aug 26 09:11:33 db-931 postgres[8106]: [66-24] #011on
user_realm_category.user_realm_category_id =
user_realm.user_realm_category_id
Aug 26 09:11:33 db-931 postgres[8106]: [66-25] #011left join
local_balance_account local_balance_account
Aug 26 09:11:33 db-931 postgres[8106]: [66-26] #011on
local_balance_account.user_id = users.user_id and
local_balance_account.currency = users.real_base_currency and
local_balance_account.balance_category = $7 and
local_balance_account.primary_balance = $8
Aug 26 09:11:33 db-931 postgres[8106]: [66-27] #011where users.type in
($9, $10) and user_realm.type in ($11, $12, $13) and
users.unique_identifier like $14 escape '\'
Aug 26 09:11:33 db-931 postgres[8106]: [66-28] #011order by
users.unique_identifier asc, user_realm.user_realm_id asc
Aug 26 09:11:33 db-931 postgres[8106]: [66-29] #011limit $15
Aug 26 09:11:33 db-931 postgres[8106]: [66-30] 2022-08-26 09:11:33 GMT
[8106]: [6-1] db=betwave,user=betwave_app_readonly_user DETAIL:
parameters: $1 = 'READ', $2 = '1468137', $3 = 'READ

Re: Missing query plan for auto_explain.

2022-08-30 Thread Matheus Martin
Good idea on using an actual prepared statement but unfortunately it didn't
produce any different result.

Could you please elaborate a bit on your advice concerning
ExecutorEnd/PortalCleanup? I am afraid it doesn't mean much to me.

On Tue, 30 Aug 2022 at 12:16, Alvaro Herrera 
wrote:

> On 2022-Aug-30, Matheus Martin wrote:
>
> > Our Postgres recently started reporting considerably different
> > execution times for the same query. When executed from our JDBC
> > application the Postgres logs report an average execution time of 1500
> > ms but when the query is manually executed through `psql` it doesn't
> > take longer than 50 ms.
>
> I don't know why the plan is not saved by auto_explain (maybe we're
> missing ExecutorEnd calls somewhere?  that would be strange), but one
> frequent reason for queries to show different plan in JDBC than psql is
> the use of prepared statements.  Did you try using "PREPARE
> yourquery(...)" and then EXPLAIN EXECUTE(...)?  Sometimes that helps to
> recreate the original problem.
>
>
> (Apparently, ExecutorEnd is called from PortalCleanup; what happens with
> the portal for an extended-protocol query?)
>
> --
> Álvaro Herrera PostgreSQL Developer  —
> https://www.EnterpriseDB.com/
>


Re: Missing query plan for auto_explain.

2022-08-30 Thread Matheus Martin
The threshold for `auto_explain` was changed to 500 ms and explain plans
are still not being logged.

On Tue, 30 Aug 2022 at 13:30, Julien Rouhaud  wrote:

> Hi,
>
> On Tue, Aug 30, 2022 at 01:16:43PM +0200, Alvaro Herrera wrote:
> > On 2022-Aug-30, Matheus Martin wrote:
> >
> > > Our Postgres recently started reporting considerably different
> > > execution times for the same query. When executed from our JDBC
> > > application the Postgres logs report an average execution time of 1500
> > > ms but when the query is manually executed through `psql` it doesn't
> > > take longer than 50 ms.
> >
> > I don't know why the plan is not saved by auto_explain (maybe we're
> > missing ExecutorEnd calls somewhere?  that would be strange), but one
> > frequent reason for queries to show different plan in JDBC than psql is
> > the use of prepared statements.  Did you try using "PREPARE
> > yourquery(...)" and then EXPLAIN EXECUTE(...)?  Sometimes that helps to
> > recreate the original problem.
> >
> > (Apparently, ExecutorEnd is called from PortalCleanup; what happens with
> > the portal for an extended-protocol query?)
>
> AFAICS log_min_duration_statements threshold is based on the full query
> processing time while auto_explain is only based on the executor runtime,
> so
> one more likely explanation is that out of the 1423ms, more than 423ms were
> spent in the planner?
>


Re: Missing query plan for auto_explain.

2022-09-01 Thread Matheus Martin
We tried running the prepared statement six times as suggested but still
weren't able to recreate the original problem.

Perhaps more concerning/relevant is that we have not found any explanation
to why the explain plan is not being logged by `auto_explain`. Could this
be a bug? Shall we report it?

On Tue, 30 Aug 2022 at 17:45, Alvaro Herrera 
wrote:

> On 2022-Aug-30, Matheus Martin wrote:
>
> > Good idea on using an actual prepared statement but unfortunately it
> didn't
> > produce any different result.
>
> I should have also mentioned to try the EXPLAIN EXECUTE six times and
> see if the last one produces a different plan.  That's when it switches
> from planning every time to planning with generic arguments, as I
> recall.
>
> > Could you please elaborate a bit on your advice concerning
> > ExecutorEnd/PortalCleanup? I am afraid it doesn't mean much to me.
>
> That wasn't advice actually, just a note that the code might be doing
> that thing wrong, causing auto_explain to miss it.  This is an unproven
> hypothesis that is likely to be all wrong.
>
> --
> Álvaro Herrera PostgreSQL Developer  —
> https://www.EnterpriseDB.com/
> "I'm impressed how quickly you are fixing this obscure issue. I came from
> MS SQL and it would be hard for me to put into words how much of a better
> job
> you all are doing on [PostgreSQL]."
>  Steve Midgley,
> http://archives.postgresql.org/pgsql-sql/2008-08/msg0.php
>
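
A sketch of the suggested PREPARE / EXPLAIN EXECUTE reproduction, with the
query heavily abbreviated from the logged statement (parameter types and
values are assumptions):

  -- Run the EXECUTE at least six times: the plan cache only considers a
  -- generic plan once it has five custom-plan executions to compare against.
  PREPARE usersearch(text, bigint) AS
  SELECT u.user_id, u.unique_identifier
  FROM users u
  JOIN user_realm r ON r.user_realm_id = u.user_realm_id
  WHERE u.unique_identifier LIKE $1
  ORDER BY u.unique_identifier
  LIMIT $2;

  EXPLAIN (ANALYZE, BUFFERS) EXECUTE usersearch('ABC%', 20);

  DEALLOCATE usersearch;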


Re: Missing query plan for auto_explain.

2022-09-02 Thread Matheus Martin
Yes, we do see some plans logged by auto_explain. We couldn't find an
`auto_explain.log_min_duration_statements` setting, but
`log_min_duration_statement` (as in
https://www.postgresql.org/docs/current/runtime-config-logging.html) is set
to 100 ms.

Unfortunately, due to the amount of traffic we have, we cannot lower the
threshold for the auto_explain.

`log_lock_waits` is turned on and the logs do not indicate any locks
related to the tables in the query.

On Fri, 2 Sept 2022 at 05:49, Julien Rouhaud  wrote:

> Hi,
>
> On Thu, Sep 01, 2022 at 08:20:13PM +0100, Matheus Martin wrote:
> > We tried running the prepared statement six times as suggested but wasn't
> > still able to recreate the original problem.
> >
> > Perhaps more concerning/relevant is that we have not found any
> explanation
> > to why the explain plan is not being logged by `auto_explain`. Could this
> > be a bug? Shall we report it?
>
> Just to be sure, do you get at least some plans logged by auto_explain when
> queries are executed by the JDBC application?
>
> Can you try to temporarily lower auto_explain.log_min_duration_statements
> to
> less than 50ms and see what auto_explain sees for the execution time (and
> planning time).
>
> Another possibility would be some conflicting locks held.  If the conflict
> happens during the planning auto_explain still won't be triggered as it's
> outside the executor.  Also, have you enabled log_lock_waits?
>


Re: Missing query plan for auto_explain.

2022-09-05 Thread Matheus Martin
`auto_explain.log_min_duration` is set to 500 ms.


On Mon, 5 Sept 2022 at 12:35, Peter J. Holzer  wrote:

> On 2022-09-02 10:58:58 +0100, Matheus Martin wrote:
> > Yes, we do see some plans logged by the auto_explain. We couldn't find a
> > `auto_explain.log_min_duration_statements` setting
>
> This is weird as the documentation says:
>
> | Note that the default behavior is to do nothing, so you must set at
> | least auto_explain.log_min_duration if you want any results.
>
> What does
> show auto_explain.log_min_duration;
> return?
>
> > but `log_min_duration_statement` as in
> > https://www.postgresql.org/docs/current/ runtime-config-logging.html
> > is set to 100 ms.
>
> I don't think this affects auto_explain.
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Re: Missing query plan for auto_explain.

2022-09-08 Thread Matheus Martin
We do have JIT enabled `jit=on` with `jit_above_cost=10`.

I am sorry but I don't quite understand what role JIT plays in
this situation with `auto_explain`. Could you please elaborate on that?

On Tue, 6 Sept 2022 at 00:29, Maxim Boguk  wrote:

>
>
> On Tue, Aug 30, 2022 at 1:38 PM Matheus Martin <
> matheus.mar...@voidbridge.com> wrote:
>
>> Our Postgres recently started reporting considerably different execution 
>> times for the same query. When executed from our JDBC application the 
>> Postgres logs report an average execution time of 1500 ms but when the query 
>> is manually executed through `psql` it doesn't take longer than 50 ms.
>>
>> With a view to investigate discrepancies in the plan we enabled 
>> `auto_explain` in `session_preload_libraries` with 
>> `auto_explain.log_min_duration = '1s'`. All application servers were bounced 
>> to ensure new connections were created and picked up the changes. However 
>> this trouble query does not have an explain plan printed, even when its 
>> execution time exceeds the threshold (other queries do though).
>>
>> Does anyone have ideas of why the explain plan is not being printed?
>>
>> Sample log entry for trouble query executed from application:
>> ```
>> Aug 26 09:11:33 db-931 postgres[8106]: [66-1] 2022-08-26 09:11:33 GMT 
>> [8106]: [5-1] db=betwave,user=betwave_app_readonly_user LOG:  duration: 
>> 1423.481 ms  bind :
>>
>>
> My understanding of how auto_explain works: it deals only with execution
> calls, but in your case the duration of 1423.481 ms is on the BIND call,
> before query execution.
> At least in my understanding, auto_explain cannot work and will not help
> in the case of a slow BIND call (because that is time when the query is
> planned but not executed).
> According documentation:
> "Query planning typically occurs when the Bind message is processed. If
> the prepared statement has no parameters, or is executed repeatedly, the
> server might save the created plan and re-use it during subsequent Bind
> messages for the same prepared statement."
> Hard to say what the reason for slow planning, but one (there could be
> others) likely reason is JIT work. Do you have JIT enabled?
>
>
> --
> Maxim Boguk
> Senior Postgresql DBA
> https://dataegret.com/
>
> Phone UA: +380 99 143 
> Phone AU: +61  45 218 5678
>
> LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
> Skype: maxim.boguk
>
> "Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно
> когда я так делаю ещё раз?"
>
>
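
A quick, low-risk way to test the JIT hypothesis is to turn JIT off for the
affected role (or one test session) and compare the logged BIND times; a
sketch, using the role name from the log:

  ALTER ROLE betwave_app_readonly_user SET jit = off;  -- applies to new sessions

  -- Or just for an interactive test session:
  SET jit = off;

  -- Revert afterwards:
  -- ALTER ROLE betwave_app_readonly_user RESET jit;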


Re: Missing query plan for auto_explain.

2022-09-12 Thread Matheus Martin
 rows=1
width=60) (actual time=0.008..0.008 rows=1 loops=23)

 Index Cond: ((user_realm_id)::text = (users.user_realm_id)::text)

 Filter: ((type)::text = ANY ('{ADMIN,HUB,PLAYER}'::text[]))

 Buffers: shared hit=69
   ->  Index
Scan using pk_organisation on organisation  (cost=0.27..0.43 rows=1
width=16) (actual time=0.006..0.006 rows=1 loops=23)
 Index
Cond: ((organisation_id)::text = (user_realm.organisation_id)::text)

 Buffers: shared hit=69
 ->  Index Only
Scan using pk_acl_allowed_organisation on acl_allowed_organisation
 (cost=0.29..1.38 rows=1 width=9) (actual time=0.007..0.007 rows=1 loops=23)
   Index Cond:
((user_id = '1468137'::bigint) AND (organisation_id =
(user_realm.organisation_id)::text) AND (permission = 'READ'::text))
   Heap
Fetches: 0
   Buffers:
shared hit=47
   ->  Index Scan using
pk_affiliate on affiliate  (cost=0.28..0.86 rows=1 width=21) (actual
time=0.005..0.005 rows=1 loops=23)
 Index Cond:
(affiliate_id = users.affiliate_id)
 Buffers: shared
hit=69
 ->  Index Only Scan using
pk_acl_allowed_user_realm_category on acl_allowed_user_realm_category
 (cost=0.28..0.59 rows=1 width=9) (actual time=0.008..0.008 rows=1 loops=23)
   Index Cond: ((user_id =
'1468137'::bigint) AND (user_realm_category_id =
(user_realm.user_realm_category_id)::text) AND (permission = 'READ'::text))
   Heap Fetches: 0
   Buffers: shared hit=47
   ->  Index Scan using
pk_user_statistics on user_statistics  (cost=0.43..0.51 rows=1 width=16)
(actual time=0.030..0.030 rows=1 loops=23)
 Index Cond: (user_id =
user_profile.user_id)
 Buffers: shared hit=92
dirtied=1
 ->  Index Scan using pk_vip_level on
vip_level  (cost=0.14..0.22 rows=1 width=20) (actual time=0.005..0.005
rows=1 loops=23)
   Index Cond: (vip_level_id =
users.vip_level_id)
   Buffers: shared hit=46
   ->  Index Scan using pk_vip_schema on vip_schema
 (cost=0.13..0.63 rows=1 width=22) (actual time=0.005..0.005 rows=1
loops=23)
 Index Cond: ((vip_schema_id)::text =
(vip_level.vip_schema_id)::text)
 Buffers: shared hit=46
 ->  Index Scan using pk_user_realm_category on
user_realm_category  (cost=0.13..0.18 rows=1 width=1032) (actual
time=0.005..0.005 rows=1 loops=23)
   Index Cond: ((user_realm_category_id)::text =
(user_realm.user_realm_category_id)::text)
   Buffers: shared hit=46
   ->  Index Scan using local_balance_account_unq01 on
local_balance_account  (cost=0.29..6.45 rows=1 width=15) (actual
time=0.020..0.020 rows=1 loops=23)
 Index Cond: ((user_id = users.user_id) AND
((balance_category)::text = 'CASH'::text))
 Filter: ((currency)::text =
(users.real_base_currency)::text)
 Buffers: shared hit=67
 Planning:
   Buffers: shared hit=859
 Planning Time: 50.833 ms
 Execution Time: 5.480 ms
(85 rows)

On Fri, 9 Sept 2022 at 22:06, Maxim Boguk  wrote:

>
>
> On Thu, Sep 8, 2022 at 1:18 PM Matheus Martin <
> matheus.mar...@voidbridge.com> wrote:
>
>> We do have JIT enabled `jit=on` with `jit_above_cost=10`.
>>
>> I am sorry but I don't quite understand what role JIT plays in
>> this situation with `auto_explain`. Could you please elaborate on that?
>>
>>
> In your log the time is spent not during the execution stage (where
> auto_explain can help) but during the bind/planning stage (i.e. generation
> of the plan).
> So you have a problem not with slow query execution, but with slow query
> planning, so auto_explain cannot help in that case.
> JIT is one possible explanation of planning stage slowdown.
>
> Can you run explain (analyze, costs, buffers, timing) of your query?
> And check how long the planning stage took and check if JIT was used or
> not (and how much time spent during JIT if it had been used).
>
>
>
> --
> Maxim Boguk
> Senior Postgresql DBA
> https://dataegret.com/
>
> Phone UA: +380 99 143 
> Phone AU: +61  45 218 5678
>
>


pg_multixact_member file limits

2023-01-09 Thread Martin Ritchie
Are there any limits on the number of files in
the postgresql/12/main/pg_multixact/members directory? We have a database
that has grown to tens of thousands of files in this directory during an
autovacuum after a large data purge. It shrank after the autovacuum
completed.

Martin Ritchie
*Geotab*
Senior DBA
Direct +1 (519) 741-7660
Toll-free +1 (877) 436-8221
Visit www.geotab.com
Twitter <https://twitter.com/geotab> | Facebook
<https://www.facebook.com/Geotab> | YouTube
<https://www.youtube.com/user/MyGeotab> | LinkedIn
<https://www.linkedin.com/company/geotab/>
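
As far as I know there is no hard per-directory file limit, but multixact
member space is finite and is only reclaimed once old multixacts have been
frozen and truncated away, which matches the directory shrinking after the
autovacuum finished. A sketch for keeping an eye on multixact age (the
relevant threshold is autovacuum_multixact_freeze_max_age):

  SELECT datname, mxid_age(datminmxid) AS multixact_age
  FROM pg_database
  ORDER BY multixact_age DESC;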


Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Martin Goodson
Can anyone offer any insight on what I need to check? At the moment it all 
seems somewhat ... mystifying.


I am assuming there must be something wrong with the box/our 
configuration somewhere, but where to look? If anyone can help - even if 
it's to tell me I'm an idiot for missing one or more incredibly basic 
things somehow - I would be very grateful.


Many thanks.

Regards,

M.

--
Martin Goodson.

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."





Re: Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-12 Thread Martin Goodson

On 12/03/2023 21:52, Tom Lane wrote:

Martin Goodson  writes:

So I simply removed the --with-gssapi, and tried again.
AND IT FAILED AGAIN.
This time it failed claiming it couldn't find the ldap library. Which is
most -definitely- present.

Hard to debug this sort of thing remotely when you don't supply the exact
error messages.  But ... do you have openldap-devel installed, or just
the base openldap package?


The compile step and make world steps work perfectly if the script is
run under root.

That is odd.  Permissions problems on the libraries, maybe?

regards, tom lane


Hi, Tom.

Sorry, I can get the complete log tomorrow - it's on my work PC, not my 
home. I clearly made insufficient notes, for which I apologize :(


Not sure about permissions on libraries. We just open up a session under 
root and execute yum install , and that has always worked in 
the past. Not sure what I'd need to check? I can perhaps ask our 
friendly neighbourhood UNIX sysadmin to check those?


We did install openldap and openldap-devel, however:

yum install pam-devel
yum install libxml2-devel
yum install libxslt-devel
yum install openldap
yum install openldap-devel
yum install uuid-devel
yum install readline-devel
yum install openssl-devel
yum install libicu-devel
yum install uuid-devel
yum install gcc
yum install make

Regards,

M.


--
Martin Goodson.

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."





Re: Help? Unexpected PostgreSQL compilation failure using generic compile script

2023-03-15 Thread Martin Goodson

On 13/03/2023 00:02, Adrian Klaver wrote:


On 3/12/23 14:43, Martin Goodson wrote:

Hello.

For reasons I won't bore you with, we compile PostgreSQL from source 
rather than use the standard packages for some of our databases.





So a fairly basic script that has been used for years suddenly fails 
on a fairly generic RHEL 7.9 server.


I am no compilation expert. Obviously. Have I missed something basic? 
As I said, we've not seen problems like this before. Could there be 
some sort of issue with the box's configuration? If it works for root 
but not for our usual build user, could there be something in our 
account's user configuration? Can anyone offer any insight on what I need 
to check? At the moment it all seems somewhat ... mystifying.


SELinux issues?

Have you looked at the system logs to see if they shed any light?


Apologies for the delay in replying, it's been a busy week.

After a spot more testing today I found the problem, and an embarrassing 
one it was too. Can't believe I didn't spot it earlier.


One of my colleagues had earlier used our 'generic build account' to 
install an older version of PostgreSQL on the same server, and had set 
the account's PATH and LD_LIBRARY_PATH to point to that version in the 
.bash_profile script.  That's something we don't normally do - our 
'build account' is deliberately left as a clean slate, as it were.


Bit bizarre it was somehow only causing problems with the compile check 
on the gssapi and ldap libraries, but there you go.


Feel a bit of a twit now, but definitely something I'll be explicitly 
checking beforehand on future compiles :(


--
Martin Goodson.

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."





Shell Command within function

2018-10-26 Thread Mike Martin
Is this possible?
I have a script which imports csvlogs into a table, and it would be useful
to truncate the log files after import

thanks

Mike
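
It can be done from SQL with COPY ... TO PROGRAM (available since 9.3), with
the usual caveats: the command runs on the database server as the postgres OS
user and requires superuser, or membership in pg_execute_server_program on
PostgreSQL 11 and later. A minimal sketch, with an illustrative path:

  -- The empty SELECT means nothing is piped to the program; the shell
  -- command simply truncates the file on the server.
  COPY (SELECT 1 WHERE false)
  TO PROGRAM 'truncate --size=0 /mnt/pgdata/data/log/postgresql-Mon.csv';

An untrusted procedural language such as plperlu or plpython3u is the other
common way to shell out from inside a function.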


Problem with commit in function

2018-10-30 Thread Mike Martin
I have the following function
-- FUNCTION: public.update_log()

-- DROP FUNCTION public.update_log();

CREATE OR REPLACE FUNCTION public.update_log(
)
RETURNS void
LANGUAGE 'sql'

COST 100
VOLATILE
AS $BODY$

truncate table postgres_log_tmp  ;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Mon.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Tue.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Wed.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Thu.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Fri.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Sat.csv' WITH
csv;
COPY postgres_log_tmp FROM '/mnt/pgdata/data/log/postgresql-Sun.csv' WITH
csv;
INSERT INTO postgres_log SELECT * from postgres_log_tmp ON
CONFLICT(session_id, session_line_num) DO NOTHING;
--COMMIT;
truncate table postgres_log_tmp  ;

$BODY$;

ALTER FUNCTION public.update_log()
OWNER TO postgres;

If I leave the second truncate statement in, nothing is written to
postgres_log. I assume the insert doesn't finish.

Any way to force it to finish before the truncation?


Re: editable spreadsheet style interface

2018-10-30 Thread Martin Mueller
I have used Aqua Data Studio for several years. Jetbrains recently released a 
similar product. Academic licensing is affordable (~ $200 a year) and very 
cheap if considered in terms of the time it saves you.

From: David Gauthier 
Date: Tuesday, October 30, 2018 at 2:06 PM
To: "pgsql-gene...@postgresql.org" 
Subject: editable spreadsheet style interface

I think I know the answer to this one but I'll ask anyway...

Is there a spreadsheet style interface to a PG DB where users can...
- lock records
- edit records
- submit changes (transaction)

Is there any after-market tool for PG that does something like this ?




Trouble Upgrading Postgres

2018-11-03 Thread Charles Martin
I'd be grateful for some help. I am trying to move a large database from
PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL 11 on
Centos 7. I can't do a pg_dump because it always fails on the largest
table. So I tried to do pg_basebackup and copy that to the new PG 11 server.
Except that pg_upgrade expects the new and old versions of PG to be
side-by-side. So I installed 9.6 on the new server, ran initdb, verified
that it started, then stopped it and edited postgresql.conf data path to
the location of the pg_basebackup files. Then 9.6 would no longer start. So
how can I get my PG 9.6 data into a new PG 11 database?

Probably related to my troubles are my attempts to get replication set up.
But before I dive back into that, I thought I'd better try getting my 9.6
data into the new 9.6 server, then run PG 11's pg_upgrade and mount the
data in PG 11. Then maybe I can get replication started.

I've read that logical replication can be used to migrate from 9.6 to 11,
but haven't found any documentation on doing that.

Chuck Martin


Re: Trouble Upgrading Postgres

2018-11-03 Thread Charles Martin
When I do a pg_dump using PG 9.6, I got this:

pg_dump: Dumping the contents of table "docfile" failed: PQgetCopyData()
failed.

pg_dump: Error message from server: server closed the connection
unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

pg_dump: The command was: COPY public.docfile (docfile_pkey,
docfileoriginalname, ordernumber, versionnum, docfilecontents, docfilepath,
docfileextension, enddatetime, endby, editnum, insby, insdatetime, modby,
moddatetime, active, doc_fkey) TO stdout;

I've looked and been unable to find where Centos 7, or Postgres 9.6, stores
the path to the config/data directory outside the data/postgresql.conf
file. But I agree there must be something somewhere.

Chuck

On Sat, Nov 3, 2018 at 6:06 PM Adrian Klaver 
wrote:

> On 11/3/18 2:56 PM, Charles Martin wrote:
>
> Please reply to list also.
> Ccing list.
>
> > Yes, if I could get pg_dump to work, I think I'd be fine. Maybe. But it
> > doesn't.
>
> Post the error you got to the list and we maybe able to help.
> Also which version of Postgres where you using to take the dump?
>
> >
> > I agree that I've confused Postgres, but I don't know how to resolve the
> > confusion. It is complicated by the fact that my original Centos 7
> > install included Postgres 9.2, so those files are hanging around, along
> > with 9.6 and 11.
> >
> > I posted the error messages I got when postgresql.conf had the data
> > directory set to my basebackup data:
> >
> > *postgresql-9.6.service: main process exited, code=exited,
> status=1/FAILURE*
> >
> > *
> > *
> >
> > Not very helpful.
> >
> >
> > systemctl status postgresql-9.6 provided a bit more info:
> >
> > *●*postgresql-9.6.service - PostgreSQL 9.6 database server
> >
> > Loaded: loaded (/usr/lib/systemd/system/postgresql-9.6.service;
> > disabled; vendor preset: disabled)
> >
> > Active: *failed*(Result: exit-code) since Sat 2018-11-03 15:05:30
> > EDT; 15s ago
> >
> >   Docs: https://www.postgresql.org/docs/9.6/static/
> >
> >Process: 32570 ExecStart=/usr/pgsql-9.6/bin/postmaster -D ${PGDATA}
> > *(code=exited, status=1/FAILURE)*
> >
> >Process: 32563
> > ExecStartPre=/usr/pgsql-9.6/bin/postgresql96-check-db-dir ${PGDATA}
> > (code=exited, status=0/SUCCESS)
> >
> >   Main PID: 32570 (code=exited, status=1/FAILURE)
> >
> >
> > Yet this went away, and PG 9.6 started, when I changed postgresql.conf
> > to point to the new (empty) data directory, which is confusing.
>
> No not confusing. Not that familiar  with RPM packaging as I am with the
> Debian/Ubunto packaging. Still if I remember correctly it also allows
> multiple instances of Postgres to run. To do that it has its own system
> of tracking the data directories. Where you created the new data
> directory is obviously where the package scripts expect to find it.  The
> pg_basebackup directory is not.
>
> >
> > Chuck
> >
> >
> > On Sat, Nov 3, 2018 at 5:17 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 11/3/18 12:57 PM, Charles Martin wrote:
> >  > I'd be grateful for some help. I am trying to move a large
> > database from
> >  > PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL
> > 11 on
> >  > Centos 7. I can't do a pg_dump because it always fails on the
> > largest
> >  > table.
> >
> > I would answer Ron's question on this first as solving it would be
> the
> > easiest fix.
> >
> >  >So tried to do pb_basebackup and copy that to the new PG 11
> >  > server. Except that pg_upgrade expects the new and old versions
> > of PG to
> >  > be side-by-side. So I installed 9.6 on the new server, ran initdb,
> >
> > The is probably the issue, you now have two 9.6 data directory
> > instances, the one you created with initdb and the one that came over
> > with pg_basebackup. I am guessing the editing below has left the
> server
> > in a confused state about which directory to use. The error messages
> > you
> > got when trying to restart the server would be helpful.
> >
> >  > verified that it started, then stopped it and edited
> postgresql.conf
> >  > data path to the location of the pg_basebackup files. Then 9.6
> > would no
> >  > longer start. So how can I get my PG 9.6 data into a new PG 11
> > database?
>

Re: Trouble Upgrading Postgres

2018-11-04 Thread Charles Martin
Adrian said:
>> pg_dump: Error message from server: server closed the connection
>> unexpectedly

>Is this error the client reporting?
>Is this the same that is showing up in the server log?

Yes, that's the client message, i.e. what appeared in the terminal window
that gave the command. The server log shows:

2018-11-02 20:30:46 EDT [20405]: [4-1] user=,db= LOG:  server process (PID
30438) was terminated by signal 9: Killed

2018-11-02 20:30:46 EDT [20405]: [5-1] user=,db= DETAIL:  Failed process
was running: COPY public.docfile (docfile_pkey, docfileoriginalname,
ordernumber, versionnum, docfilecontents, docfilepath, d$

2018-11-02 20:30:46 EDT [20405]: [6-1] user=,db= LOG:  terminating any
other active server processes

2018-11-02 20:30:46 EDT [20415]: [10-1] user=,db= WARNING:  terminating
connection because of crash of another server process

2018-11-02 20:30:46 EDT [20415]: [11-1] user=,db= DETAIL:  The postmaster
has commanded this server process to roll back the current transaction and
exit, because another server process exited abnor$



>So where is the server located relative to the pg_dump client?
>On the same machine?
>If so is it a virtual machine e.g AWS?
>Across a local or remote network?


 I gave the command in a terminal session after SSHing to the server from
the same network. It is not a virtual machine.


Laurenz said:


>You probably have a corrupted database.
>You should get that fixed first, then you can upgrade.
>Maybe you should hire a professional for that.


I suspect this is correct, both that there is corruption in the table
and that I need a professional to help. If someone here is available, I'm
interested.


Andreas said:


>which exact minor version please?


PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit


Re: Trouble Upgrading Postgres

2018-11-04 Thread Charles Martin
Yep, you called it:

Nov  2 20:30:45 localhost kernel: Out of memory: Kill process 30438
(postmaster) score 709 or sacrifice child
Nov  2 20:30:45 localhost kernel: Killed process 30438, UID 26,
(postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB

So it's running out of memory when trying to dump this table. The "old"
server has 4GB of ram, the "new" server 20GB.
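
Given that the backend is killed partway through COPYing public.docfile, it
may help to profile how large the stored values in docfilecontents actually
are; a sketch using pg_column_size(), which reports the stored (possibly
compressed) size without detoasting anything:

  SELECT count(*)                                     AS row_count,
         max(pg_column_size(docfilecontents))         AS largest_bytes,
         avg(pg_column_size(docfilecontents))::bigint AS average_bytes
  FROM docfile;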


On Sun, Nov 4, 2018 at 3:13 PM Adrian Klaver 
wrote:

> On 11/4/18 8:38 AM, Charles Martin wrote:
> >
> > Adtrian said:
> >>> pg_dump: Error message from server: server closed the connection
> >>> unexpectedly
> >
> >  >Is this error the client reporting?
> >  >Is this the same that is showing up in the server log?
> >
> > Yes, that's the client message, i.e. what appeared in the terminal
> > window that gave the command. The server log shows:
> >
> > 2018-11-02 20:30:46 EDT [20405]: [4-1] user=,db= LOG:  server process
> > (PID 30438) was terminated by signal 9: Killed
> >
> > 2018-11-02 20:30:46 EDT [20405]: [5-1] user=,db= DETAIL:  Failed process
> > was running: COPY public.docfile (docfile_pkey, docfileoriginalname,
> > ordernumber, versionnum, docfilecontents, docfilepath, d$
> >
> > 2018-11-02 20:30:46 EDT [20405]: [6-1] user=,db= LOG:  terminating any
> > other active server processes
> >
> > 2018-11-02 20:30:46 EDT [20415]: [10-1] user=,db= WARNING:  terminating
> > connection because of crash of another server process
> >
> > 2018-11-02 20:30:46 EDT [20415]: [11-1] user=,db= DETAIL:  The
> > postmaster has commanded this server process to roll back the current
> > transaction and exit, because another server process exited abnor$
> >
> >
> >
> >>So where is the server located relative to the pg_dump client?
> >>On the same machine?
> >>If so is it a virtual machine e.g AWS?
> >>Across a local or remote network?
> >
> >
> >   I gave the command in a terminal session after SSHing to the server
> > from the same network. It is not a virtual machine.
> >
> >
> > Lsaurenz said:
> >
> >
> >>You probably have a corrupted database.
> >>You should get that fixed first, then you can upgrade.
> >>Maybe you should hire a professional for that.
> >
> >
> > I suspect this is is correct, both that there is corruption in the table
> > and that I need a professional to help. If someone here is available,
> > I'm interested.
>
> Given that this involves your largest table I would confirm that the
> signal 9 kill was not coming from the system OOM killer. Take a look at
> the system logs to see what they show over the same time period.
>
> >
> >
> > Andreas said:
> >
> >
> >  >which exact minor version please?
> >
> >
> > PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> > 20120313 (Red Hat 4.4.7-23), 64-bit
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Trouble Upgrading Postgres

2018-11-05 Thread Charles Martin
Tom said:

>That's kind of odd: a COPY shouldn't really consume very much working
>memory.  I suspect that much of the process's apparent VM consumption may
>be shared buffers ... what have you got shared_buffers set to on the old
>server?  If it's more than half a GB or so, maybe reducing it would help.

This is not a VM, but hardware.

Adrian said:

>In addition to the other suggestions, what is the exact pg_dump command
>you are using?

The last time:

[postgres@mandj tmp]$ pg_dump martinandjones >
/mnt/4tbB/pgbackup/2018-11-02/mandj.bak



On Sun, Nov 4, 2018 at 8:16 PM Adrian Klaver 
wrote:

> On 11/4/18 2:55 PM, Charles Martin wrote:
> > Yep, you called it:
> >
> > Nov  2 20:30:45 localhost kernel: Out of memory: Kill process 30438
> > (postmaster) score 709 or sacrifice child
> > Nov  2 20:30:45 localhost kernel: Killed process 30438, UID 26,
> > (postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB
> >
> > So it's running out of memory when trying to dump this table. The "old"
> > server has 4GB of ram, the "new" server 20GB.
> >
>
> In addition to the other suggestions, what is the exact pg_dump command
> you are using?
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Trouble Upgrading Postgres

2018-11-05 Thread Charles Martin
Adrian said:

>Tom was referring to this from your previous post:
>(postmaster) total-vm:3068900kB,
>where vm(VM) is Virtual Memory:
>
https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
>So what is your shared_buffers:
>https://www.postgresql.org/docs/10/static/runtime-config-resource.html
>set to?

Ok, thanks for explaining this. Here is the current value:
"shared_buffers" "131072" "8kB"



On Mon, Nov 5, 2018 at 9:06 AM Adrian Klaver 
wrote:

> On 11/5/18 5:56 AM, Charles Martin wrote:
> > Tom said:
> >
> >  >That's kind of odd: a COPY shouldn't really consume very much working
> >  >memory.  I suspect that much of the process's apparent VM consumption
> may
> >  >be shared buffers ... what have you got shared_buffers set to on the
> old
> >  >server?  If it's more than half a GB or so, maybe reducing it would
> help.
> >
> > This is not a VM, but hardware.
>
> Tom was referring to this from your previous post:
>
> (postmaster) total-vm:3068900kB,
>
> where vm(VM) is Virtual Memory:
>
>
> https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
>
> So what is your shared_buffers:
>
> https://www.postgresql.org/docs/10/static/runtime-config-resource.html
>
> set to?
>
> >
> > Adrian said:
> >
> >  >In addition to the other suggestions, what is the exact pg_dump command
> >  >you are using?
> >
> > The last time:
> >
> > [postgres@mandj tmp]$ pg_dump martinandjones >
> > /mnt/4tbB/pgbackup/2018-11-02/mandj.bak
> >
> >
> >
> >
> > On Sun, Nov 4, 2018 at 8:16 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 11/4/18 2:55 PM, Charles Martin wrote:
> >  > Yep, you called it:
> >  >
> >  > Nov  2 20:30:45 localhost kernel: Out of memory: Kill process
> 30438
> >  > (postmaster) score 709 or sacrifice child
> >  > Nov  2 20:30:45 localhost kernel: Killed process 30438, UID 26,
> >  > (postmaster) total-vm:3068900kB, anon-rss:1695392kB,
> > file-rss:1074692kB
> >  >
> >  > So it's running out of memory when trying to dump this table. The
> > "old"
> >  > server has 4GB of ram, the "new" server 20GB.
> >  >
> >
> > In addition to the other suggestions, what is the exact pg_dump
> command
> > you are using?
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Trouble Upgrading Postgres

2018-11-05 Thread Charles Martin
Daniel said:

>It's plausible that, with only 4GB of RAM, the table that fails
to dump has some very large rows that can't be allocated, especially
since both the backend and pg_dump need to have it simultaneously
in memory.
> >pg_dump: The command was: COPY public.docfile (docfile_pkey,
> >docfileoriginalname, ordernumber, versionnum, docfilecontents,
>> docfilepath, docfileextension, enddatetime, endby, editnum, insby,
>> insdatetime, modby, moddatetime, active, doc_fkey) TO stdout;
>The "docfilecontents" column suggests that it might contain
large contents. If its type is bytea, it's going to be expanded
to twice its size to build the hex representation.
>You may get a sense on how big is the biggest row expressed
as text with this query:
 SELECT max(length(contents.*::text)) FROM public.docfile;
>If it's big enough that it might cause the OOM issue,
try to run pg_dump remotely through an SSH tunnel [1], which you
can already do in terms of network permissions since you log in with
SSH, so pg_dump itself does not use any memory on the server.
>Also, if the machine doesn't have swap space, it might be
that just adding a few GB's of swap would make the operation
succeed.

This appears to be the case. I ran:

SELECT max(length(docfilecontents::text)) FROM docfile;
 and after a very long time, got:
ERROR: invalid memory alloc request size 1636085512 SQL state: XX000

Adrian said:
>> Ok, thanks for explaining this. Here is the current value:
>> "shared_buffers""131072""8kB"
>It should be a single value something like this for the default:
>shared_buffers = 128MB

The results I pasted were from:

SELECT * FROM pg_settings

Maybe I didn't get it the right way.

The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
of RAM, so I've created a swap file of 1GB.

Tom said:
>> Ok, thanks for explaining this. Here is the current value:
>> "shared_buffers" "131072" "8kB"
>Well, that's 1GB, which might be ambitious inside a VM with a hard
restriction to 4GB total RAM.  Postgres can get by with a *lot* less.
>Try knocking it down to a tenth of that and see if it makes a difference

I think I also based this on a rule-of-thumb that it should be no more than
25% of RAM. Should I test pg_dump with the added VM before reducing
shared_buffers?

On Mon, Nov 5, 2018 at 10:13 AM Adrian Klaver 
wrote:

> On 11/5/18 7:04 AM, Charles Martin wrote:
> > Adrian said:
> >
> >  >Tom was referring to this from your previous post:
> >  >(postmaster) total-vm:3068900kB,
> >  >where vm(VM) is Virtual Memory:
> >  >
> https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
> >  >So what is your shared_buffers:
> >  >https://www.postgresql.org/docs/10/static/runtime-config-resource.html
> >  >set to?
> >
> > Ok, thanks for explaining this. Here is the current value:
> > "shared_buffers""131072""8kB"
> >
>
> It should be a single value something like this for the default:
>
> shared_buffers = 128MB
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Trouble Upgrading Postgres

2018-11-05 Thread Charles Martin
Ron said:

>We were expecting the output of the "SHOW SHARED_BUFFERS;" command.

Ok, the result from that command is:

1GB

>The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
of RAM, so I've created a swap file of 1GB.

>Times have changed...  (I'd have made a 4GB swap file.)

I have a spare drive that is 230G, so I have enough space. I suppose I can
set swapoff, delete the swapfile, create a new 4G one, and set swapon. Or
is there a better way?
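For reference, the usual sequence on Linux looks roughly like the following; the path /swapfile and the sizes are only illustrative, and whether fallocate is acceptable for a swap file depends on the filesystem:

# Illustrative only: replace the existing 1G swap file with a 4G one (run as root).
swapoff /swapfile
rm /swapfile
fallocate -l 4G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile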

On Mon, Nov 5, 2018 at 11:56 AM Ron  wrote:

> On 11/05/2018 10:50 AM, Charles Martin wrote:
> [snip]
>
> The results I pasted were from:
>
> SELECT * FROM pg_settings
>
> Maybe I didn't get it the right way.
>
>
> We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
>
>
> The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
> of RAM, so I've created a swap file of 1GB.
>
>
> Times have changed...  (I'd have made a 4GB swap file.)
>
>
> --
> Angular momentum makes the world go 'round.
>


Re: Trouble Upgrading Postgres

2018-11-05 Thread Charles Martin
The first query timed out, but the second one returned this:

0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"

Not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them. If I can find them.

On Mon, Nov 5, 2018 at 12:54 PM Daniel Verite 
wrote:

> Charles Martin wrote:
>
> > SELECT max(length(docfilecontents::text)) FROM docfile;
> > and after a very long time, got:
> > ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
>
> It would mean that at least one row has a "docfilecontents"
> close to 0.5GB in size. Or that the size fields in certain rows
> are corrupted, although that's less plausible if you have
> no reason to suspect hardware errors.
>
> Does the following query work:
>
>  SELECT max(octet_length(docfilecontents)) FROM docfile;
>
> or maybe a histogram by size in hundred of megabytes:
>
>  SELECT octet_length(docfilecontents)/(1024*1024*100),
>   count(*)
>   FROM docfile
>   GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> Note that the error message above does not say that there's not enough
> free memory, it says that it won't even try to allocate that much, because
> 1636085512 is over the "varlena limit" of 1GB.
> AFAICS I'm afraid that this table as it is now cannot be exported
> by pg_dump, even if you had enough free memory, because any individual
> row in COPY cannot exceed 1GB in text format.
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: Trouble Upgrading Postgres

2018-11-06 Thread Charles Martin
Thanks, Daniel.

Using your idea, I found the records over 400MB, and deleted them in the
application. The largest two were in inactive matters, and the third is
still available elsewhere if needed. I'll try pg_dump again after work
hours and see if it works now. Hopefully it will, now that I've removed the
oversized records.

Adrian, I'll try changing shared_buffers the next time I can restart
postgres, at least if deleting the largest records and adding VM hasn't
worked.
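For anyone following the thread, a query in this spirit, derived from Daniel's histogram, is one way to list the offending rows without pulling the huge column itself (untested sketch; the column names are the ones shown in the COPY command earlier):

-- List rows whose bytea payload exceeds 400 MB, largest first.
SELECT docfile_pkey,
       docfileoriginalname,
       pg_size_pretty(octet_length(docfilecontents)::bigint) AS payload_size
FROM docfile
WHERE octet_length(docfilecontents) > 1024*1024*400
ORDER BY octet_length(docfilecontents) DESC;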


On Tue, Nov 6, 2018 at 6:47 AM Daniel Verite 
wrote:

> Charles Martin wrote:
>
> >  but the second one returned this:
> >
> > 0 "623140"
> > 1 "53"
> > 2 "12"
> > 3 "10"
> > 4 "1"
> > 5 "1"
> > 7 "1"
> > [null] "162"
> >
> > Not quite sure what that means, but if there is just a small number of
> > overly-large records, I might be able to delete them. If I can find them.
>
> The query was:
>
>   SELECT octet_length(docfilecontents)/(1024*1024*100),
>count(*)
>FROM docfile
>GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> The results above show that there is one document weighing over 700 MB
> (the first column being the multiple of 100MB), one between 500 and
> 600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
> so on.
>
> The hex expansion performed by COPY must allocate twice that size,
> plus the rest of the row, and if that resulting size is above 1GB, it
> will error out with the message you mentioned upthread:
> ERROR: invalid memory alloc request size .
> So there's no way it can deal with the contents over 500MB, and the
> ones just under that limit may also be problematic.
>
> A quick and dirty way of getting rid of these contents would be to
> nullify them. For instance, nullify anything over 400MB:
>
> UPDATE docfile SET docfilecontents=NULL
>   WHERE octet_length(docfilecontents) > 1024*1024*400;
>
> Or a cleaner solution would be to delete them with the application if
> that's possible. You may turn the above query into a SELECT that
> retrieve the fields of interest (avoid SELECT * because of the huge
> column).
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: Trouble Upgrading Postgres

2018-11-06 Thread Charles Martin
The column is a bytea.

I'm working out a way to limit the size on the front end.

Chuck
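A database-side guard can also back up the front-end check; a minimal sketch, with an invented constraint name and the 400 MB threshold taken from the discussion above:

-- Reject new rows whose bytea payload exceeds 400 MB, keeping text-format
-- COPY output well under the 1 GB varlena limit. Add NOT VALID if existing
-- rows should not be checked when the constraint is created.
ALTER TABLE docfile
  ADD CONSTRAINT docfilecontents_size_chk
  CHECK (octet_length(docfilecontents) <= 1024*1024*400);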


On Tue, Nov 6, 2018 at 1:44 PM  wrote:

> As someone pointed out, there is a limit with bytea (Blob's).
> To test if it is bytea, use a COPY with a select statement :
>
> COPY ( select A, B,C ,D ...etc FROM table ) TO 'outfile' ;
> Leaving out the bytea column.
> If this works, then one of the bytea columns is way too big.
>
> Ben Duncan - Business Network Solutions, Inc. 336 Elton Road Jackson MS,
> 39212
> "Never attribute to malice, that which can be adequately explained by
> stupidity"
> - Hanlon's Razor
>
>
>  Original Message 
> Subject: Re: Trouble Upgrading Postgres
> From: Tom Lane 
> Date: Tue, November 06, 2018 11:53 am
> To: Adrian Klaver 
> Cc: Daniel Verite , Charles Martin
> , pgsql-general
> 
>
> Adrian Klaver  writes:
> > On 11/6/18 8:27 AM, Daniel Verite wrote:
> >> Adrian Klaver wrote:
> >>> To me that looks like a bug, putting data into a record you cannot get
> out.
>
> >> Strictly speaking, it could probably get out with COPY in binary format,
> >> but pg_dump doesn't use that.
>
> Another possibility, seeing that the problematic data is bytea, is that
> it might depend on whether you use hex or escape bytea_output format.
> Hex format is reliably twice the size of the stored data, but escape
> format could be anywhere from the same size as the stored data to four
> times the size, depending on the contents. pg_dump is agnostic about this
> and will just dump using the prevailing bytea_output setting, so you might
> be able to get it to work by changing that setting.
>
> regards, tom lane
>
>
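As a footnote to Tom's suggestion, switching the output format for the role pg_dump connects as (assumed here to be postgres) could look like this; whether escape format actually fits under the 1 GB limit depends entirely on the data:

-- bytea_output defaults to 'hex' (always 2x the stored size); 'escape' ranges
-- from 1x to 4x depending on the byte values, so it may or may not help here.
ALTER ROLE postgres SET bytea_output = 'escape';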


simple division

2018-12-04 Thread Martin Mueller
I have asked this question before and apologize for not remembering it.  How do 
you do simple division in postgres and get 10/4 with decimals?

This involves cast and numeric in odd ways that are not well explained in the 
documentation. For instance, you’d expect an example in the Mathematical 
Functions. But there isn’t.

The documentation of string functions is exemplary; that of mathematical
functions less so. Remember that it may be used by folks like me whose math
is shaky. The MySQL documentation is better on this simple operation.


-
Martin Mueller
Professor emeritus of English and Classics
Northwestern University




Re: simple division

2018-12-04 Thread Martin Mueller
I didn't formulate my question properly, because the query went like
 "select alldefects /wordcount" 
where alldefects and wordcount are integers.   But none of the different ways 
of putting the double colon seemed to work. 

The Postgres notation of this simple procedure is very unintuitive. I haven't 
been able to remember several times, and most people think of me as a person 
with a reasonably good memory. There is no obvious place in the documentation 
to look this up.
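For the record, the cast goes on one of the operands, so with the column names from the question it would look roughly like this (the table name report is made up):

-- Cast before dividing so the division is done in numeric, not integer, arithmetic.
SELECT alldefects::numeric / wordcount            AS defect_rate,
       round(alldefects::numeric / wordcount, 4)  AS defect_rate_rounded
FROM report;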


On 12/4/18, 2:45 PM, "David G. Johnston"  wrote:

On Tue, Dec 4, 2018 at 1:38 PM Joshua D. Drake  
wrote:
> I may be misunderstanding the question but:
Indeed...
> select cast(x/y as numeric(10,4));

Your answer is 2.0 instead of the correct 2.5 - you need to cast
before the division, not after.

David J.




Re: simple division

2018-12-04 Thread Martin Mueller
It worked, and I must have done something wrong. I'm probably not the only 
person who would find something like the following helpful:


division (integer division truncates the result)   10/3                       3
division (with decimal results)                     10/3::numeric              3.3333333333333333
division (rounded)                                  round(10/3::numeric, 2)    3.33

From an end user's perspective, the question "how do I divide two integers and
limit the number of decimals" is surely a common one. And if you look it up
somewhere, division is probably the search word. Now you could argue that the
user should already know about formatting and rounding. But some don't.

If you wanted to look up a rare wordform in a famous early 20th century 
dictionary of Old English, you had to know the root form of the word. If you 
already knew the root form, there is a good chance that you didn't need to look 
it up in the first place. If you didn't know the root form, the dictionary was 
no use. 

In this case, one-stop shopping for the three most common problems of simple
division makes life easier for users.







On 12/4/18, 3:06 PM, "David G. Johnston"  wrote:

On Tue, Dec 4, 2018 at 1:57 PM Martin Mueller
 wrote:
>
> I didn't formulate my question properly, because the query went like
>  "select alldefects /wordcount"
> where alldefects and wordcount are integers.   But none of the different 
ways of putting the double colon seemed to work.

IDK...the first thing that came to mind was to just stick it at the
end of the expression:

select x/y::numeric from (values (10,4)) vals (x,y)

And it worked...

If you want to propose a concrete documentation patch more power to
you but this doesn't come up enough to think that what we have is
materially deficient.  I'm sorry you are having trouble with it but
the lists do provide quick and customized answers for situations like
this.

David J.




Re: simple division

2018-12-05 Thread Martin Mueller
I take the point that two decades of backward compatibility should and will 
win. That said,  it's an easy enough thing to right the balance for novices and 
put in a really obvious place in the documentation what you should do if you 
want to divide two integers and get the results with the number of decimals of 
your choice. I made one suggestion how this could be done. A better way might 
be a short paragraph like

A note on division:  if you divide two constants or variables defined as 
integers, the default will be an integer. If you want the result with decimals, 
add "::numeric".  If you want to limit the decimals, use the round() function:
Select 10/3                        3
Select 10/3::numeric               3.3333333333333333
Select round(10/3::numeric, 3)     3.333
For more detail see the sections on ...





On 12/5/18, 9:23 AM, "Tom Lane"  wrote:

Geoff Winkless  writes:
> IMO it's fundamentally broken that SQL doesn't cast the result of a
> divide into a numeric value - the potential for unexpected errors
> creeping into calculations is huge; however that's the standard and
> no-one's going to change it now.
> Having said that it's worth noting that those in the Other Place think
> that it's broken enough to go against the standard (they have a DIV b
> for integer divide and a/b for float).

Well, this isn't really blame-able on the SQL standard; it's a
Postgres-ism.  What the spec says (in SQL99, 6.26 ) is

 1) If the declared type of both operands of a dyadic arithmetic
operator is exact numeric, then the declared type of the
result is exact numeric, with precision and scale determined
as follows:

a) Let S1 and S2 be the scale of the first and second operands
  respectively.

b) The precision of the result of addition and subtraction is
  implementation-defined, and the scale is the maximum of S1
  and S2.

c) The precision of the result of multiplication is
  implementation-defined, and the scale is S1 + S2.

d) The precision and scale of the result of division is
  implementation-defined.

 2) If the declared type of either operand of a dyadic arithmetic
operator is approximate numeric, then the declared type of the
result is approximate numeric. The precision of the result is
implementation-defined.

Postgres' integer types map onto the standard as exact numerics with
scale 0.  (The precision aspect is a bit squishy, since their maximum
values aren't powers of 10, but let's disregard that.)  Postgres'
integer division operator meets the spec with the stipulation that
the "implementation-defined" scale of the result is 0.  Other SQL
implementations can and do define that differently --- if they even
have an "integer" data type, which some do not.

Anyway, the bottom line here is that we're balancing surprise factor
for novices against twenty-plus years of backwards compatibility,
and the latter is going to win.

regards, tom lane





Code for getting particular day of week number from month

2018-12-11 Thread Mike Martin
Hi
For a particular sequence I needed to do (schedule 2nd monday in month for
coming year) I created the following query

select to_char(min(date::date) + interval '1 week','DD/MM/YYYY')  date
--gets first date for day of month (monday in this case) then adds week and
finally formats it to desired date string

from generate_series(
  '2018-12-01'::date,
--start date
  '2020-12-01'::date,
--end date
  '1 day'::interval
) date

where extract(dow from date) =1
--sets day of week
GROUP BY (extract(year from date)*100)+extract(month from date)
--groups by month and year
ORDER BY cast(min(date) as date)
--sets order back to date

I couldn't see anything on google so thought I'd share it

Mike


How to compare dates from two tables with blanks values

2018-12-19 Thread Mike Martin
I have a situation where I need to update dates in a primary table from
regular imports of data, eg: this is the base select query

select d.row_id,
fname||lname,'joineddate',d.joineddate,'joineddate',s.joineddate,0 as bool1
from import s join  members d on d.contact_id=s.contact_id where

cast(nullif(d.joineddate,NULL) as timestamp) !=
cast(nullif(s.joineddate,'') as timestamp)

This gives zero records; however, I can't seem to get a query that works.
For non-date fields I just use
Coalesce(fieldprime,'')!=coalesce(fieldiimport,'') which works fine but
chokes on dates where there is a blank value

thanks in advance


Change from 9.6 to 11?

2018-12-20 Thread Chuck Martin
I hope someone here can see something that eludes me. I've recently moved a
database from PostgreSQL 9.6 to 11, and there are a few oddities. The
following select statement returns zero rows when it should return one.
This is one of a small number of records that exist, but are not returned
by the query. When I include the main table, event, and any one of the
associated tables, the record is returned, but no record is returned with
the entire statement. All the primary keys (_pkey) and foreign keys (_fkey)
are integers. The field I suspect as the possible culprit, event.InsBy, is
a character column I'm converting to do a lookup on a primary key
(integer): event.InsBy::int = usr.Usr_pkey. Maybe PG 11 doesn't recognize
the same syntax for cast as PG 9.6? Or maybe I'm overlooking something else
basic. Thanks for reading!

SELECT
event.Event_pkey,ombcase.CaseName,status.StatusID,eventtype.EventTypeID,event.datetime
AT TIME ZONE 'US/Eastern',event.endtime AT TIME ZONE
'US/Eastern',event.EventDeadline,eventtype.EventIsDeadline,eventtype.OnCalendar,event.EventConfirmed,event.Description,event.PrivateNote,primaryresp.PrimaryrespID,event.IsPrivate,event.Hours,event.EventDone,event.EventDoneDate,event.Importance,event.Urgency,event.InCourt,event.LeadTime,doc.DocID,doctype.DocTypeID,project.ProjectName,usr.UserID,eventtype.Hours,event.Active,event.EditNum,event.InsBy,event.InsDateTime,event.ModBy,event.ModDateTime,event.Case_fkey,event.Eventtype_fkey,event.Project_fkey,event.Doc_fkey,event.Primaryresp_fkey,project.Project_pkey,primaryresp.PrimaryrespHolidayGroup,primaryresp.Usr_pkey,backup.Usr_pkey,doctype.Doctype_pkey,primaryresp.Primaryrespbackup_fkey,ombcase.Status_fkey,event.Parentevent_fkey,event.EventPostRemand
FROM
event,ombcase,status,eventtype,primaryresp,doc,doctype,project,usr,backup
WHERE event.Case_fkey = Case_pkey
AND event.Eventtype_fkey = Eventtype_pkey
AND event.Project_fkey = Project_pkey
AND event.Primaryresp_fkey = primaryresp.Usr_pkey
AND event.Doc_fkey = Doc_pkey
AND Doctype_fkey = Doctype_pkey
AND usr.Backup_fkey = backup.Usr_pkey
AND ombcase.Status_fkey = status.Status_pkey
AND event.InsBy::int = usr.Usr_pkey
AND event.Event_pkey = 1060071
ORDER BY EventDone, DateTime DESC

Chuck Martin
Avondale Software


Re: How to compare dates from two tables with blanks values

2018-12-21 Thread Mike Martin
thanks for this. I did get it to work using coalesce and nullif on opposite
sides of the where condition, but the IS DISTINCT FROM certainly sounds a
better approach. Coming from SQLServer until last year , never came across
it before

Mike
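Applied to the query from the first message, the comparison would look roughly like this (untested; it assumes, as the original casts suggest, that members.joineddate is already a timestamp and import.joineddate is text that may be blank):

select d.row_id, fname||lname, d.joineddate, s.joineddate
from import s
join members d on d.contact_id = s.contact_id
-- IS DISTINCT FROM treats NULL as a comparable value, so a blank/NULL on one
-- side and a real date on the other still counts as a difference.
where d.joineddate IS DISTINCT FROM nullif(s.joineddate, '')::timestamp;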

On Wed, 19 Dec 2018 at 10:57, Arnaud L.  wrote:

> Le 19/12/2018 à 11:41, Mike Martin a écrit :
> > cast(nullif(d.joineddate,NULL) as timestamp) !=
> cast(nullif(s.joineddate,'') as timestamp)
> Try with
> d.joineddate IS DISTINCT FROM s.joineddate
>
> https://www.postgresql.org/docs/current/functions-comparison.html
>
> Cheers
> --
> Arnaud
>
>


Re: Change from 9.6 to 11?

2018-12-21 Thread Chuck Martin
On Thu, Dec 20, 2018 at 10:12 PM Adrian Klaver 
wrote:

> On 12/20/18 5:51 PM, Chuck Martin wrote:
>
> Please reply to list also.
> Ccing list.
>
> >
> >
> > On Thu, Dec 20, 2018 at 7:56 PM Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 12/20/18 12:35 PM, Chuck Martin wrote:
> >  > I hope someone here can see something that eludes me. I've
> recently
> >  > moved a database from PostgreSQL 9.6 to 11, and there are a few
> >  > oddities. The following select statement returns zero rows when it
> >  > should return one. This is one of a small number of records that
> > exist,
> >  > but are not returned by the query. When I include the main table,
> > event,
> >  > and any one of the associated tables, the record is returned, but
> no
> >  > record is returned with the entire statement. All the primary keys
> >  > (_pkey) and foreign keys (_fkey) are integers. The field I
> > suspect as
> >  > the possible culprit, event.InsBy, is a character column I'm
> > converting
> >  > to do a lookup on a primary key (integer): event.InsBy::int =
> >  > usr.Usr_pkey. Maybe PG 11 doesn't recognize the same syntax for
> > cast as
> >  > PG 9.6? Or maybe I'm overlooking something else basic. Thanks for
> > reading!
> >
> > So if in the WHERE you leave out the:
> >
> > AND event.InsBy::int = usr.Usr_pkey
> >
> > and in the SELECT you add:
> >
> > event.InsBy, event.InsBy::int AS InsByInt
> >
> > what do you see?
> >
> >
> > I get 91 copies of the record. One for each record in the usr table.
>
> But do the event.InsBy, event.InsBy::int AS InsByInt values match each
> other?
>
> Just had a thought, what if you join just the event and usr tables on:
>
> event.InsBy::int = usr.Usr_pkey
>
> Trying to determine whether your suspected culprit really is the culprit.


Thanks, Adrian. This led me to the problem. The data in InsBy was invalid.
That is to say, a join wasn’t possible because no record exists with that
primary key. Not sure how that occurred, but now I know why. Had I
anticipated this might happen, I would have used an outer join.

I appreciate your help solving this minor, but annoying, issue.
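For anyone hitting the same thing, a query along these lines will surface event rows whose InsBy points at no usr record (untested sketch; it assumes, as in the query above, that every InsBy value at least casts cleanly to an integer):

-- Event rows whose InsBy has no matching usr primary key.
SELECT event.Event_pkey, event.InsBy
FROM event
LEFT JOIN usr ON event.InsBy::int = usr.Usr_pkey
WHERE usr.Usr_pkey IS NULL;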

>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
> --
Chuck Martin
Avondale Software


getting pg_basebackup to use remote destination

2018-12-29 Thread Chuck Martin
I thought I knew how to do this, but I apparently don't. I have to set up a
new server as a standby for a PG 11.1 server. The main server has a lot
more resources than the standby. What I want to do is run pg_basebackup on
the main server with the output going to the data directory on the new
server. But when I give this command:

pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s


it instead writes to my root drive which doesn't have the space, so it
fails and deletes the partial backup.


While I think I could figure out how to backup to a local directory then
rsync it to the new server, I'd like to avoid that due to the 750GB size.


Is there a way to tell pg_basebackup to use a remote destination for
output? Or do I have to run pg_basebackup on the standby server?


And while I'm asking, has anyone yet written a guide/tutorial for PG 11
replication? Everything I find online is very old.

Chuck Martin
Avondale Software


Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Chuck Martin
On Sun, Dec 30, 2018 at 11:20 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 29/12/2018 20:04, Chuck Martin wrote:
> > I thought I knew how to do this, but I apparently don't. I have to set
> > up a new server as a standby for a PG 11.1 server. The main server has a
> > lot more resources than the standby. What I want to do is run
> > pg_basebackup on the main server with the output going to the data
> > directory on the new server. But when I give this command:
> >
> > pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s
> >
> >
> > it instead writes to my root drive which doesn't have the space, so it
> > fails and deletes the partial backup.
>
> What you might be thinking of is the "old" method of doing base backups
> before pg_basebackup:  Call pg_start_backup() and then do file system
> operations (tar, scp, whatever) to move the data files to where you want
> them.  This is mostly obsolete.  You should run pg_basebackup on the
> host where you want to set up your standby


Thanks. It’s been a while since I set up replication. Not to mention
several Postgres versions. I’ve started pg_basebackup from the standby. It
failed once due to an ssh error, but I reloaded sshd and started again. May
take a while. It's about 750 GB.

> .
>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
> --
Chuck Martin
Avondale Software


Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Chuck Martin
Maybe I need to rethink this and take Jeff's advice. I executed this:

pg_basebackup -h [main server's URL] -U postgres -P -v -X s -D
/mnt/dbraid/data

8 hours ago, and it is now still at 1%. Should it be that slow? The
database in question is about 750 GB, and both servers are on the same GB
ethernet network.

Chuck Martin
Avondale Software


On Sun, Dec 30, 2018 at 3:28 PM Jeff Janes  wrote:

> On Sat, Dec 29, 2018 at 2:05 PM Chuck Martin 
> wrote:
>
>> I thought I knew how to do this, but I apparently don't. I have to set up
>> a new server as a standby for a PG 11.1 server. The main server has a lot
>> more resources than the standby. What I want to do is run pg_basebackup on
>> the main server with the output going to the data directory on the new
>> server.
>>
>
> pg_basebackup consumes few resources on the standby anyway in the mode you
> are running it, other than network and disk.  And those are inevitable
> given your end goal, so if you could do what you want, I think it still
> wouldn't do what you want.
>
> If you really want to spare the network, you can run compression on the
> server side then decompress on the standby.  Currently you can't compress
> on the server when invoking it on the standby, so:
>
> pg_basebackup -D - -Ft -X none |pxz | ssh 10.0.1.16 "tar -xJf - -C
> /somewhere/data_test"
>
> Unfortunately you can't use this along with -X stream or -X fetch.
>
> Really I would probably compress to a file and then use scp/rsync, rather
> the streaming into ssh.  That way if ssh gets interrupted, you don't lose
> all the work.
>
> Cheers,
>
> Jeff
>
>>


Re: getting pg_basebackup to use remote destination

2018-12-31 Thread Chuck Martin
On Mon, Dec 31, 2018 at 12:05 PM Jeff Janes  wrote:

> On Sun, Dec 30, 2018 at 6:17 PM Chuck Martin 
> wrote:
>
>> Maybe I need to rethink ths and take Jeff's advice. I executed this:
>>
>> pg_basebackup -h [main server's URL] -U postgres -P -v -X s -D
>> /mnt/dbraid/data
>>
>> 8 hours ago, and it is now still at 1%. Should it be that slow? The
>> database in question is about 750 GB, and both servers are on the same GB
>> ethernet network.
>>
>
> Over gigabit ethernet, it should not be that slow.  Unless the network is
> saturated with other traffic or something.  Might be time to call in the
> network engineers.  Can you transfer static files at high speeds between
> those two hosts using scp or rsync?  (Or use some other technique to take
> PostgreSQL out of the loop and see if your network is performing as it
> should)
>
> Are you seeing transfers at a constant slow rate, or are their long
> freezes or something?  Maybe the initial checkpoint was extremely slow?
> Unfortunately -P option (even with -v) doesn't make this easy to figure
> out.  So alas it's back to old school stopwatch and a pen and paper (or
> spreadsheet).
>
> Cheers,
>
> Jeff
>
Using iperf, the transfer speed between the two servers (from the main to
the standby) was 938 Mbits/sec. If I understand the units correctly, it is
close to what it can be.

Your earlier suggestion was to do the pg_basebackup locally and rsync it
over. Maybe that would be faster. At this point, it is saying it is 6%
through, over 24 hours after being started.

Chuck Martin
Avondale Software


Query help

2019-01-01 Thread Chuck Martin
Sorry if this is too basic a question for this list, but I don't fully get
how to use aggregates (sum()) and group-by together. I'm trying to get a
list of transactions where the total for a given account exceeds a given
number. I'm not sure an example is needed, but if so, consider this
simplified data:

account
accountid   name
1           bill
2           james
3           sarah
4           carl

transaction
id   amount   accountid   name
1    50       1           bill
2    25       2           james
3    35       4           carl
4    75       1           bill
5    25       1           bill
6    50       3           sarah

results wanted - all transactions where account total >= 50

id   amount   accountid   name
1    50       1           bill
4    75       1           bill
5    25       1           bill
6    50       3           sarah

I've tried to understand how to use GROUP BY and HAVING, but the penny
won't drop. I keep getting errors saying that all columns in the SELECT
have to also be in the GROUP BY, but nothing I've done seems to produce the
correct results. I think because the GROUP BY contains multiple columns, so
each row is treated as a group. It also is difficult to parse out since in
the real world, many more tables and columns are involved.
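For what it's worth, one common shape for this kind of query is to keep the aggregation in a subquery and leave the outer SELECT row-level, so the GROUP BY only ever mentions accountid (untested sketch against the simplified tables above):

-- All transactions belonging to accounts whose total is at least 50.
SELECT t.id, t.amount, t.accountid, t.name
FROM transaction t
WHERE t.accountid IN (
    SELECT accountid
    FROM transaction
    GROUP BY accountid
    HAVING sum(amount) >= 50
);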

Chuck Martin
Avondale Software


Re: getting pg_basebackup to use remote destination

2019-01-03 Thread Chuck Martin
On Thu, Jan 3, 2019 at 3:46 PM Stephen Frost  wrote:

> Greetings Chuck,
>
> * Chuck Martin (clmar...@theombudsman.com) wrote:
> > Using iperf, the transfer speed between the two servers (from the main to
> > the standby) was 938 Mbits/sec. If I understand the units correctly, it
> is
> > close to what it can be.
>
> That does look like the rate it should be going at, but it should only
> take about 2 hours to copy 750GB at that rate.


That’s what I was expecting.

>
> How much WAL does this system generate though...?  If you're generating
> a very large amount then it's possible the WAL streaming is actually
> clogging up the network and causing the rate of copy on the data files
> to be quite slow.  You'd have to be generating quite a bit of WAL
> though.


It shouldn’t be excessive, but I’ll look closely at that.
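One rough way to check, assuming PostgreSQL 10 or later on the primary: note the current WAL position, wait a known interval, and diff the two readings.

-- Take a reading now:
SELECT pg_current_wal_lsn();
-- ...wait, say, ten minutes, then substitute the earlier value
-- ('1A/2B000000' below is just a placeholder):
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '1A/2B000000'::pg_lsn));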

>
>
> > Your earlier suggestion was to do the pg_basebackup locally and rsync it
> > over. Maybe that would be faster. At this point, it is saying it is 6%
> > through, over 24 hours after being started.
>
> For building out a replica, I'd tend to use my backups anyway instead of
> using pg_basebackup.  Provided you have good backups and reasonable WAL
> retention, restoring a backup and then letting it replay WAL from the
> archive until it can catch up with the primary works very well.  If you
> have a very high rate of WAL then you might consider taking a full
> backup and then taking an incremental backup (which is much faster, and
> reduces the amount of WAL required to be only that needed for the length
> of time that the incremental backup is started until the replica has
> caught up to WAL that the primary has).
>
> There's a few different backup tools out there which can do parallel
> backup and in-transit compression, which loads up the primary's CPUs
> with process doing compression but should reduce the overall time if the
> bottleneck is the network.


I’ll check out some solutions this weekend.

I appreciate the tips.

Chuck

>
>
> Thanks!
>
> Stephen
>
-- 
Chuck Martin
Avondale Software


Array_agg and dimensions in Array

2019-01-13 Thread Mike Martin
I have come across a problem which I cant seem to solve in a nice way
Basically I have a (small) table of tags
What I need to do is combine two concatenated fields with a literal value as
an array element.

First thought was using array_agg with a pre-created array as

select array_agg(ARRAY['-metadata',optname||'='||optvalue]) metaopt
from encodeopts
where alias is not null and opttype in ('tag','tagn')
group by transref,fileid

However this results in a multi-dimensional array, rather than a single
dimensioned one, which makes it impossible to join with the rest of an
array created elsewhere in the query

This works, but is very cludgy

select ARRAY['-map_metadata','-1']||array_agg(metaopt) from
(select unnest(array_agg(ARRAY['-metadata',optname||'='||optvalue]))
metaopt from encodeopts
where  alias is not null and opttype in ('tag','tagn')
group by transref,fileid ) a

So does this

select
string_to_array(string_agg('-metadata'||'||'||optname||'='||optvalue,'||'),'||')
metaopt from encodeopts
where alias is not null and opttype in ('tag','tagn')
group by transref,fileid

but again cludgy

Any ideas appreciated

Mike


Query help

2019-01-26 Thread Chuck Martin
I'm having trouble formulating a query. This is a simplified version of the
tables:

ombcase

case_pkey integer, primary key
casename varchar
insdatetime timestamp w/o time zone
status_fkey integer, foreign key

status

status_pkey integer, primary key
statusid varchar

statuschange

statuschange_pkey integer, primary key
insdatetime timestamp w/o time zone
ombcase_fkey integer, foreign key
oldstatus_fkey integer, foreign key
newstatus_fkey integer, foreign key
active integer, not nullable

The idea should be obvious, but to explain, insdatetime is set when a new
record is created in any table. All records in ombcase have a foreign key
to status that can't be null. When status changes, a record is created in
statuschange recording the old and new status keys, and the time (etc).

The goal is to find records in ombcase that have not had a status change in
xx days. If the status has not changed, there will be no statuschange
record.

This query returns the age of each ombcase and the last statuschange
record, but only if there is a statuschange record:

--Finds the age and last status change for open cases, but not age of cases
with no status change

SELECT casename, age(ombcase.insdatetime) AS caseage,
       age(laststatuschange.created_at) AS statusage
FROM (SELECT case_fkey, MAX(insdatetime) AS created_at
      FROM statuschange
      GROUP BY case_fkey) AS laststatuschange
INNER JOIN ombcase
  ON laststatuschange.case_fkey = case_pkey
RIGHT JOIN status
  ON status_fkey = status_pkey
WHERE lower(statusid) NOT LIKE ('closed%')
  AND case_pkey <> 0


I want to use coalesce
(age(ombcase.insdatetime),age(statuschange.insdatetime) ) to return the
time that a case has been in a status—or without a status change. But first
I have to find the cases with no statuschange record. I was able to do
that, too, using this query:


--find cases in status too long

SELECT casename,
       coalesce(age(ombcase.insdatetime), age(statuschange.insdatetime))
FROM ombcase
LEFT JOIN statuschange
  ON case_fkey = case_pkey
LEFT JOIN status
  ON status_fkey = status_pkey
  AND lower(statusid) NOT LIKE ('closed%')
  AND coalesce(age(ombcase.insdatetime), age(statuschange.insdatetime)) > '2 months'


But this query will return all statuschange records for an ombcase record
that has multiple ones.


Any suggestions on how to combine the two ideas?

Chuck Martin
Avondale Software


Re: Query help

2019-01-26 Thread Charles Martin
On Sat, Jan 26, 2019 at 6:30 PM Ron  wrote:

> On 1/26/19 5:04 PM, Chuck Martin wrote:
>
> I'm having trouble formulating a query. This is a simplified version of
> the tables:
>
> ombcase
> 
> case_pkey integer, primary key
> casename varchar
> insdatetime timestamp w/o time zone
> status_fkey integer, foreign key
>
> status
> 
> status_pkey integer, primary key
> statusid varchar
>
> statuschange
> 
> statuschange_pkey integer, primary key
> insdatetime timestamp w/o time zone
> ombcase_fkey integer, foreign key
> oldstatus_fkey integer, foreign key
> newstatus_fkey integer, foreign key
> active integer, not nullable
>
> The idea should be obvious, but to explain, insdatetime is set when a new
> record is created in any table. All records in ombcase have a foreign key
> to status that can't be null. When status changes, a record is created in
> statuschange recording the old and new status keys, and the time (etc).
>
> The goal is to find records in ombcase that have not had a status change
> in xx days. If the status has not changed, there will be no statuschange
> record.
>
>
> Does statuschange.*ins*datetime record when an ombcase record was first
> inserted, or when the status_fkey associated with ombcase.case_pkey was
> updated?
>

No, it only creates a statuschange record when the status is first changed,
not when the ombcase record is created.

>
>
> And why not add upddatetime to ombcase?  That would solve all your
> problems.
>

I do record the time of the last update, but that could reflect a change of
any column (most I didn’t list).

>
>
> --
> Angular momentum makes the world go 'round.
>
-- 

Charles L. Martin
Martin Jones & Piemonte
BUSINESS email: serv...@martinandjones.us
Personal email: clmar...@ssappeals.com
Decatur Office:
123 N. McDonough St.
Decatur, GA 30030
404-373-3116
Fax 404-373-4110

Charlotte Office:
4601 Charlotte Park Drive, Suite 390
Charlotte, NC 28217
704-399-8890
Fax 888-490-1315


Re: Query help

2019-01-27 Thread Chuck Martin
Chuck Martin
Avondale Software


On Sun, Jan 27, 2019 at 2:55 PM Adrian Klaver 
wrote:

> On 1/26/19 3:04 PM, Chuck Martin wrote:
> [snip]
> Outline form:
>
> 1) If a record is in ombcase it has a status('in a status') by definition.
>
>  From query below you are not looking for just records in ombcase, but
> those that have a statusid other then 'closed%' in status table.
>
> 2) For the criteria in 1) you want to find the age of the last
> statuschange.
>
> To me that leads to something like:
>
> SELECT
> case_pkey
> FROM
> ombcase AS
> JOIN
> status
> ON
> ombcase.case_pkey = status.status_fkey
> LEFT JOIN
> statuschange
> ON  -- Or statuschange.ombcase_fkey. Not clear from above.
> statuschange.case_fkey = ombcase.status_pkey
> GROUP BY
> ombcase.pkey
> HAVING
> status.LOWER(statusid) NOT LIKE ('closed%')
> AND
> max(coalesce(statuschange.insdatetime, ombcase.insdatetime))
> < 'some date'
>
> Obviously not tested.
>

Thanks, Adrian. This got me a lot closer, but I'm puzzled by the number of
records returned. There are 3120 ombcase records with a statusid that is <>
'closed%':

SELECT count(ombcase.case_pkey)
FROM ombcase, status
WHERE ombcase.status_fkey = status.status_pkey
  AND lower(status.statusid) NOT LIKE ('closed%')


But 3378 are returned by:

SELECT ombcase.case_pkey, ombcase.casename,
       COALESCE(AGE(statuschange.insdatetime), AGE(ombcase.insdatetime)) AS age_in_status
FROM ombcase
INNER JOIN status
  ON ombcase.status_fkey = status.status_pkey
LEFT JOIN statuschange
  ON statuschange.case_fkey = ombcase.case_pkey
GROUP BY ombcase.case_pkey, status.statusid, statuschange.insdatetime,
         ombcase.insdatetime
HAVING LOWER(status.statusid) NOT LIKE ('closed%')
  AND ombcase.case_pkey <> 0
  AND MAX(COALESCE(AGE(statuschange.insdatetime), AGE(ombcase.insdatetime))) > '2 months'
ORDER BY age_in_status DESC

I don't know where the extra 258 records came from, and I think I need to
keep working on it until the query returns 3120 records.
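A likely source of the extra rows is the GROUP BY on statuschange.insdatetime: a case with several status changes produces one output row per change. A sketch that groups only per case, following Adrian's earlier outline (untested):

SELECT ombcase.case_pkey, ombcase.casename
FROM ombcase
JOIN status ON ombcase.status_fkey = status.status_pkey
LEFT JOIN statuschange ON statuschange.case_fkey = ombcase.case_pkey
WHERE LOWER(status.statusid) NOT LIKE ('closed%')
  AND ombcase.case_pkey <> 0
GROUP BY ombcase.case_pkey, ombcase.casename
-- most recent of the case's own insdatetime and any status change, older than two months
HAVING MAX(COALESCE(statuschange.insdatetime, ombcase.insdatetime)) < now() - interval '2 months';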


Re: Query help

2019-01-27 Thread Chuck Martin
On Sun, Jan 27, 2019 at 8:07 AM Peter J. Holzer  wrote:

> On 2019-01-26 18:04:23 -0500, Chuck Martin wrote:
> [snip]
> > The idea should be obvious, but to explain, insdatetime is set when a new
> > record is created in any table. All records in ombcase have a foreign
> key to
> > status that can't be null. When status changes, a record is created in
> > statuschange recording the old and new status keys, and the time (etc).
> >
> > The goal is to find records in ombcase that have not had a status change
> in xx
> > days. If the status has not changed, there will be no statuschange
> record.
>
> The easiest way is to use set operations:
>
> select case_pkey from ombcase;
> gives you all the ombcase ids.
>
> select ombcase_fkey from statuschange where insdatetime >= now()::date -
> xx;
> gives you all ombcase ids which had a status change in the last xx days.
>
> Therefore,
> select case_pkey from ombcase
> except
> select ombcase_fkey from statuschange where insdatetime >= now()::date -
> xx;
> gives you all ombcase ids which did /not/ have a status change in the
> last xx days.
>

I was not familiar with set operations, but studied up a bit and thought I
was getting there. Not quite, though. I have two queries that individually
return 1) all ombcase records with no statuschange record, and 2) the
newest statuschange record for each case that has a statuschange record.
But just putting UNION between then doesn't work. Here are my queries:

--First, find all open cases with no statuschange record
SELECT

case_pkey,statuschange_pkey,case_fkey,ombcase.insdatetime,statuschange.insdatetime
FROM
ombcase
LEFT JOIN
statuschange
ON
  statuschange.case_fkey = case_pkey
  AND case_pkey <> 0
LEFT JOIN
status
 ON status_fkey = status_pkey
  WHERE lower(statusid) NOT LIKE  ('closed%')
  AND statuschange.statuschange_pkey IS NULL
UNION
  --Now find the last status change record for each case that has one
  SELECT DISTINCT ON (case_fkey)

case_pkey,statuschange_pkey,case_fkey,ombcase.insdatetime,statuschange.insdatetime
FROM
statuschange,ombcase,status
WHERE case_fkey = case_pkey
AND status_fkey = status_pkey
AND LOWER(statusid) NOT LIKE ('closed%')
ORDER BY case_fkey, statuschange.insdatetime DESC

If I run each part separately, I get the expected number of records.
When I combine them with UNION, I get "missing FROM-clause entry for
table "statuschange"".
So I'm very close here, and these two return the exact number of
records I'm expecting. So I just need to get them added together. Then
I expect I can put the whole thing in a WHERE clause with "AND
ombcase.case_pkey IN ([the combined results])"


>
> Another way would be to use a CTE
> (https://www.postgresql.org/docs/10/queries-with.html) to extract the
> last status change for each ombcase and then do a left join of ombcase
> to that CTE.
>
> hp
>
>
> --
>_  | Peter J. Holzer| we build much bigger, better disasters now
> |_|_) || because we have much more sophisticated
> | |   | h...@hjp.at | management tools.
> __/   | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>
>


Re: Query help

2019-01-27 Thread Chuck Martin
On Sun, Jan 27, 2019 at 5:27 PM Chuck Martin 
wrote:

> On Sun, Jan 27, 2019 at 8:07 AM Peter J. Holzer  wrote:
>
>> On 2019-01-26 18:04:23 -0500, Chuck Martin wrote:
>> [snip]
>> > The idea should be obvious, but to explain, insdatetime is set when a
>> new
>> > record is created in any table. All records in ombcase have a foreign
>> key to
>> > status that can't be null. When status changes, a record is created in
>> > statuschange recording the old and new status keys, and the time (etc).
>> >
>> > The goal is to find records in ombcase that have not had a status
>> change in xx
>> > days. If the status has not changed, there will be no statuschange
>> record.
>>
>> The easiest way is to use set operations:
>>
>> select case_pkey from ombcase;
>> gives you all the ombcase ids.
>>
>> select ombcase_fkey from statuschange where insdatetime >= now()::date -
>> xx;
>> gives you all ombcase ids which had a status change in the last xx days.
>>
>> Therefore,
>> select case_pkey from ombcase
>> except
>> select ombcase_fkey from statuschange where insdatetime >= now()::date -
>> xx;
>> gives you all ombcase ids which did /not/ have a status change in the
>> last xx days.
>>
>
> I was not familiar with set operations, but studied up a bit and thought I
> was getting there. Not quite, though. I have two queries that individually
> return 1) all ombcase records with no statuschange record, and 2) the
> newest statuschange record for each case that has a statuschange record.
> But just putting UNION between then doesn't work. Here are my queries:
>
> --First, find all open cases with no statuschange record
> SELECT
>
> case_pkey,statuschange_pkey,case_fkey,ombcase.insdatetime,statuschange.insdatetime
> FROM
> ombcase
> LEFT JOIN
> statuschange
> ON
>   statuschange.case_fkey = case_pkey
>   AND case_pkey <> 0
> LEFT JOIN
> status
>  ON status_fkey = status_pkey
>   WHERE lower(statusid) NOT LIKE  ('closed%')
>   AND statuschange.statuschange_pkey IS NULL
> UNION
>   --Now find the last status change record for each case that has one
>   SELECT DISTINCT ON (case_fkey)
>
> case_pkey,statuschange_pkey,case_fkey,ombcase.insdatetime,statuschange.insdatetime
> FROM
> statuschange,ombcase,status
> WHERE case_fkey = case_pkey
> AND status_fkey = status_pkey
> AND LOWER(statusid) NOT LIKE ('closed%')
> ORDER BY case_fkey, statuschange.insdatetime DESC
>
> If I run each part separately, I get the expected number of records. When I 
> combine them with UNION, I get "missing FROM-clause entry for table 
> "statuschange"
> So I'm very close here, and these two return the exact number of records I'm 
> expecting. So I just need to get them added together. Then I expect I can put 
> the whole thing in a WHERE clause with "AND ombcase.case_pkey IN ([the 
> combined results])"
>
This was pretty easy to resolve. Putting parentheses around each half of
the query caused it to return the right results. Then I could reduce the
columns to just ombcase.case_pkey and use an IN statement. I think this
gets me where I need to be. I appreciate the help!
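Spelled out, the combined form described above would look roughly like this (untested; column lists trimmed to the key):

SELECT ombcase.case_pkey, casename
FROM ombcase
WHERE ombcase.case_pkey IN (
    -- open cases with no statuschange row at all
    (SELECT case_pkey
     FROM ombcase
     LEFT JOIN statuschange ON statuschange.case_fkey = case_pkey
     LEFT JOIN status ON status_fkey = status_pkey
     WHERE lower(statusid) NOT LIKE ('closed%')
       AND statuschange.statuschange_pkey IS NULL)
    UNION
    -- the last status change for each remaining open case
    (SELECT DISTINCT ON (case_fkey) case_fkey
     FROM statuschange, ombcase, status
     WHERE case_fkey = case_pkey
       AND status_fkey = status_pkey
       AND lower(statusid) NOT LIKE ('closed%')
     ORDER BY case_fkey, statuschange.insdatetime DESC)
);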

Chuck

