Enforcing users to write schemas when creating tables

2017-12-20 Thread Gabriel Furstenheim Milerud
Hi,
I'm trying to force db users to specify a schema when creating a table.
That is:

 create table some_schema.my_table (a int); -- should succeed
 create table my_table (a int); -- should fail

 I don't know if that is possible.

What I've tried so far is to create a schema which is first in the search
path and where the user has no permission to create tables. I've done the
following (and failed):


create schema no_table_schema;
alter schema no_table_schema owner to another_user; -- just in case
revoke all on schema no_table_schema from my_user cascade;
set search_path = no_table_schema;

create table test_table (a int); -- this should fail because user
should not have permission in no_table_schema, but it does not
drop table no_table_schema.test_table; -- This succeeds, the table was
created

One thing that might matter is that my_user is a superuser.

So I have two questions. First, how do I revoke CREATE on a schema for a
certain user? I guess there is something that I'm not doing properly. Second,
is that enough for my purpose? Or maybe there are easier ways to force users to
provide a schema when creating tables.


Thanks
Gabriel Fürstenheim


Re: Enforcing users to write schemas when creating tables

2017-12-20 Thread Laurenz Albe
Gabriel Furstenheim Milerud wrote:
> I'm trying to force db users to specify a schema when creating a table. That 
> is:
> 
>  create table some_schema.my_table (a int); -- should succeed
>  create table my_table (a int); -- should fail
> 
>  I don't know if that is possible.
> 
> What I've tried so far is to create a schema which is first in the search 
> path and where the user has no permission to create tables. I've done the 
> following (and failed):
> 
> 
> create schema no_table_schema;
> alter schema no_table_schema owner to another_user; -- just in case
> revoke all on schema no_table_schema from my_user cascade;
> set search_path = no_table_schema;
>
> create table test_table (a int); -- this should fail because user should 
> not have permission in no_table_schema, but it does not
> drop table no_table_schema.test_table; -- This succeeds, the table was 
> created
> 
> One thing that might matter is that my_user is a superuser.
> 
> So I have two questions. First, how do I revoke CREATE on a schema for a 
> certain user? I guess there is something that I'm not doing properly.
> Second, is that enough for my purpose? Or maybe there are easier ways to force 
> users to provide a schema when creating tables.

There is no way to deny a superuser access to a schema.
Don't use superusers for anything other than administration.

One way I can think of to force users to create tables with
schema qualified names is to set "search_path" to "pg_catalog".
Then only the temporary schema and the catalog schema can be
used without qualification.
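
A minimal sketch of that approach (not from the original message; "my_user" and
"some_schema" are borrowed from the question above):

ALTER ROLE my_user SET search_path = pg_catalog;

-- then, connected as my_user (a non-superuser):
CREATE TABLE my_table (a int);              -- fails: ordinary users cannot create in pg_catalog
CREATE TABLE some_schema.my_table (a int);  -- succeeds, given CREATE on some_schema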

Every user can use "SET search_path = ..." to change the setting,
but a script that does that at least documents where the table
*might* be created.

Yours,
Laurenz Albe



Re: Debugging a function - what's the best way to do this quickly?

2017-12-20 Thread Merlin Moncure
On Tue, Dec 19, 2017 at 1:24 PM, Kevin Burke  wrote:
> I'm writing a function that looks a little like this:
>
> DROP FUNCTION IF EXISTS myfunction;
> CREATE OR REPLACE FUNCTION myfunction(arg1 uuid,
> _symbol text,
> _start timestamp with time zone,
> _end timestamp with time zone
> ) RETURNS TABLE (arg5 date, arg6 float)
> AS $$
> WITH cte1 AS ( ... ),
> cte2 AS ( ... ),
> cte3 AS ( ... ),
> cte4 AS ( ... ),
> cte5 AS ( ... )
> SELECT X as arg5, Y as arg6 FROM cte5;
> $$ LANGUAGE sql;
>
> The function is not returning the correct results; I think the problem is in
> cte2 or cte3. What's the easiest way to debug this? I would like to send
> some test inputs through the program, observe the output from cte3, and
> modify the values and see if I get the correct new answers. Here are the
> approaches I know right now:
>
> - Modify the function return to contain the columns for cte3. (I don't think
> there is a way to indicate RETURNS * or similar wildcard)
> - Reload the function.
> - Call the function with the test arguments, and view the resulting table.
> Modify/reload/rerun as appropriate.
>
> Or:
>
> - Copy the function to another file.
> - Delete the function prologue and epilogue
> - Replace every use of the input arguments with the hardcoded values I want
> to test with
> - Run the file, making changes as necessary.
>
> This seems pretty cumbersome. Is there an easier way I am missing?
> Specifically it would be neat if it was easier to visualize the intermediate
> steps in the query production. If there are professional tools that help
> with this I would appreciate pointers to those as well.

If you have a lot of chained CTEs and the problem lies within that
chain, copying the query and subbing arguments is likely the best
option.  For really nasty situations I tend to convert the CTEs, one
at a time, to temp tables, reviewing the results at each step.  I've
scaled back my use of CTEs a lot lately for this and other reasons
(mainly problems with statistics) although I really appreciate the
lack of catalog bloat.
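
A sketch of that temp-table conversion (the "..." stand for the original CTE
bodies, with the function's parameters replaced by literal test values):

BEGIN;
CREATE TEMP TABLE cte1 AS ( ... );
CREATE TEMP TABLE cte2 AS ( ... );   -- references cte1 just like the original CTE did
SELECT * FROM cte2 LIMIT 20;         -- inspect the intermediate result before moving on
ROLLBACK;                            -- the temp tables disappear with the transaction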

I also heavily abuse 'RAISE NOTICE' for debugging purposes.  Something
like this:

CREATE OR REPLACE FUNCTION Notice(TEXT) RETURNS VOID AS
$$
BEGIN
  RAISE NOTICE '%', $1;
END;
$$ LANGUAGE PLPGSQL;

CREATE OR REPLACE FUNCTION NoticeValue(anyelement) RETURNS anyelement AS
$$
  SELECT $1 FROM (SELECT NOTICE($1::TEXT)) q;
$$ LANGUAGE SQL;

'NoticeValue' can be used just about anywhere, for example:

SELECT a FROM foo WHERE...
could be quickly converted to:
SELECT NoticeValue(a) AS a FROM foo WHERE

Don't forget, we can convert records to json and 'notice' them:
SELECT a,  NoticeValue(to_json(a)) FROM foo WHERE

Dynamic SQL (via EXECUTE) can be a real pleasure to debug (not so much
to write and review), particularly if you (securely) do your own
parameterization since you can just print out the entire query.  From
a debugging standpoint, that's as good as it gets.
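
For instance, a minimal sketch of that print-then-execute pattern (the table
name "foo" is borrowed from the examples above):

DO $$
DECLARE
  q text;
  n bigint;
BEGIN
  q := format('SELECT count(*) FROM %I', 'foo');
  RAISE NOTICE 'about to run: %', q;   -- the full query text, ready to paste into psql
  EXECUTE q INTO n;
  RAISE NOTICE 'result: %', n;
END;
$$;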

Also, there is a pl/pgsql debugger.  I don't have any experience with
it, maybe somebody else can comment.  I work exclusively with psql,
and so tend to use techniques that work well there.

merlin



Re: Re: PostgreSQL needs percentage function

2017-12-20 Thread Chris Travers
If you want one:

create or replace function percent(numeric, numeric) returns numeric language sql as
$$ select ($1 / 100) * $2; $$;
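
For example, using the values from Nick's original message:

select percent(100, 1);  -- 1% of 100 = 1
select percent(25, 20);  -- 20% of 25 = 5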

On Tue, Dec 19, 2017 at 4:51 PM, Melvin Davidson 
wrote:

>
>
> On Tue, Dec 19, 2017 at 4:54 AM, Pavel Stehule 
> wrote:
>
>>
>>
>> 2017-12-19 10:13 GMT+01:00 Nick Dro :
>>
>>> This is exactly why I think there should be some built-in function for
>>> that...
>>> Percentage calculation exists in almost any database and information
>>> system - it forces users to implement many functions on their own for
>>> something that is very basic.
>>> The general idea of my request is that PostgreSQL should have a built-in
>>> function for percentage calculation; it doesn't have to be the exact example
>>> I gave. Anything will be better than none.
>>>
>>
>> I don't agree - there are no compatibility or portability reasons. The
>> calculation using numeric operators is pretty simple, and the possibility to
>> create your own function is there too.
>>
>> So there is no reason to grow the PostgreSQL catalog.
>>
>> Please don't top-post.
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>>
>>>
>>> On Dec 18, 2017, 18:28, Michael Nolan wrote:
>>>
>>>
>>>
>>> On Mon, Dec 18, 2017 at 6:23 AM, Nick Dro 
>>> wrote:
>>>
>>> Hi,
>>> Why doesn't PostgreSQL have a built-in function to calculate percentages?
>>> Something like percent(number, percentage).
>>> for example:
>>> select percent(100,1) will calculate 1% of 100 = 1
>>> select percent(25,20) will calculate 20% of 25 = 5
>>>
>>> Seems like a nice addition to the math functions list:
>>> https://www.postgresql.org/docs/9.5/static/functions-math.html
>>>
>>> This is a very helpful function; many use percentage calculations in their
>>> work and it would simplify the process. Percentage calculation is considered
>>> a basic math operation and I think PostgreSQL should support it as a
>>> built-in function.
>>> Seems easy to implement, isn't it?
>>>
>>>
>>> It's a bit trickier than that, because you'll have to deal with
>>> integers, real, double precision, etc.  You may also want to deal with null
>>> values.  I found it more useful to write a function that displays X as a
>>> percentage of Y, rounded to 1 digit to the right of the decimal point.
>>> --
>>> Mike Nolan
>>>
>>>
>>
> > Percentage calculation exists in almost any database and information
> > system
>
> That is not exactly true. AFAIK, only Oracle has a Percentage function.
> SQL Server and MySQL do not.
> It has already been shown that it is just as easy to code percentage
> inline (e.g. SELECT (50.8 * 5.2) / 100 AS pct;   -- 2.6416)
> as it is to call a function: SELECT pct(50.8, 5.2);
> Please do not make false statements to justify a request for an unneeded
> enhancement.
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


postgresql 9.5 has encountered OOM

2017-12-20 Thread mark
I have set shared_buffers to 1/4 of memory.
work_mem is 2% of memory.
max_connections is 50.
Memory size is 16GB.
The postgresql processes used over 70% of memory and an OOM occurred.
What should I do to deal with this problem?



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Tomas Vondra
On 12/20/2017 04:08 PM, mark wrote:
> I have set shared_buffers to 1/4 of memory.
> work_mem is 2% of memory.
> max_connections is 50.

That means if you have all 50 connections active, they may easily
consume 100% of memory, because 50 * 2 is 100. It's even easier if the
connections are executing complex queries, because each query may use
multiple work_mem buffers. So 2% seems a bit too high.
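
Spelled out for the numbers given here (rough figures):

  work_mem = 2% of 16GB     ~  320MB per sort/hash operation
  50 connections * 320MB    ~  16GB, i.e. the whole machine, before even
                               counting the 4GB of shared_buffers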

> Memory size is 16GB.
> The postgresql processes used over 70% of memory and an OOM occurred.

So, did a single process use 70% of memory, or all postgres processes
combined?

If just a single process, it might be a poor plan choice (e.g. hash
aggregate may easily cause that).

If all processes combined, then perhaps it's due to work_mem being too high.

> what should I do to deal with this problem?
> 

Hard to say, until you provide enough information.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Andreas Kretschmer



On 20.12.2017 at 16:08, mark wrote:

The postgresql processes used over 70% of memory and an OOM occurred.
What should I do to deal with this problem?


https://www.postgresql.org/docs/current/static/kernel-resources.html
18.4.4. Linux Memory Overcommit

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




RE: Re: Re: PostgreSQL needs percentage function

2017-12-20 Thread Nick Dro


 I said: "Percentage calculation exists in almost any databse and information system"
I didn't say it exists in any RDB. I meant that any system that is using databse like information system uses percentace calculation therefor if most of the users of the databse need such function it make sence to have it in it's base code - ready for use rather than ask each one to implment his own.
 
 



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Uwe
On Wednesday, December 20, 2017 11:08:51 PM PST mark wrote:
> I have set shared_buffers to 1/4 of memory.
> work_mem is 2% of memory.
> max_connections is 50.
> Memory size is 16GB.
> The postgresql processes used over 70% of memory and an OOM occurred.
> What should I do to deal with this problem?

IIRC work_mem is a PER CONNECTION setting; that means every connection can use 
2% of your total memory.  I'd try setting work_mem to something like 16MB or 
32MB and see if the queries still execute properly. You want to set work_mem 
as high as needed to allow proper query execution but as low as possible to 
avoid running out of memory.
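
A minimal sketch of that (the values are only illustrative, not a tuned
recommendation):

ALTER SYSTEM SET work_mem = '32MB';   -- lower the server-wide default
SELECT pg_reload_conf();

SET work_mem = '256MB';               -- raise it only in a session that really needs it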



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread David G. Johnston
On Wed, Dec 20, 2017 at 9:44 AM, Uwe  wrote:

> IIRC work_mem is a PER CONNECTION setting,
>
The docs for this setting clearly state that a single connection/session
can itself use multiple times this value for a single query.

https://www.postgresql.org/docs/10/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY

David J.


Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Uwe
On Wednesday, December 20, 2017 9:59:24 AM PST David G. Johnston wrote:
> On Wed, Dec 20, 2017 at 9:44 AM, Uwe  wrote:
> > IIRC work_mem is a PER CONNECTION setting,
> 
> The docs for this setting clearly state that a single connection/session
> can itself use multiple times this value for a single query.
> 
> https://www.postgresql.org/docs/10/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
> 

So even worse for the OP. No wonder he runs out of memory. 
Thanks for the correction.

Uwe





Caching of sproc

2017-12-20 Thread Rakesh Kumar
I want to confirm my understanding.  When a stored procedure is called the 
first time the code's compiled version is cached and used next time it is 
executed.  However it is cached once for every session and not globally for all 
sessions. In other words, if we have a large number of concurrent sessions (x 
num of sessions) executing the same procedures (y num of procedures), the 
memory pressure will be 
x * y.

The reason why I am asking this is that the application we are moving from 
Oracle is heavy on stored procedures with some packages running into 8000-1 
lines of code.



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Scott Marlowe
On Wed, Dec 20, 2017 at 9:25 AM, Andreas Kretschmer
 wrote:
>
>
> Am 20.12.2017 um 16:08 schrieb mark:
>>
>> The postgresql processes used over 70% of memory and an OOM occurred.
>> What should I do to deal with this problem?
>
>
> https://www.postgresql.org/docs/current/static/kernel-resources.html
> 18.4.4. Linux Memory Overcommit

More specifically:
https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

To reiterate the basic message there, at no time should the OS think
that killing big processes is OK. It is NOT. At no time should your OS
be terminating big processes all on its own.

Also it's far better to starve your work_mem and keep all the
processes running than to ever run any out of memory. But if one does
run out of memory it should only cause a problem for that one process,
not the background writer etc.
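
A minimal sketch of the overcommit setting those docs describe (see the linked
page for the details and caveats):

# as root on the database server:
sysctl -w vm.overcommit_memory=2
# and persist it across reboots, e.g. in /etc/sysctl.conf:
#   vm.overcommit_memory = 2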



Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread Andreas Kretschmer



On 20.12.2017 at 17:24, Tomas Vondra wrote:

That means if you have all 50 connections active, they may easily
consume 100% of memory, because 50 * 2 is 100. It's even easier if the
connections are executing complex queries, because each query may use
multiple work_mem buffers. So 2% seems a bit too high.


Nice hint! You are right.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




psycopg2 and java gssapi questions

2017-12-20 Thread Mike Feld

Is it possible to authenticate with Postgres from a standalone application using 
gssapi? In other words, I am able to authenticate with Postgres when a human has 
logged in to either Windows or Linux and generated a ticket, but is it possible 
for say a Django site or Java application running on some server somewhere to 
authenticate with Postgres using gssapi? I realize that psycopg2 has a 
connection parameter for “krbsrvname”, but how does it generate a ticket? Is 
this the only alternative to secure authentication since Postgres does not 
support secure ldap (ldaps)?


Re: psycopg2 and java gssapi questions

2017-12-20 Thread basti
Hello,
have a look at

https://www.postgresql.org/docs/current/static/auth-methods.html

There are details about LDAP and GSSAPI.

On 20.12.2017 20:42, Mike Feld wrote:
> Is it possible to authenticate with Postgres from a standalone
> application using gssapi? In other words, I am able to authenticate with
> Postgres when a human has logged in to either Windows or Linux and
> generated a ticket, but is it possible for say a Django site or Java
> application running on some server somewhere to authenticate with
> Postgres using gssapi? I realize that psycopg2 has a connection
> parameter for “krbsrvname”, but how does it generate a ticket? Is this
> the only alternative to secure authentication since Postgres does not
> support secure ldap (ldaps)?



Re: psycopg2 and java gssapi questions

2017-12-20 Thread Mike Feld


This shows you how to set up GSSAPI authentication on the server side, which I have 
already done and have working. My question is about the client side, without a human 
logged in to generate the ticket.
>>Hello,
>>have a look at

>>https://www.postgresql.org/docs/current/static/auth-methods.html

>>There are details about LDAP and GSSAPI.

>>On 20.12.2017 20:42, Mike F wrote:
>>> Is it possible to authenticate with Postgres from a standalone
>>> application using gssapi? In other words, I am able to authenticate with
>>> Postgres when a human has logged in to either Windows or Linux and
>>> generated a ticket, but is it possible for say a Django site or Java
>>> application running on some server somewhere to authenticate with
>>> Postgres using gssapi? I realize that psycopg2 has a connection
>>> parameter for “krbsrvname”, but how does it generate a ticket? Is this
>>> the only alternative to secure authentication since Postgres does not
>>> support secure ldap (ldaps)?






problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread support-tiger

can someone please help ?

postgresql-10

fedora 27

our hba_conf that worked with pg 9x

local    all   all   postgres    peer

host    all    all   127.0.0.1/32 md5

host    all    all   ::1/128  md5

now gives error on restart and we cannot find the  log file

postgres remains so damn difficult and time wasting to quickly get up 
and running  vs other db's and docs don't help much





--
Support Dept
Tiger Nassau, Inc.
www.tigernassau.com
406-624-9310





Re: problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread David G. Johnston
On Wed, Dec 20, 2017 at 4:05 PM, support-tiger 
wrote:

> can someone please help ?
>
> postgresql-10
>
> fedora 27
>

Installed via yum?

> our hba_conf that worked with pg 9x
>

There are six of them, which one specifically?

> now gives error on restart and we cannot find the log file
>

And what error would that be?

David J.


Re: problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread James Keener
> postgres remains so damn difficult and time wasting to quickly get up 
and running  vs other db's and docs don't help much

See, I feel exactly the opposite. The docs are some of the best of any database 
or open source software I've used. Postgres also does the "right" thing almost 
all the time, making it predictable.

> now gives error on restart and we cannot find the  log file

What error? Where did you look for the log files? Did you try running the 
daemon directly to see if that gives any additional information?

Jim

On December 20, 2017 6:05:39 PM EST, support-tiger  
wrote:
>can someone please help ?
>
>postgresql-10
>
>fedora 27
>
>our hba_conf that worked with pg 9x
>
>local    all   all   postgres    peer
>
>host    all    all   127.0.0.1/32 md5
>
>host    all    all   ::1/128  md5
>
>now gives error on restart and we cannot find the  log file
>
>postgres remains so damn difficult and time wasting to quickly get up 
>and running  vs other db's and docs don't help much
>
>
>
>
>-- 
>Support Dept
>Tiger Nassau, Inc.
>www.tigernassau.com
>406-624-9310

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread rob stone


On Wed, 2017-12-20 at 16:05 -0700, support-tiger wrote:
> can someone please help ?
> 
> postgresql-10
> 
> fedora 27
> 
> our hba_conf that worked with pg 9x
> 
> local    all   all   postgres    peer
> 
> host    all    all   127.0.0.1/32 md5
> 
> host    all    all   ::1/128  md5
> 
> now gives error on restart and we cannot find the  log file
> 
> postgres remains so damn difficult and time wasting to quickly get
> up 
> and running  vs other db's and docs don't help much
> 
> 
> 
> 

Hi,

"local" is for socket connections, yet you have an address of
"postgres". How did this function in the past?

Exactly what do you mean by "gives error on restart"? The database
starts and you can connect, then you shut it down and then you start it
again, and it fails?

Cheers,
Rob



Re: Re: postgresql 9.5 has encountered OOM

2017-12-20 Thread mark



Hi Tomas, Uwe, David G.,
Thanks for your replies.


>So, did a single process use 70% of memory, or all postgres processes
>combined?

All postgres processes combined used over 70% of memory.
>If just a single process, it might be a poor plan choice (e.g. hash
>aggregate may easily cause that).


It is the DELETE statements that used a lot of memory.

The DELETE statements are below:

DELETE  FROM test WHERE testid in (select r_id from test1 where p_id_id in ( 
select re_id from ap_en where link = $1))

delete from  test  where test1_id = $1 AND id = $2

Because the DELETE uses a subselect as its condition, it may make memory usage 
high.

If I decrease the work_mem size, it will affect the DELETE's execution 
efficiency, so I want to set work_mem per session, and use the OS level 
(cgconfig) to limit the memory usage of all postgres processes.

How about this setting?
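
As a sketch of the per-session idea (the value is only an example):

SET work_mem = '128MB';   -- raise it only in the session that runs the big DELETE
-- ... run the DELETE statements above ...
RESET work_mem;           -- drop back to the (lower) server-wide default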




Regards,

Mark


Re: problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread David G. Johnston
On Wednesday, December 20, 2017, rob stone  wrote:

>
> > local    all   all   postgres    peer
> >
> > host    all    all   127.0.0.1/32 md5
> >
> > host    all    all   ::1/128  md5
> >
> > now gives error on restart and we cannot find the  log file
> >
>
> "local" is for socket connections, yet you have an address of
> "postgres". How did this function in the past?
>
>
Sorta, but correct enough.  Local lines and host lines have differing
numbers of columns.  As written the auth-method is "Postgres" with
auth-option of "md5" (i.e., there is no address column for local) which is
simply wrong.  Whether it's an email typo or the actual file...

I suspect it's possible we used to ignore malformed lines whereas now we
error...
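
For reference, a sketch of syntactically valid forms of those three lines (local
entries have no address column; adjust users and auth methods as needed):

local   all   all                    peer
# or, to allow only the postgres OS user over the socket:
local   all   postgres               peer
host    all   all    127.0.0.1/32    md5
host    all   all    ::1/128         md5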

David J.


Re: problems with postgresql 10.1 hba_conf on fedora 27

2017-12-20 Thread support-tiger



On 12/20/2017 06:33 PM, David G. Johnston wrote:
On Wednesday, December 20, 2017, rob stone wrote:



> local    all   all       postgres                peer
>
> host    all    all 127.0.0.1/32   md5
>
> host    all    all       ::1/128                  md5
>
> now gives error on restart and we cannot find the  log file
>

"local" is for socket connections, yet you have an address of
"postgres". How did this function in the past?


Sorta, but correct enough.  Local lines and host lines have differing 
numbers of columns.  As written the auth-method is "Postgres" with 
auth-option of "md5" (i.e., there is no address column for 
local) which is simply wrong.  Whether it's an email typo or the 
actual file...


Bingo!  Thanks - it should be "local all postgres  peer"  (how many times we 
looked at this and didn't see it  (:   But if this setup is not good, we 
would like to know what it should be - our case is accessing from a 
Ruby app using the pg gem on the same server (at least for now).


No, the docs for understanding pg_hba.conf are not good (yes, we can read 
and are fairly smart) - we made suggestions last time for several 
case examples and were ignored - okay, the simplicity of pouchdb/couchdb is 
getting our attention.


--
Support Dept
Tiger Nassau, Inc.
www.tigernassau.com
406-624-9310



What does tcop stand for?

2017-12-20 Thread Daniel Westermann
Hi, 

currently browsing the source code in src/include/tcop. What does tcop stand 
for? Can not find any hints in the files. 

Thanks 
Daniel 


Re: What does tcop stand for?

2017-12-20 Thread Tatsuo Ishii
> Hi, 
> 
> currently browsing the source code in src/include/tcop. What does tcop stand 
> for? Can not find any hints in the files. 

Traffic cop.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp



Re: What does tcop stand for?

2017-12-20 Thread Daniel Westermann
>> Hi, 
>> 
>> currently browsing the source code in src/include/tcop. What does tcop stand 
>> for? Can not find any hints in the files. 

>Traffic cop. 

Thanks. What does "cop" mean?


Re: What does tcop stand for?

2017-12-20 Thread Tatsuo Ishii
>>> currently browsing the source code in src/include/tcop. What does tcop 
>>> stand for? Can not find any hints in the files. 
> 
>>Traffic cop. 
> 
> Thanks. What does "cop" mean?

A cop means a policeman. Probably the name comes from the fact that it is
responsible for the traffic control between frontend and backend.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp



Re: What does tcop stand for?

2017-12-20 Thread Daniel Westermann
 currently browsing the source code in src/include/tcop. What does tcop 
 stand for? Can not find any hints in the files. 
>> 
>>>Traffic cop. 
>> 
>> Thanks. What does "cop" mean?

>A cop means a policeman. Probably the name comes from the fact that it is 
>responsible for the traffic control between frontend and backend. 

Ah, so simple :) I thought it was an abbreviation. Thanks. 
Daniel 


Re: psycopg2 and java gssapi questions

2017-12-20 Thread Achilleas Mantzios

On 20/12/2017 21:42, Mike Feld wrote:

Is this the only alternative to secure authentication since Postgres does not 
support secure ldap (ldaps)?


Have you checked out the ldaptls parameter? 
https://www.postgresql.org/docs/10/static/auth-methods.html#AUTH-LDAP

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt