Re: Temporary tablespaces on a RAM disk

2020-03-18 Thread Daniel Westermann (DWE)
>Laurenz Albe  writes:

>> On Tue, 2020-03-17 at 19:57 +, Daniel Westermann (DWE) wrote:
>>
>>> >> is someone using temporary tablespaces on a RAM disk ? Any experiences 
>>> >> with that?
>>> >> I did some quick tests and checked the archives but could not find any
>>> >> information that either confirmed it is a bad idea nor the opposite.
>>>
>>> >Taking a step back, wouldn't it be better to increase "work_mem"
>>> >and "temp_buffers" and set "temp_file_limit", so that temporary
>>> >files are avoided whenever possible and temporary tables are cached?
>>>
>>> I don't think you can avoid that for all cases, especially when working
>>> with large data sets. That was one of the reasons for my initial question.
>>
>> But with your proposed setup, any query that needs more temp space
>> than your RAM disk has will fail.  I don't think that is good for large
>> data sets.

>Perhaps not, but disk filling on the same volume as WAL is also a
>serious problem in case the process that eventually took the storage to
>100% and got an ENoSpace was the WAL write :-)

>Er, but any standard 1-tablespace configuration is at risk of that,
>generally.
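
For reference, the parameters suggested upthread could be exercised along these lines -- the values are illustrative only, not recommendations:

```sql
-- Illustrative values only; tune to the workload and available RAM.
SET work_mem = '256MB';        -- per sort/hash operation, per plan node
SET temp_buffers = '64MB';     -- cache for temporary tables, per session
SET temp_file_limit = '50GB';  -- cap on temp file space per process
                               -- (superuser-only setting)
```

With temp_file_limit in place, a runaway query fails with an error instead of filling the volume that may also hold the WAL.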

Thank you all for your input.

Regards
Daniel


Re: How can I set all constraints to be deferrable for a DB/schema

2020-03-18 Thread Guillaume Lelarge
Hi,

On Wed, Mar 18, 2020 at 04:14, David Gauthier wrote:

> Hi:
>
> psql (9.6.7, server 11.3) (linux)
>
> Is there a way to set all constraints on all tables of a DB and/or schema
> to be deferrable ?  Or do I have to do them one-by-one ?
>
>
You have to do them one by one. Or write a DO script to do it (but it will
still involve many ALTER TABLE statements).
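
A rough sketch of such a DO script, restricted to foreign keys -- note that ALTER TABLE ... ALTER CONSTRAINT only works on foreign-key constraints; unique and primary-key constraints would have to be dropped and re-created as DEFERRABLE. The schema filter is an assumption to adapt:

```sql
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT conrelid::regclass AS tbl, conname
        FROM pg_constraint
        WHERE contype = 'f'                        -- foreign keys only
          AND NOT condeferrable
          AND connamespace = 'public'::regnamespace
    LOOP
        EXECUTE format('ALTER TABLE %s ALTER CONSTRAINT %I DEFERRABLE',
                       r.tbl, r.conname);
    END LOOP;
END
$$;
```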


-- 
Guillaume.


propagate RAISE levels < EXCEPTION through MS Access?

2020-03-18 Thread stan
Working on a project that has an MS Access front end. I tend to use some
RAISE NOTICE messages to debug things. It turns out these are not propagated
through MS Access, while RAISE EXCEPTION messages are (as a pop-up).

Anyone know how to change this?

Realize it is a bit off topic for this list.
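
For anyone reproducing this, the two severity levels behave quite differently on the wire, which is presumably why the front end treats them differently -- a minimal sketch:

```sql
-- NOTICE and below are sent as asynchronous NoticeResponse messages;
-- whether the client surfaces them depends on the driver.
SET client_min_messages = notice;

DO $$
BEGIN
    RAISE NOTICE 'sent as a notice; a driver is free to ignore it';
    RAISE EXCEPTION 'sent as an error; aborts the statement, hard to miss';
END
$$;
```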

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




SET LOCAL doesn't become undefined after transaction is committed

2020-03-18 Thread Kostya
Hi
We are using dynamic settings via SET LOCAL for row level security, and
I would like to confirm that I'm seeing the correct behaviour.
When I create a new session to the database and run the following query:

SELECT current_setting("session.my_tenant_id");

I will get the following error:
pq: unrecognized configuration parameter \"session.my_tenant_id\"

However, once I run the following transaction
BEGIN READ WRITE
SET LOCAL session.my_tenant_id="f00";
SELECT * FROM someTable;
COMMIT;

Then
SELECT current_setting("session.my_tenant_id");
will return an empty string "". Basically "session.my_tenant_id" becomes
defined.

Is this supposed to work like this or could this be a bug?
Is it possible to entirely undefine "session.my_tenant_id" after the
transaction was committed? Basically I would like to return to the initial
state of the PSQL connection.

Thanks in advance.


PostgreSQL 10 not archiving some WAL files

2020-03-18 Thread Norberto Dellê
Hi

I have a strange case in some PostgreSQL instances that I monitor. First
the software versions and setup of a specific one:

   - Windows Server 2012 Foundation
   - PostgreSQL 10.10 x64
   - archive_command = 'copy %p "D:\\backup\\wal\\%f" /y'

This setup worked very well for a long time, but since we upgraded
PostgreSQL to newer versions (10.x), sometimes it just skips archiving some
wal files. This seems to happen mainly when the server is shut down.
There's no messages about errors in the logs, and also no corresponding
.done file in archive_status, it just goes to the next wal file.
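
For comparison, the Windows example in the PostgreSQL documentation quotes the path arguments; an unquoted %p or %f can misbehave with paths containing spaces (the target directory below simply mirrors the setup above):

```
archive_command = 'copy "%p" "D:\\backup\\wal\\%f" /y'
```

This is unlikely to explain files being skipped outright, but it is worth ruling out.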

I would like to know if any of you had or has a problem like this, and if
you had, how you solved it.


Re: SET LOCAL doesn't become undefined after transaction is committed

2020-03-18 Thread Tom Lane
Kostya  writes:
> However, once I run the following transaction
> BEGIN READ WRITE
> SET LOCAL session.my_tenant_id="f00";
> SELECT * FROM someTable;
> COMMIT;

> Then
> SELECT current_setting("session.my_tenant_id");
> will return an empty string "". Basically "session.my_tenant_id" becomes
> defined.

> Is this supposed to work like this or could this be a bug?
> Is it possible to entirely undefine "session.my_tenant_id" after the
> transaction was committed? Basically I would like to return to the initial
> state of the PSQL connection.

It is supposed to work like that.  There is no provision for rolling
back the existence of a GUC altogether, and if there were, it would
break the actually intended use-case, namely GUCs created by dynamically
loaded extensions.  If an extension is loaded during a transaction,
it won't disappear if the transaction is rolled back, so neither
should its GUCs.

The real problem here is that you're abusing a feature that was never
meant to be used for user-defined variables.  It does not have the
right behavior in corner cases (as you're seeing here), it does not
have the features you'd want (eg ability to declare the type of
a variable), and it definitely doesn't have the scalability to lots
of variables that I'd expect a user-focused feature to have.

There's a fairly long-running thread about creating a feature that
*is* meant for user variables:

https://www.postgresql.org/message-id/flat/cafj8prdy+m9ooxfo10r7j0pakccaum-tweatrdsrslgmb1v...@mail.gmail.com

I haven't checked on the state of that lately, but you might read up
on it and help review/test it, or try to push the definition in the
direction you need if it doesn't seem like quite the right thing.

regards, tom lane




Re: Fwd: PG12 autovac issues

2020-03-18 Thread Adrian Klaver

Please reply to list also.
CCing list.

On 3/18/20 6:57 AM, Justin King wrote:

> Here are the settings, these are the only ones that are not set to
> default with the exception of a few tables that have been overridden
> with a different value due to lots of updates and few rows:

And those values are?

> Thanks for the response, hopefully this will help:

The below is helpful, but what I was referring to above was the settings
for the overridden tables.

> postgres=# select name,setting from pg_settings where name like '%vacuum%';
> name = setting
> autovacuum = on
> autovacuum_analyze_scale_factor = 0.1
> autovacuum_analyze_threshold = 2500
> autovacuum_freeze_max_age = 2
> autovacuum_max_workers = 8
> autovacuum_multixact_freeze_max_age = 4
> autovacuum_naptime = 15
> autovacuum_vacuum_cost_delay = 20
> autovacuum_vacuum_cost_limit = -1
> autovacuum_vacuum_scale_factor = 0.2
> autovacuum_vacuum_threshold = 500
> autovacuum_work_mem = -1
> log_autovacuum_min_duration = 0
> vacuum_cleanup_index_scale_factor = 0.1
> vacuum_cost_delay = 0
> vacuum_cost_limit = 1000
> vacuum_cost_page_dirty = 20
> vacuum_cost_page_hit = 1
> vacuum_cost_page_miss = 10
> vacuum_defer_cleanup_age = 0
> vacuum_freeze_min_age = 5000
> vacuum_freeze_table_age = 15000
> vacuum_multixact_freeze_min_age = 500
> vacuum_multixact_freeze_table_age = 15000

More below.

> autovacuum = on
> log_autovacuum_min_duration = 0
> autovacuum_max_workers = 8
> autovacuum_naptime = 15s
> autovacuum_vacuum_threshold = 500
> autovacuum_analyze_threshold = 2500
> vacuum_cost_limit = 1000

Are either of the below set > 0?

vacuum_cost_delay

autovacuum_vacuum_cost_delay

> We want fairly aggressive autovacs to keep table bloat limited -- the
> application latency suffers if it has to wade through dead tuples and
> staying near realtime is important in our environment.
>
> ** Also, it should be noted that the autovacuum_analyze_threshold is
> probably an incorrect value, we likely intended that to be 250 and
> just have now realized it after poking more at the configuration.







--
Adrian Klaver
adrian.kla...@aklaver.com







Re: Fwd: PG12 autovac issues

2020-03-18 Thread Michael Lewis
Do you have default fillfactor set on this table? If not, I would wonder if
reducing it to 50% or even 20% would allow many more HOT updates that would
reduce bloat.

Also, is there any period of lower activity on your system that you could
schedule a vacuum freeze for daily or weekly? I believe having frozen pages
would also mean all the autovacuums would be able to skip more pages and
therefore be faster.

>> autovacuum_vacuum_cost_delay = 20

The default for this was changed to 2ms in PG12, so you should most likely reduce your setting.
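
Both suggestions can be applied per table; a sketch, where the table name and values are placeholders:

```sql
-- Leave free space in each page so updates can stay on the same page (HOT).
ALTER TABLE my_hot_table SET (fillfactor = 50);

-- Per-table override of autovacuum_vacuum_cost_delay (PG12 default: 2ms).
ALTER TABLE my_hot_table SET (autovacuum_vacuum_cost_delay = 1);

-- fillfactor only affects newly written pages; a rewrite applies it everywhere
-- (VACUUM FULL takes an ACCESS EXCLUSIVE lock, so schedule it with care).
VACUUM FULL my_hot_table;
```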


Join help, please

2020-03-18 Thread stan
I am confused. given this view:


AS
SELECT
employee.id ,
work_type.type ,
permit ,
work_type.overhead ,
work_type.descrip  
from
permitted_work
inner join employee on
employee.employee_key = permitted_work.employee_key
inner join work_type on
work_type.work_type_key = work_type.work_type_key
;

Why do I have 38475 rows, when the base table only has 855?

My thinking was that the inner joins would constrain this view to the rows
that exist in the base (permitted_work) table. 

Clearly I am misunderstanding something basic here.


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Join help, please

2020-03-18 Thread Michael Lewis
work_type.work_type_key = work_type.work_type_key

You've done a cross join.


Re: Join help, please

2020-03-18 Thread Guyren Howe
The three types of thing (permitted_work, employee, work_type) don’t stand in a
1:1 relationship with each other. You might have multiple work_types or
permitted_works for each employee, I’m guessing.

Each existing combination produces one row in the result. So an employee with
three permitted_works and four work_types will produce 12 rows in the joined
result.

If you want one row per employee, you might consider using array_agg with 
group_by to collapse the multiple work_types or permitted_works into arrays 
alongside the employee information.
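
Both points can be sketched against the view from the original post (table and column names taken from there) -- first the join with the stray self-comparison fixed, then the array_agg collapse:

```sql
-- Corrected join: compare permitted_work's key, not work_type to itself.
SELECT e.id, wt.type, pw.permit, wt.overhead, wt.descrip
FROM permitted_work pw
JOIN employee  e  ON e.employee_key   = pw.employee_key
JOIN work_type wt ON wt.work_type_key = pw.work_type_key;

-- One row per employee, with the work types collapsed into an array.
SELECT e.id, array_agg(wt.type) AS work_types
FROM permitted_work pw
JOIN employee  e  ON e.employee_key   = pw.employee_key
JOIN work_type wt ON wt.work_type_key = pw.work_type_key
GROUP BY e.id;
```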
On Mar 18, 2020, 11:51 -0700, stan , wrote:
> I am confused. given this view:
>
>
> AS
> SELECT
> employee.id ,
> work_type.type ,
> permit ,
> work_type.overhead ,
> work_type.descrip
> from
> permitted_work
> inner join employee on
> employee.employee_key = permitted_work.employee_key
> inner join work_type on
> work_type.work_type_key = work_type.work_type_key
> ;
>
> Why do I have 38475 rows, when the base table only has 855?
>
> My thinking was that the inner joins would constrain this view to the rows
> that exist in the base (permitted_work) table.
>
> Clearly I am misunderstanding something basic here.
>
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>


Re: Join help, please

2020-03-18 Thread Michael Lewis
On Wed, Mar 18, 2020, 12:55 PM Michael Lewis  wrote:

> work_type.work_type_key = work_type.work_type_key
>
> You've done a cross join.
>


You meant to do permitted_work.work_type_key = work_type.work_type_key, I
expect.

Re: SET LOCAL doesn't become undefined after transaction is committed

2020-03-18 Thread Kostya
Thanks for the elaborate response.

I don't define many variables, basically only a single one.
We've implemented multi tenancy using row level policy and by utilizing the
SET LOCAL variable to pass the tenant id to enforce querying the right
tenant.

For example in the table "stuff" I have a column called "tenant_id".

Basically my table policy looks like this:

CREATE POLICY stuff_policy on stuff
USING (tenant_id::text = current_setting('session.my_tenant_id', true))
WITH CHECK (tenant_id::text = current_setting('session.my_tenant_id',
true));

So each query first does SET LOCAL and then SELECT xxx FROM stuff;

I realize it's not a "tight" multi-tenancy implementation, but it works well
for our needs.
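
For completeness, the per-query flow described above can also be written with set_config(), whose third argument (true) makes the value transaction-local like SET LOCAL:

```sql
BEGIN;
-- true = local to this transaction, equivalent to SET LOCAL
SELECT set_config('session.my_tenant_id', 'f00', true);
SELECT * FROM stuff;   -- the policy now sees only rows with tenant_id = 'f00'
COMMIT;
-- after COMMIT the value reverts, though the placeholder itself stays defined
```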

Do you think this "abuse" is reasonable? It's pretty much working well for
us, but I'm now worried following your post that this may not scale well or
perhaps have some security issues I'm not aware of.

What is your take on this?

Thanks in advance.

On Wed, Mar 18, 2020 at 4:16 PM Tom Lane  wrote:

> Kostya  writes:
> > However, once I run the following transaction
> > BEGIN READ WRITE
> > SET LOCAL session.my_tenant_id="f00";
> > SELECT * FROM someTable;
> > COMMIT;
>
> > Then
> > SELECT current_setting("session.my_tenant_id");
> > will return an empty string "". Basically "session.my_tenant_id" becomes
> > defined.
>
> > Is this supposed to work like this or could this be a bug?
> > Is it possible to entirely undefine "session.my_tenant_id" after the
> > transaction was committed? Basically I would like to return to the
> initial
> > state of the PSQL connection.
>
> It is supposed to work like that.  There is no provision for rolling
> back the existence of a GUC altogether, and if there were, it would
> break the actually intended use-case, namely GUCs created by dynamically
> loaded extensions.  If an extension is loaded during a transaction,
> it won't disappear if the transaction is rolled back, so neither
> should its GUCs.
>
> The real problem here is that you're abusing a feature that was never
> meant to be used for user-defined variables.  It does not have the
> right behavior in corner cases (as you're seeing here), it does not
> have the features you'd want (eg ability to declare the type of
> a variable), and it definitely doesn't have the scalability to lots
> of variables that I'd expect a user-focused feature to have.
>
> There's a fairly long-running thread about creating a feature that
> *is* meant for user variables:
>
>
> https://www.postgresql.org/message-id/flat/cafj8prdy+m9ooxfo10r7j0pakccaum-tweatrdsrslgmb1v...@mail.gmail.com
>
> I haven't checked on the state of that lately, but you might read up
> on it and help review/test it, or try to push the definition in the
> direction you need if it doesn't seem like quite the right thing.
>
> regards, tom lane
>


Re: PostgreSQL 10 not archiving some WAL files

2020-03-18 Thread Michael Paquier
On Wed, Mar 18, 2020 at 10:57:22AM -0300, Norberto Dellê wrote:
> This setup worked very well for a long time, but since we upgraded
> PostgreSQL to newer versions (10.x), sometimes it just skips archiving some
> wal files. This seems to happen mainly when the server is shut down.
> There's no messages about errors in the logs, and also no corresponding
> .done file in archive_status, it just goes to the next wal file.
> 
> I would like to know if any of you had or has a problem like this, and if
> you had, how you solved it.

Hmm.  I don't recall seeing any of that.  This would mean that we are
either missing the generation of some .ready file, or that some .done
file gets generated when they should not in archive_status/.  What
kind of server shutdown are you doing?  An immediate one, so that recovery
happens at the follow-up startup?  Or a clean service shutdown?
--
Michael
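
When chasing missing archives like this, pg_stat_archiver (available since 9.4) can also confirm whether the archiver ever attempted, and possibly failed, the segment in question:

```sql
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count,   last_failed_wal,   last_failed_time
FROM pg_stat_archiver;
```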

