Required checkpoints occurs too frequently

2020-12-11 Thread Atul Kumar
Hi,

We are getting this alert frequently "Required checkpoints occurs too
frequently" on postgres version 11.8

The RAM of the server is 16 GB.

and we have already set max_wal_size = 4096 MB and
min_wal_size = 192 MB.

Please help me in optimizing the same to avoid this alert.


Regards,
Atul




"Required checkpoints occurs too frequently"

2020-12-11 Thread Atul Kumar
Hi,

We are getting this alert frequently "Required checkpoints occurs too
frequently" on postgres version 11.8

The RAM of the server is 16 GB.

and we have already set the max_wal_size= 4096 MB
min_wal_size= 192 MB.

Please help me in optimizing the same to avoid this alert.


Regards,
Atul




Re: Set COLLATE on a session level

2020-12-11 Thread Peter J. Holzer
On 2020-11-30 10:11:38 +, Dirk Mika wrote:
> > > On Fri, 20 Nov 2020 at 15:28, Dirk Mika wrote:
> > > > Let's assume there is an app that accesses the same database from
> > > > different countries. And in this app data should be displayed
> > > > ordered. And the sort order is not identical in all countries.
> > > >
> > > > Does the app have to send different SQL commands depending on the
> > > > country? Not nice.
> > > > Do the data have to be sorted in the app? Not nice either.
> > > >
> > >
> > > the query is the same - you just use a different COLLATE clause. For
> > > Postgres there is not any other way.
[...] 
> Suppose the SQL statements are generated by a database layer such as
> Hibernate. It seems to me that it is not possible to use a function
> that adds the COLLATE clause.

It seems to me that this is a defect in the ORM. Sorting by current
locale rules is important for many applications, so that is something an
ORM should support. How the ORM does it (setting a session parameter,
modifying the query, ...) may be backend-specific and not something the
programmer should worry about.

That said, I don't even know if Django (the only ORM I've used in any
depth) does that.

I also agree that, logically, the collation order should be a session
parameter. It is language-specific and therefore user-specific if you
have international users. (I acknowledge the potential performance
problems, but they are the same with an explicit collation clause).
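The explicit per-query form referred to above can be sketched like this (a hypothetical example: the ICU collation name "de-x-icu" and the customers table are assumptions; any collation listed in pg_collation works):

```sql
-- Per-query collation, currently the only option in PostgreSQL.
-- Assumes an ICU collation "de-x-icu" exists (see pg_collation)
-- and a table customers(name text).
SELECT name
  FROM customers
 ORDER BY name COLLATE "de-x-icu";
```

This is exactly the clause an ORM would have to inject per backend if it supported locale-aware sorting transparently.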

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Set COLLATE on a session level

2020-12-11 Thread Alexandre GRAIL

On 10/12/2020 19:33, Dirk Mika wrote:

There is a SET COLLATION command in the SQL standard that does this.
Someone just has to implement it.  It wouldn't be terribly difficult, I
think.



I think it would be analogous to the schema search path.

Usually you notice right away if the "search_path" is wrong, because
relations won't be found.

But with a bad collation set in a session, the problems would be more
subtle. For example, if someone asks you why an index isn't used to
support sorting, you'd always have to remember to ask what collation
has been set in the session.


If that information appears in the explain output, you won't have to ask.

For people working in a multilingual environment, a collation setting 
within the session absolutely makes sense, so you don't have to specify 
the same setting on each and every query. Also, the expected ordering 
depends on the user who runs the query (and the expectations of the 
language they're working in) and *not* on the data itself.


Regards









Re: JDBC driver - is "getGeneratedKeys()" guaranteed to return the ids in the same order a batch insert was made?

2020-12-11 Thread Dave Cramer
On Thu, 10 Dec 2020 at 19:37, David G. Johnston wrote:

> On Wed, Dec 9, 2020 at 1:31 PM electrotype  wrote:
>
>> Agreed.
>>
>>
>> However, this isn't really the purview of JDBC - I'm doubting it does
>> anything that would cause the order to be different than what is received,
>> and the batch items are sent and results processed sequentially.
>>
>> The main question is whether any batch items are inserting multiple
>> records themselves - i.e., RETURNING * is producing multiple results.
>> Whatever order RETURNING * produces is what the driver will capture - but
>> it isn't responsible for guaranteeing that the order of multiple inserted
>> records going in matches what comes out.  PostgreSQL needs to make that
>> claim.  I don't see where it does (i've sent an email to see if adding such
>> a claim to the documentation is proper).  Done manually one can always do
>> "WITH insert returning SELECT ORDER BY", but it doesn't seem workable for
>> the driver to try and do that when adding the returning clause, which I
>> presume is what is in scope here.
>>
>> David J.
>>
>> Thank you, it's appreciated! I'm sure this clarification would help other
>> developers too.
>>
>
> My take is that there is presently no guarantee, and that with current
> efforts to add parallelism it is quite probable that observation of such
> non-orderedness is simply a matter of time.  With batching it seems best to
> combine its use with single inserts in order to avoid this problem.
>
> David J.
>

I'd have to agree.

Dave Cramer
www.postgres.rocks
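The manual workaround quoted above ("WITH insert returning SELECT ORDER BY") can be sketched roughly as follows; the table and column names are hypothetical:

```sql
-- Sketch: wrap INSERT ... RETURNING in a CTE and impose an explicit,
-- deterministic order on the returned rows, rather than relying on
-- whatever order the server happens to emit them in.
WITH ins AS (
    INSERT INTO item (name)
    VALUES ('a'), ('b'), ('c')
    RETURNING id, name
)
SELECT id, name
  FROM ins
 ORDER BY name;  -- explicit ORDER BY; no implicit ordering is guaranteed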


Re: "Required checkpoints occurs too frequently"

2020-12-11 Thread Stephen Frost
Greetings,

* Atul Kumar (akumar14...@gmail.com) wrote:
> Please help me in optimizing the same to avoid this alert.

Please don't post the same question to multiple lists, nor post
the same question over and over to the same list with little time
between them.

Thanks,

Stephen




Re: "Required checkpoints occurs too frequently"

2020-12-11 Thread Atul Kumar
OK, I'm sorry, I had no idea.

On Friday, December 11, 2020, Stephen Frost  wrote:

> Greetings,
>
> * Atul Kumar (akumar14...@gmail.com) wrote:
> > Please help me in optimizing the same to avoid this alert.
>
> Please don't post the same question to multiple lists, nor post
> the same question over and over to the same list with little time
> between them.
>
> Thanks,
>
> Stephen
>


Re: JDBC driver - is "getGeneratedKeys()" guaranteed to return the ids in the same order a batch insert was made?

2020-12-11 Thread electrotype
My take is that there is presently no guarantee, and that with current efforts to add parallelism it 
is quite probable that observation of such non-orderedness is simply a matter of time. With batching 
it seems best to combine its use with single inserts in order to avoid this problem.


David J.


I wish that was not the conclusion, but at least it is clear!

Thanks for the help to both of you.



SQL group by help

2020-12-11 Thread Chris Stephens
I'm trying to create a visual representation of a 6x8 grid of samples on a
rack using the following SQL format:

with rack_display as (
  select sr.ts rack_ts
       , sr.rack_id
       , r.rack_barcode
       , 1 as row_pos
       , max(case when rack_well = 0 then 'A1: '||sample_barcode end) as col1
       , max(case when rack_well = 1 then 'A2: '||sample_barcode end) as col2
       , max(case when rack_well = 2 then 'A3: '||sample_barcode end) as col3
       , max(case when rack_well = 3 then 'A4: '||sample_barcode end) as col4
       , max(case when rack_well = 4 then 'A5: '||sample_barcode end) as col5
       , max(case when rack_well = 5 then 'A6: '||sample_barcode end) as col6
    from rack r
       , sample_rack sr
       , sample s
   where r.rack_id = sr.rack_id
     and sr.sample_id = s.sample_id
     and sr.rack_well < 6
   group by sr.ts, sr.rack_id, r.rack_barcode, row_pos
  union all
  select sr.ts rack_ts
       , sr.rack_id
       , r.rack_barcode
       , 2 as row_pos
       , max(case when rack_well = 6 then 'B1: '||sample_barcode end) as col1
       , max(case when rack_well = 7 then 'B2: '||sample_barcode end) as col2
       , max(case when rack_well = 8 then 'B3: '||sample_barcode end) as col3
       , max(case when rack_well = 9 then 'B4: '||sample_barcode end) as col4
       , max(case when rack_well = 10 then 'B5: '||sample_barcode end) as col5
       , max(case when rack_well = 11 then 'B6: '||sample_barcode end) as col6
    from rack r
       , sample_rack sr
       , sample s
   where r.rack_id = sr.rack_id
     and sr.sample_id = s.sample_id
     and sr.rack_well >= 6
     and sr.rack_well < 12
   group by sr.ts, sr.rack_id, r.rack_barcode, row_pos
  union all
  ...
)
select * from rack_display order by rack_ts, rack_id, row_pos;

the "union all"s continue for another 6 blocks. reports would filter on
rack_id and timestamp.

if timestamps for each load of a rack were guaranteed to be the same, this
would work. however, the "sr.ts" values may vary by a few seconds so there
is potential for the "group by" to break. ts differences will be a minimum
of 5 minutes for each distinct load of a rack.

what i think i need is to manufacture a group by column based off rows in
"sample_rack" that have "ts" values that are < 1 minute from each other and
rack_id is the same.  i'm coming up blank on how to accomplish that though.
my first thought was to create an interval of +/- 1 min then find all rows
that overlap and assign a group number but i'm not sure how to accomplish
that.

there's also no guarantee an entire rack is full of samples so some "cells"
of display might be null. i think that makes the use of tablefunc crosstab
a little harder. if i remember correctly, it does not handle missing values
well. i'm open to any pivoting strategy.

anyways, i thought i'd reach out for ideas while i do my own digging.

thanks for any input! let me know if i'm not being clear on the problem and
desired outcome.
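One way to manufacture such a grouping column is the classic gaps-and-islands approach with window functions. This is a sketch against the table names in the post; the 1-minute threshold and the output column name load_group are assumptions:

```sql
-- Sketch: flag a row as starting a new load when it arrives more than
-- 1 minute after the previous row for the same rack, then turn the
-- flags into group numbers with a running sum.
WITH flagged AS (
    SELECT sr.*,
           CASE WHEN sr.ts - LAG(sr.ts) OVER (PARTITION BY sr.rack_id
                                              ORDER BY sr.ts)
                     > INTERVAL '1 minute'
                THEN 1 ELSE 0
           END AS new_load
      FROM sample_rack sr
)
SELECT f.*,
       SUM(new_load) OVER (PARTITION BY rack_id ORDER BY ts) AS load_group
  FROM flagged f;
```

Grouping the pivot query by (rack_id, load_group) instead of sr.ts would then make it insensitive to the few-seconds jitter.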


Re: Required checkpoints occurs too frequently

2020-12-11 Thread Mohamed Wael Khobalatte
On Fri, Dec 11, 2020 at 3:11 AM Atul Kumar  wrote:

> Hi,
>
> We are getting this alert frequently "Required checkpoints occurs too
> frequently" on postgres version 11.8
>
> The RAM of the server is 16 GB.
>
> and we have already set the max_wal_size= 4096 MB
> min_wal_size= 192 MB.
>

There are more control settings for checkpoints, such as
checkpoint_timeout. Check out the documentation and see if your set values
make sense for your case.
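For reference, the knobs involved look like this in postgresql.conf (an illustrative fragment only, not a recommendation for this particular server):

```
# Illustrative postgresql.conf fragment -- tune to your workload.
max_wal_size = 4GB                 # checkpoint forced once this much WAL accrues
min_wal_size = 192MB
checkpoint_timeout = 15min         # time-based checkpoint interval (default 5min)
checkpoint_completion_target = 0.9 # spread checkpoint I/O across the interval
```

Raising max_wal_size and/or checkpoint_timeout is the usual way to quiet the "checkpoints are occurring too frequently" hint, at the cost of longer crash recovery.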


Re: Required checkpoints occurs too frequently

2020-12-11 Thread Mohamed Wael Khobalatte
On Fri, Dec 11, 2020 at 8:49 PM Mohamed Wael Khobalatte <mkhobala...@grubhub.com> wrote:

>
>
> On Fri, Dec 11, 2020 at 3:11 AM Atul Kumar  wrote:
>
>> Hi,
>>
>> We are getting this alert frequently "Required checkpoints occurs too
>> frequently" on postgres version 11.8
>>
>> The RAM of the server is 16 GB.
>>
>> and we have already set the max_wal_size= 4096 MB
>> min_wal_size= 192 MB.
>>
>
> There are more control settings for checkpoints, such as
> checkpoint_timeout. Check out the documentation and see if your set values
> make sense for your case.
>

I didn't realize this had already been answered elsewhere; as was
suggested there, it's not a good idea to double post.


Re: SQL group by help

2020-12-11 Thread Rob Sargent

> On Dec 11, 2020, at 10:24 AM, Chris Stephens  wrote:
> 
> 
> I'm trying to create a visual representation of a 6x8 grid of samples on a 
> rack using the following SQL format:
> 
> with rack_display as (
> select sr.ts rack_ts
> , sr.rack_id
> , r.rack_barcode
> , 1 as row_pos
> , max(case when rack_well = 0 then 'A1: '||sample_barcode end) as col1
> , max(case when rack_well = 1 then 'A2: '||sample_barcode end) as col2
> , max(case when rack_well = 2 then 'A3: '||sample_barcode end) as col3
> , max(case when rack_well = 3 then 'A4: '||sample_barcode end) as col4
> , max(case when rack_well = 4 then 'A5: '||sample_barcode end) as col5
> , max(case when rack_well = 5 then 'A6: '||sample_barcode end) as col6
> from rack r
> , sample_rack sr
> , sample s
> where r.rack_id = sr.rack_id
>and sr.sample_id = s.sample_id
>and sr.rack_well < 6
> group by sr.ts, sr.rack_id, r.rack_barcode, row_pos
> union all
> select sr.ts rack_ts
> , sr.rack_id
> , r.rack_barcode
> , 2 as row_pos
> , max(case when rack_well = 6 then 'B1: '||sample_barcode end) as col1
> , max(case when rack_well = 7 then 'B2: '||sample_barcode end) as col2
> , max(case when rack_well = 8 then 'B3: '||sample_barcode end) as col3
> , max(case when rack_well = 9 then 'B4: '||sample_barcode end) as col4
> , max(case when rack_well = 10 then 'B5: '||sample_barcode end) as 
> col5
> , max(case when rack_well = 11 then 'B6: '||sample_barcode end) as 
> col6
> from rack r
> , sample_rack sr
> , sample s
> where r.rack_id = sr.rack_id
>and sr.sample_id = s.sample_id
>and sr.rack_well >= 6
>and sr.rack_well < 12
> group by sr.ts, sr.rack_id, r.rack_barcode, row_pos
> union all
> ...
> )
> select * from rack_display order by rack_ts, rack_id, row_pos;
> 
> the "union all"s continue for another 6 blocks. reports would filter on 
> rack_id and timestamp. 
> 
> if timestamps for each load of a rack were guaranteed to be the same, this 
> would work. however, the "sr.ts" values may vary by a few seconds so there is 
> potential for the "group by" to break. ts differences will be a minimum of 5 
> minutes for each distinct load of a rack. 
> 
> what i think i need is to manufacture a group by column based off rows in 
> "sample_rack" that have "ts" values that are < 1 minute from each other and 
> rack_id is the same.  i'm coming up blank on how to accomplish that though. 
> my first thought was to create an interval of +/- 1 min then find all rows 
> that overlap and assign a group number but i'm not sure how to accomplish 
> that. 
> 
> there's also no guarantee an entire rack is full of samples so some "cells" 
> of display might be null. i think that makes the use of tablefunc crosstab a 
> little harder. if i remember correctly, it does not handle missing values 
> well. i'm open to any pivoting strategy.
> 
> anyways, i thought i'd reach out for ideas while i do my own digging.
> 
> thanks for any input! let me know if i'm not being clear on the problem and 
> desired outcome.
> 
> 

What stack are you using? I would hope you could hand the data off to a
front end, which should have the tools necessary.