Re: Out of Memory errors are frustrating as heck!

2019-10-08 Thread Merlin Moncure
On Sun, Apr 14, 2019 at 3:51 PM Gunther  wrote:
>
> For weeks now, I am banging my head at an "out of memory" situation. There is 
> only one query I am running on an 8 GB system, whatever I try, I get knocked 
> out on this out of memory. It is extremely impenetrable to understand and fix 
> this error. I guess I could add a swap file, and then I would have to take 
> the penalty of swapping. But how can I actually address an out of memory 
> condition if the system doesn't tell me where it is happening?
> We can't really see anything too worrisome. There is always lots of memory 
> used by cache, which could have been mobilized. The only possible explanation 
> I can think of is that in that moment of the crash the memory utilization 
> suddenly skyrocketed in less than a second, so that the 2 second vmstat 
> interval wouldn't show it??? Nah.
>
> I have already much reduced work_mem, which has helped in some other cases 
> before. Now I am going to reduce shared_buffers, but that seems 
> counter-intuitive because we are sitting on all that cache memory unused!
>
> Might this be a bug? It feels like a bug. It feels like those out of memory 
> issues should be handled more gracefully (garbage collection attempt?) and 
> that somehow there should be more information so the person can do anything 
> about it.

I kind of agree that nothing according to vmstat suggests you have a
problem.  One thing you left out is the precise mechanics of the
failure; is the database getting nuked by the oom killer?  Do you have
the logs?

*) what are values of shared_buffers and work_mem and maintenance_work_mem?

*) Is this a 32 bit build? (I'm guessing no, but worth asking)

*) I see that you've disabled swap.  Maybe it should be enabled?

*) Can you get the query to run through?  An 'explain analyze' might
point to gross misses in the plan; say, sort memory overuse

*) If you're still getting failures, maybe we need to look at sampling
frequency of memory usage.

*) iowait is super high.

*) I see optimization potential in this query; explain analyze would
help here too.
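One way to catch a sub-second spike that a 2-second vmstat interval would miss is a tighter sampling loop. A minimal sketch, assuming a Linux host with /proc mounted; the 0.2s interval, the loop count, and the `postgres` process name are illustrative, not prescriptive:

```shell
#!/bin/sh
# Sample free memory (and postgres resident size, if running) at
# sub-second intervals; redirect to a file and correlate timestamps
# with the crash.
for i in 1 2 3 4 5; do
  date '+%H:%M:%S.%N'
  awk '/^MemFree|^SwapFree/ {print $1, $2, $3}' /proc/meminfo
  ps -C postgres -o pid=,rss= 2>/dev/null || true  # blank if postgres is not running
  sleep 0.2
done
```

In a real incident you would leave this running until the failure reproduces, then look at the last few samples before the OOM.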

merlin




Re: Out of Memory errors are frustrating as heck!

2019-10-08 Thread Merlin Moncure
On Tue, Oct 8, 2019 at 12:44 PM Merlin Moncure  wrote:
> On Sun, Apr 14, 2019 at 3:51 PM Gunther  wrote:
> >
> > For weeks now, I am banging my head at an "out of memory" situation. There 
> > is only one query I am running on an 8 GB system, whatever I try, I get 
> > knocked out on this out of memory. It is extremely impenetrable to 
> > understand and fix this error. I guess I could add a swap file, and then I 
> > would have to take the penalty of swapping. But how can I actually address 
> > an out of memory condition if the system doesn't tell me where it is 
> > happening?
> > We can't really see anything too worrisome. There is always lots of memory 
> > used by cache, which could have been mobilized. The only possible 
> > explanation I can think of is that in that moment of the crash the memory 
> > utilization suddenly skyrocketed in less than a second, so that the 2 
> > second vmstat interval wouldn't show it??? Nah.
> >
> > I have already much reduced work_mem, which has helped in some other cases 
> > before. Now I am going to reduce shared_buffers, but that seems 
> > counter-intuitive because we are sitting on all that cache memory unused!
> >
> > Might this be a bug? It feels like a bug. It feels like those out of memory 
> > issues should be handled more gracefully (garbage collection attempt?) and 
> > that somehow there should be more information so the person can do anything 
> > about it.
>
> I kind of agree that nothing according to vmstat suggests you have a
> problem.  One thing you left out is the precise mechanics of the
> failure; is the database getting nuked by the oom killer?  Do you have
> the logs?

oops, I missed quite a bit of context upthread.  sorry for repeat noise.

merlin




Would SSD improve Index Only Scan performance by a lot?

2019-10-08 Thread Arya F
As my table has gotten bigger, it takes longer to get a single row back
when querying a row by its btree index.

Right now the database is running on a traditional HDD. SSDs have a much
faster seek time than traditional HDDs.

Would switching to an SSD improve "Index Only Scan" time greatly? by at
least 3-4 times?


Re: Would SSD improve Index Only Scan performance by a lot?

2019-10-08 Thread Andreas Joseph Krogh
On Wednesday 9 October 2019 at 01:37:06, Arya F wrote:

> As my table has gotten bigger, it takes longer to get a single row back
> when querying a row by its btree index.
>
> Right now the database is running on a traditional HDD. SSDs have a much
> faster seek time than traditional HDDs.
>
> Would switching to an SSD improve "Index Only Scan" time greatly? by at
> least 3-4 times?

It depends on whether the index is accessed often or not (wrt. caching),
and (of course) on the size of the index, but yes - "cold access" to the
index (or to persistent data in general) is much faster with SSD.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
[email protected]
www.visena.com

Re: Would SSD improve Index Only Scan performance by a lot?

2019-10-08 Thread Rick Otten
On Tue, Oct 8, 2019 at 7:37 PM Arya F  wrote:

> As my table has gotten bigger, it takes longer to get a single row back
> when querying a row by its btree index.
>
> Right now the database is running on a traditional HDD. SSDs have a much
> faster seek time than traditional HDDs.
>
> Would switching to an SSD improve "Index Only Scan" time greatly? by at
> least 3-4 times?
>

*If* your query is disk I/O bound, SSD can help a lot.

If your data is already in memory, or file system cache, and your query is
bound by CPU or bloated/corrupted indexes, or some query inefficiency, then
faster disks really won't do anything.

Depending on the data type and size of the data you may be able to help
your query performance by choosing an index type other than the
out-of-the-box btree as well (such as a hash or brin index) or maybe even a
different sort order on the index, or a partial index.
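As a sketch of that last suggestion, the alternatives are plain DDL; the table and column names below are hypothetical, and which index (if any) helps depends entirely on the query shape and data distribution:

```sql
-- Hypothetical table: events(id, kind, created_at)

-- Hash index: equality lookups only; WAL-logged since PostgreSQL 10.
CREATE INDEX events_kind_hash ON events USING hash (kind);

-- BRIN index: tiny, effective when the column correlates with
-- the physical row order (e.g. append-only timestamps).
CREATE INDEX events_created_brin ON events USING brin (created_at);

-- Partial index: covers only the rows a hot query actually touches.
CREATE INDEX events_active_idx ON events (created_at)
  WHERE kind = 'active';
```

An EXPLAIN ANALYZE before and after is the honest way to tell whether any of these beats the default btree for a given workload.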


Re: Would SSD improve Index Only Scan performance by a lot?

2019-10-08 Thread Matthew Hall
For indexes, SSDs are at least 4x faster, but you won't see that
improvement unless you first fix the planner tunable for the random page
fetch cost (random_page_cost). Super important change for SSDs.
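Concretely, that tunable defaults to 4.0 (costed relative to seq_page_cost = 1.0, a ratio modeled on spinning disks). A common starting point for SSDs is shown below; 1.1 is a convention, not a measured constant, so treat it as a value to tune:

```sql
-- Lower the random-read cost estimate so the planner stops
-- penalizing index scans on SSD storage.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```

This is a cluster-wide config change (superuser required); it can also be set per-tablespace if only some storage is SSD.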

Matthew Hall

> On Oct 8, 2019, at 5:12 PM, Rick Otten  wrote:
> 
> 
>> On Tue, Oct 8, 2019 at 7:37 PM Arya F  wrote:
>> As my table has gotten bigger, it takes longer to get a single row back when 
>> querying a row by its btree index.
>> 
>> Right now the database is running on a traditional HDD. SSDs have a much 
>> faster seek time than traditional HDDs. 
>> 
>> Would switching to an SSD improve "Index Only Scan" time greatly? by at 
>> least 3-4 times?
> 
> *If* your query is disk I/O bound, SSD can help a lot.
> 
> If your data is already in memory, or file system cache, and your query is 
> bound by CPU or bloated/corrupted indexes, or some query inefficiency, then 
> faster disks really won't do anything.
> 
> Depending on the data type and size of the data you may be able to help your 
> query performance by choosing an index type other than the out-of-the-box 
> btree as well (such as a hash or brin index) or maybe even a different sort 
> order on the index, or a partial index.


Get the planner used by a query?

2019-10-08 Thread Behrang Saeedzadeh
Is there a way to display the planner algorithm used by a query, either in
EXPLAIN or in a different way?

Regards,
Behrang (sent from my mobile)