Re: Getting following error in using cursor to fetch the records from a large table in C (current transaction is aborted, commands ignored until end of transaction block)

2019-11-03 Thread Francisco Olarte
M Tarkeshwar Rao:

On Sat, Nov 2, 2019 at 5:16 PM M Tarkeshwar Rao wrote:
> Thanks. Yes, it is in C++. Actually we just wrote this code.
> Due to VACUUM FULL the cursor query is failing on a connection, all the
> subsequent queries are failing, and we
> found shared errors in /var/logs of the machine where Postgres is installed.
... Some more things followed by a huge bottom quote-to-the-sig.

Your writing style makes it really hard to decipher what you are
trying to say. I hope your problem has been solved, because I am not
going to try to decipher it or comment more.

Francisco Olarte.
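For anyone who lands on this thread because of the error in the subject
line: once any statement fails inside an open transaction, PostgreSQL
rejects every later command on that connection until the transaction block
ends, which matches the cascade of failing queries described above. A
minimal psql sketch of the behaviour (the object names are made up):

    BEGIN;
    SELECT * FROM no_such_table;   -- fails, which aborts the transaction
    DECLARE big_cur CURSOR FOR SELECT * FROM some_large_table;
        -- now fails with: current transaction is aborted, commands ignored
        -- until end of transaction block
    ROLLBACK;                      -- after this the connection accepts commands again

In C with libpq the recovery is the same idea: after PQresultStatus()
reports an error, send a ROLLBACK (or end the transaction) before issuing
further commands on that connection.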




select view definition from pg_views feature request

2019-11-03 Thread Michael Shapiro
It seems that the definition of a view from pg_catalog.pg_views does not
qualify the tables used in the view if the tables are in the current search
path.

Is it possible to either have the definition always qualify all tables
independent of the search_path (or else provide a new column that does
that)?


Re: select view definition from pg_views feature request

2019-11-03 Thread Tom Lane
Michael Shapiro  writes:
> It seems that the definition of a view from pg_catalog.pg_views does not
> qualify the tables used in the view if the tables are in the current search
> path.

> Is it possible to either have the definition always qualify all tables
> independent of the search_path (or else provide a new column that does
> that)?

Why don't you just change the search path to empty before selecting?

regards, tom lane
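For example, a sketch of that approach (the schema and view names are
illustrative). With an empty search_path nothing outside pg_catalog is
visible, so the deparser that pg_views relies on schema-qualifies every
table reference in the definition:

    BEGIN;
    SET LOCAL search_path TO '';        -- reverts automatically at COMMIT/ROLLBACK
    SELECT definition
      FROM pg_catalog.pg_views
     WHERE schemaname = 'public'
       AND viewname   = 'my_view';      -- hypothetical view
    COMMIT;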




explain plan difference

2019-11-03 Thread Steve Baldwin
Can someone please help me understand this:

b2bc_dev=# vacuum full analyze invoice;
VACUUM
Time: 39.671 ms
b2bc_dev=# create table invoice_copy as select * from invoice;
SELECT 23
Time: 11.557 ms
b2bc_dev=# alter table invoice_copy add primary key (id);
ALTER TABLE
Time: 9.257 ms
b2bc_dev=# vacuum full analyze invoice_copy;
VACUUM
Time: 24.369 ms
b2bc_dev=# explain analyze verbose select max(id::text) from invoice;
                                                QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4.40..4.41 rows=1 width=32) (actual time=0.852..0.878 rows=1 loops=1)
   Output: max((id)::text)
   ->  Seq Scan on public.invoice  (cost=0.00..4.23 rows=23 width=16) (actual time=0.016..0.440 rows=23 loops=1)
         Output: id
 Planning time: 0.359 ms
 Execution time: 1.063 ms
(6 rows)

Time: 4.266 ms
b2bc_dev=# explain analyze verbose select max(id::text) from invoice_copy;
                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4.40..4.41 rows=1 width=32) (actual time=0.981..1.019 rows=1 loops=1)
   Output: max((id)::text)
   ->  Seq Scan on public.invoice_copy  (cost=0.00..4.23 rows=23 width=16) (actual time=0.016..0.464 rows=23 loops=1)
         Output: id, client_reference_id, buyer_id, seller_id, invoice_date,
           invoice_due_date, invoice_total, po_number, details, status,
           statement_id, invoice_reference_number, created, invoice_net,
           ar_open_total, bill_date, return_total, api_client_id, client_id,
           program_id, billing_payment_term_config_id, preauthorization_id,
           tap_synced, updated, shipping_amount, tax_amount, foreign_exchange_fee,
           foreign_exchange_fee_rate, return_foreign_exchange_fee,
           original_shipping_amount, original_tax_amount, discount_amount,
           original_discount_amount, shipping_tax_amount, shipping_discount_amount,
           original_shipping_tax_amount, original_shipping_discount_amount,
           captured_amount_seller, captured_amount_buyer, adjusted_amount, ar_closed_on
 Planning time: 0.441 ms
 Execution time: 1.254 ms
(6 rows)
b2bc_dev=# select version();
                                       version
--------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 6.4.0) 6.4.0, 64-bit
(1 row)
Time: 4.916 ms

I guess the difference doesn't have a huge bearing (as far as I can tell)
on the result, but it just seems odd that the inner-most 'Output' step
outputs the entire row in the case of the copy and only the required field
in the case of the original table. What triggers that behaviour?

Thanks,

Steve


Re: explain plan difference

2019-11-03 Thread Tom Lane
Steve Baldwin  writes:
> I guess the difference doesn't have a huge bearing (as far as I can tell)
> on the result, but it just seems odd that the inner-most 'Output' step
> outputs the entire row in the case of the copy and only the required field
> in the case of the original table. What triggers that behaviour?

The plan with the full output row is actually slightly cheaper, or at
least so the planner thinks, because it saves a projection step.
I imagine the reason you're not getting that with the original table
is that there are some dropped column(s) in the original table, forcing
the projection to be done to get rid of them.

regards, tom lane
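A sketch that reproduces the effect, for anyone curious (table and column
names are made up):

    CREATE TABLE t (id uuid PRIMARY KEY, amount numeric, scratch int);
    ALTER TABLE t DROP COLUMN scratch;      -- leaves a dropped-column slot behind
    EXPLAIN VERBOSE SELECT max(id::text) FROM t;
    -- The Seq Scan's Output line should show only "id": a projection is needed
    -- anyway to skip the dropped slot, so only the required column is kept.

    CREATE TABLE t_copy AS SELECT * FROM t; -- fresh table, no dropped columns
    EXPLAIN VERBOSE SELECT max(id::text) FROM t_copy;
    -- Here the Output line can be the whole physical row (id, amount), which
    -- the planner treats as slightly cheaper because it skips the projection.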




Re: explain plan difference

2019-11-03 Thread Steve Baldwin
Thanks very much for the explanation Tom !!  You are correct - there are
dropped columns in the original.

Cheers,

Steve

On Mon, Nov 4, 2019 at 3:01 PM Tom Lane  wrote:

> Steve Baldwin  writes:
> > I guess the difference doesn't have a huge bearing (as far as I can tell)
> > on the result, but it just seems odd that the inner-most 'Output' step
> > outputs the entire row in the case of the copy and only the required
> field
> > in the case of the original table. What triggers that behaviour?
>
> The plan with the full output row is actually slightly cheaper, or at
> least so the planner thinks, because it saves a projection step.
> I imagine the reason you're not getting that with the original table
> is that there are some dropped column(s) in the original table, forcing
> the projection to be done to get rid of them.
>
> regards, tom lane
>


Re: Are my autovacuum settings too aggressive for this table?

2019-11-03 Thread Laurenz Albe
On Fri, 2019-11-01 at 18:58 +, Jason Ralph wrote:
> autovacuum_naptime = 1h
> autovacuum_vacuum_threshold = 1
> autovacuum_analyze_threshold = 5000
> autovacuum_vacuum_scale_factor = 0.002
> autovacuum_analyze_scale_factor = 0.001

These seem to be crazy settings.

You check only once an hour whether a table needs autovacuum, yet the
thresholds and scale factors make tables qualify for processing all the
time. There are only three autovacuum workers by default, so most of the
tables won't get vacuumed.

Please reset all these values to their defaults.

If you want autovacuum to be more aggressive for a single table, run

ALTER TABLE x SET (autovacuum_vacuum_cost_delay = 2);
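If the intent was just to vacuum one busy table more often, per-table
storage parameters are the usual way to do that while the global GUCs stay
at their defaults. A sketch (the table name and numbers are only examples):

    ALTER TABLE big_busy_table SET (
        autovacuum_vacuum_scale_factor  = 0.01,    -- vacuum after ~1% of rows change
        autovacuum_analyze_scale_factor = 0.005,
        autovacuum_vacuum_cost_delay    = 2        -- milliseconds; lets the worker run faster
    );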

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com