Re: debugging intermittent slow updates under higher load

2018-12-06 Thread Chris Withers

On 05/12/2018 15:40, Alexey Bashtanov wrote:




> One of the reasons could be that the row is already locked by another
> backend, doing the same kind of update or something different.
>
> Are these updates performed in longer transactions?

Nope, the transaction will just be updating one row at a time.


> Can they hit the same row from two clients at the same time?

I've looked for evidence of this, but can't find any. Certainly nothing
running for 2-10s; queries against this table normally take a few hundred ms.



> Is there any other write or select-for-update/share load on the table?

Not that I'm aware of. How would I go about getting metrics on problems
like these?



> Have you tried periodically logging the non-granted locks?
> Try querying pg_stat_activity and pg_locks (possibly joined, and maybe
> repeatedly self-joined; google for it) to get the backends that wait
> for one another while competing to lock the same row or object.


Is there any existing tooling that does this? I'm loath to start hacking 
something up when I'd hope others have done a better job already...
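For reference, the pg_stat_activity/pg_locks approach suggested above can be sketched roughly like this on PostgreSQL 9.6 and later, where pg_blocking_pids() is available (the column choices are illustrative, not prescriptive):

```sql
-- For every backend currently waiting on a lock, show who is blocking it.
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;
```

Run periodically (e.g. from cron), this can catch an intermittent blocker in the act.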


Chris



Re: debugging intermittent slow updates under higher load

2018-12-06 Thread Chris Withers

On 05/12/2018 15:47, Rene Romero Benavides wrote:
> Also read about HOT updates and the storage parameter named
> "fillfactor", so that data blocks can be reused instead of creating
> new ones when the updated fields are not also part of an index.


I have read about these, but I'd prefer not to be making 
opportunistic/guessing changes on this.


How can I collect metrics/logging/etc evidence to confirm what the 
problem actually is?


cheers,

Chris



Re: postgis after pg_upgrade

2018-12-06 Thread Slavcho Trnkovski
Hi,
But my understanding is that this approach is used when upgrading PostGIS.
I'm upgrading postgresql from 9.4 to 9.6 and PostGIS version remains the
same (2.4.5).
If I execute:
  drop extension postgis;
  CREATE EXTENSION postgis SCHEMA postgis;
select PostGIS_full_version();

postgis_full_version
--------------------
 POSTGIS="2.4.5 r16765" PGSQL="96" GEOS="3.6.2-CAPI-1.10.2 4d2925d6"
PROJ="Rel. 4.9.3, 15 August 2016" GDAL="GDAL 1.11.4, released 2016/01/25"
LIBXML="2.9.1" LIBJSON="0.11" RASTER

everything looks OK. Is this wrong?

Regards,
Slavcho

On Tue, Dec 4, 2018 at 7:41 PM Jeremy Schneider 
wrote:

> On 11/30/18 05:11, Slavcho Trnkovski wrote:
> > I have postgresql 9.4 with postgis extension installed (latest version,
> > 2.4.5). I upgraded postgresql from 9.4 to 9.6.
> > After upgrading to 9.6 I get the following result
> > from PostGIS_full_version()
> >  select PostGIS_full_version();
> >
> >  ... (procs need upgrade for use with "96") ...
> >
> > Is there any way to resolve this besides recreating the extension?
>
> If I'm reading the postgis docs correctly, this message specifically
> means that you must do a full dump/reload of postgis-enabled databases.
>
> http://postgis.net/docs/manual-2.4/postgis_installation.html#upgrading
>
> > If you can't find the postgis_upgrade*.sql specific for upgrading
> > your version you are using a version too early for a soft upgrade
> > and need to do a HARD UPGRADE.
> >
> > The PostGIS_Full_Version function should inform you about the need
> > to run this kind of upgrade using a "procs need upgrade" message.
> >
> > By HARD UPGRADE we mean full dump/reload of postgis-enabled
> > databases.
>
> --
> http://about.me/jeremy_schneider
>


Re: debugging intermittent slow updates under higher load

2018-12-06 Thread Alexey Bashtanov




> Is there any existing tooling that does this?

There must be some; google for queries involving pg_locks.

> I'm loath to start hacking something up when I'd hope others have done
> a better job already...

If you log all queries that take more than a second to complete, is your
update the only one logged, or does something else (the would-be blocker)
get logged together with it?
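The logging setup implied here boils down to two postgresql.conf settings; a sketch (the 1000 ms threshold matches the "more than a second" suggestion, and log_lock_waits additionally reports waits that exceed deadlock_timeout):

```
# postgresql.conf (sketch)
log_min_duration_statement = 1000   # log statements slower than 1 s (in ms)
log_lock_waits = on                 # log waits longer than deadlock_timeout
```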




Re: debugging intermittent slow updates under higher load

2018-12-06 Thread Chris Withers

On 06/12/2018 11:00, Alexey Bashtanov wrote:


>> I'm loath to start hacking something up when I'd hope others have done
>> a better job already...
> If you log all queries that take more than a second to complete, is your
> update the only one logged, or does something else (the would-be blocker)
> get logged together with it?

Nope, the only ones logged are these updates.

Chris



Re: debugging intermittent slow updates under higher load

2018-12-06 Thread Pavel Stehule
Hi

čt 6. 12. 2018 v 12:18 odesílatel Chris Withers  napsal:

> On 06/12/2018 11:00, Alexey Bashtanov wrote:
> >
> >> I'm loath to start hacking something up when I'd hope others have done
> >> a better job already...
> > If you log all queries that take more than a second to complete, is your
> > update the only one logged, or something (the would-be blocker) gets
> > logged down together with it?
>
> Nope, only ones logged are these updates.
>

Can you check latency on the file system? Some latencies can be caused by
an overloaded file system due to wrong configuration of the file system
cache.

https://serverfault.com/questions/471070/linux-file-system-cache-move-data-from-dirty-to-writeback

Regards

Pavel


> Chris
>
>


How to build a btree index with integer values on jsonb data?

2018-12-06 Thread Johann Spies
How can I transform the following definition to index pubyear as
integer and not text?

CREATE INDEX pubyear_idx
ON some_table_where_data_field_is_of_type_jsonb USING btree
((((((data -> 'REC'::text) -> 'static_data'::text) ->
'summary'::text) -> 'pub_info'::text) ->> '@pubyear'::text) COLLATE
pg_catalog."default");

While I can cast the value to integer in a SELECT statement, I have not
been able to do the same while creating the index.

Why btree index?  I want to do queries like

select stuff from sometable where pubyear between 2015 and 2018;

Regards
Johann
-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)



Re: order of reading the conf files

2018-12-06 Thread Rene Romero Benavides
Why do you need to know that ?

Am Do., 6. Dez. 2018 um 01:21 Uhr schrieb bhargav kamineni <
kbn98...@gmail.com>:

>
> Hi,
>
> may i know the order in which postgres reads the configuration files like
> conf , auto.conf , hba  ?
> and how does postmaster forks postgres , can we see that forking process
> in logfile ?
>
>
>
> Thanks,
> Banu.
>


-- 
El genio es 1% inspiración y 99% transpiración.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Re: order of reading the conf files

2018-12-06 Thread Stephen Frost
Greetings,

* Rene Romero Benavides (rene.romer...@gmail.com) wrote:
> Why do you need to know that ?

Please don't top-post, first, and second, it certainly seems like a
worthwhile thing to want to know, for a variety of reasons, such as
"what takes precedence- ALTER SYSTEM, or a configuration in
postgresql.conf?"

> Am Do., 6. Dez. 2018 um 01:21 Uhr schrieb bhargav kamineni <
> kbn98...@gmail.com>:
> > may i know the order in which postgres reads the configuration files like
> > conf , auto.conf , hba  ?

The regular postgresql.conf file is read first, then
postgresql.auto.conf and then pg_hba.conf and pg_ident.conf.  We can't
read pg_hba.conf/pg_ident.conf before reading postgresql.conf and
postgresql.auto.conf because their location is specified in
postgresql.conf/postgresql.auto.conf.
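A quick way to see the precedence this implies, from a superuser session (a sketch; work_mem is just an example setting):

```sql
-- ALTER SYSTEM writes to postgresql.auto.conf, which is read after
-- postgresql.conf, so its value wins over the same setting there.
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
SHOW work_mem;   -- now reflects the postgresql.auto.conf value
```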

> > and how does postmaster forks postgres , can we see that forking process
> > in logfile ?

If you have log_connections enabled, we'll log the 'connection received'
message very shortly after fork'ing.

You could also enable DEBUG2 (or higher) to see the fork from the parent
process.

Thanks!

Stephen


signature.asc
Description: PGP signature


UNION ALL: Apparently based on column order rather than on column name or alias

2018-12-06 Thread Andreas Schmid
Hi list

I realized the following behaviour of UNION ALL:

SELECT 'a' AS col1, 'b' AS col2
UNION ALL
SELECT 'c' AS col1, 'd' AS col2;

returns:

 col1 | col2
--+--
 a| b
 c| d

Now I switch the column aliases in the second SELECT-Statement:

SELECT 'a' AS col1, 'b' AS col2
UNION ALL
SELECT 'c' AS col2, 'd' AS col1;

This returns the same result:

 col1 | col2
--+--
 a| b
 c| d

Same behaviour when working just with column names, no aliases.

So my conclusion is that the result of UNION ALL depends on the column
order, not on the column names or aliases. Is this the intended
behaviour? And is it documented somewhere? What I found is the last
sentence on https://www.postgresql.org/docs/current/queries-union.html
which says
"[...] they return the same number of columns and the corresponding
columns have compatible data types [...]"
It says nothing about column order, column names or aliases. Does this
obviously imply it's the column order?

Thank you for some clarification.
Andy



Re: UNION ALL: Apparently based on column order rather than on column name or alias

2018-12-06 Thread Tom Lane
Andreas Schmid  writes:
> So my conclusion is that the result of UNION ALL depends on the column
> order, not on the column names or aliases. Is this the intended
> behaviour?

Yes, this is required by SQL spec.  Matching by column name would
be used if you wrote a CORRESPONDING clause, but we don't implement
that feature.
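In other words, matching is purely positional; if the branches disagree on column order, re-order them explicitly by name. A sketch based on the example in the question:

```sql
-- The second branch's columns are re-ordered by name before the union:
SELECT 'a' AS col1, 'b' AS col2
UNION ALL
SELECT col1, col2
FROM (SELECT 'c' AS col2, 'd' AS col1) AS t;
```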

regards, tom lane



Re: order of reading the conf files

2018-12-06 Thread Thomas Kellerer
Stephen Frost schrieb am 06.12.2018 um 15:52:
> The regular postgresql.conf file is read first, then
> postgresql.auto.conf and then pg_hba.conf and pg_ident.conf.  We can't
> read pg_hba.conf/pg_ident.conf before reading postgresql.conf and
> postgresql.auto.conf because their location is specified in
> postgresql.conf/postgresql.auto.conf.

When are the .conf files read that are included from within "postgresql.conf"? 
The manual is not clear about that.

Are they processed before "postgresql.auto.conf" or after? 

Thomas




Re: order of reading the conf files

2018-12-06 Thread Stephen Frost
Greetings,

* Thomas Kellerer (spam_ea...@gmx.net) wrote:
> Stephen Frost schrieb am 06.12.2018 um 15:52:
> > The regular postgresql.conf file is read first, then
> > postgresql.auto.conf and then pg_hba.conf and pg_ident.conf.  We can't
> > read pg_hba.conf/pg_ident.conf before reading postgresql.conf and
> > postgresql.auto.conf because their location is specified in
> > postgresql.conf/postgresql.auto.conf.
> 
> When are the .conf files read that are included from within 
> "postgresql.conf"? 
> The manual is not clear about that.

At the time we hit the 'include' line.

> Are they processed before "postgresql.auto.conf" or after? 

postgresql.auto.conf is always last.
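The resulting read order can be sketched as follows (file names other than postgresql.auto.conf are illustrative):

```
# postgresql.conf
shared_buffers = '512MB'
include 'extra.conf'     # processed right here, at the include line
work_mem = '32MB'        # read after everything in extra.conf
# ...then postgresql.auto.conf is read last, so any ALTER SYSTEM
# settings override anything above.
```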

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: postgis after pg_upgrade

2018-12-06 Thread Adrian Klaver

On 12/6/18 12:40 AM, Slavcho Trnkovski wrote:

Hi,
But my understanding is that this approach is used when upgrading PostGIS.


Which may be necessary when upgrading the database:

http://postgis.net/docs/manual-2.4/postgis_installation.html#upgrading

Which seems to be what is happening in your case, as before you
reinstalled the extension you got (from OP):


select PostGIS_full_version();

   postgis_full_version
---
 POSTGIS="2.4.5 r16765" *PGSQL="94" (procs need upgrade for use with
"96") *GEOS="3.6.2-CAPI-1.10.2
4d2925d6" PROJ="Rel. 4.9.3, 15 August 2016" GDAL="GDAL 1.11.4, released
2016/01/25" LIBXML="2.9.1" LIBJSON="0.11" RASTER
(1 row)



I'm upgrading postgresql from 9.4 to 9.6 and PostGIS version remains the 
same (2.4.5).

If I execute:
   drop extension postgis;
   CREATE EXTENSION postgis SCHEMA postgis;
select PostGIS_full_version();
 
         postgis_full_version
------------------------------------------------------------------
  POSTGIS="2.4.5 r16765" PGSQL="96" GEOS="3.6.2-CAPI-1.10.2 4d2925d6" 
PROJ="Rel. 4.9.3, 15 August 2016" GDAL="GDAL 1.11.4, released 
2016/01/25" LIBXML="2.9.1" LIBJSON="0.11" RASTER


everything looks OK. Is this wrong?

Regards,
Slavcho

On Tue, Dec 4, 2018 at 7:41 PM Jeremy Schneider 
mailto:schnei...@ardentperf.com>> wrote:


On 11/30/18 05:11, Slavcho Trnkovski wrote:
 > I have postgresql 9.4 with postgis extension installed (latest
version,
 > 2.4.5). I upgraded postgresql from 9.4 to 9.6.
 > After upgrading to 9.6 I get the following result
 > from PostGIS_full_version()
 >  select PostGIS_full_version();
 >
 >  ... (procs need upgrade for use with "96") ...
 >
 > Is there any way to resolve this besides recreating the extension?

If I'm reading the postgis docs correctly, this message specifically
means that you must do a full dump/reload of postgis-enabled databases.

http://postgis.net/docs/manual-2.4/postgis_installation.html#upgrading

 > If you can't find the postgis_upgrade*.sql specific for upgrading
 > your version you are using a version too early for a soft upgrade
 > and need to do a HARD UPGRADE.
 >
 > The PostGIS_Full_Version function should inform you about the need
 > to run this kind of upgrade using a "procs need upgrade" message.
 >
 > By HARD UPGRADE we mean full dump/reload of postgis-enabled
 > databases.

-- 
http://about.me/jeremy_schneider





--
Adrian Klaver
adrian.kla...@aklaver.com



n_mod_since_analyze

2018-12-06 Thread Ron



https://www.postgresql.org/docs/9.6/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW

Does the field n_mod_since_analyze use "mod" instead of "upd" because it 
includes inserts, updates and deletes?


Thanks

--
Angular momentum makes the world go 'round.



Re: Limiting full join to one match

2018-12-06 Thread Phil Endecott

John W Higgins wrote:

On Wed, Dec 5, 2018 at 4:34 PM Phil Endecott <
spam_from_pgsql_li...@chezphil.org> wrote:


Dear Experts,

I have a couple of tables that I want to reconcile, finding rows
that match and places where rows are missing from one table or the
other:

...




So my question is: how can I modify my query to output only two rows,
like this?

+++++
|date| amount |date| amount |
+++++
| 2018-01-01 |  10.00 | 2018-01-01 |  10.00 |
| 2018-02-01 |   5.00 |||
||| 2018-03-01 |   8.00 |
| 2018-04-01 |   5.00 | 2018-04-01 |   5.00 |
| 2018-05-01 |  20.00 | 2018-05-01 |  20.00 |  1
| 2018-05-01 |  20.00 | 2018-05-01 |  20.00 |  2
+++++



Evening Phil,

Window functions are your friend here. I prefer views for this stuff - but
subqueries would work just fine.

create view a_rows as (select *,
   row_number() OVER (PARTITION BY date, amount) AS pos
from a);
create view b_rows as (select *,
   row_number() OVER (PARTITION BY date, amount) AS pos
from b);

select
  a_rows.date,
  a_rows.amount,
  a_rows.pos,
  b_rows.date,
  b_rows.amount,
  b_rows.pos
from
  a_rows full join b_rows using (date,amount,pos);



Thanks John, that's great.  I'm a little surprised that there isn't an
easier way, but this certainly works.


Regards, Phil.


Re: Limiting full join to one match

2018-12-06 Thread Phil Endecott

Hi Ron,

Ron wrote:

On 12/05/2018 06:34 PM, Phil Endecott wrote:

Dear Experts,

I have a couple of tables that I want to reconcile, finding rows
that match and places where rows are missing from one table or the
other:

db=> select * from a;
+++
|date| amount |
+++
| 2018-01-01 |  10.00 |
| 2018-02-01 |   5.00 |  <-- missing from b
| 2018-04-01 |   5.00 |
+++

db=> select * from b;
+++
|date| amount |
+++
| 2018-01-01 |  10.00 |
| 2018-03-01 |   8.00 |  <-- missing from a
| 2018-04-01 |   5.00 |
+++

db=> select a.date, a.amount, b.date, b.amount from a full join b using 
(date,amount);
+++++
|date| amount |date| amount |
+++++
| 2018-01-01 |  10.00 | 2018-01-01 |  10.00 |
| 2018-02-01 |   5.00 |||
||| 2018-03-01 |   8.00 |
| 2018-04-01 |   5.00 | 2018-04-01 |   5.00 |
+++++

This works fine until I have multiple items with the same date
and amount:

db=> select * from a;
+++
|date| amount |
+++
| 2018-01-01 |  10.00 |
| 2018-02-01 |   5.00 |
| 2018-04-01 |   5.00 |
| 2018-05-01 |  20.00 |  <--
| 2018-05-01 |  20.00 |  <--
+++


db=> select * from b;
+++
|date| amount |
+++
| 2018-01-01 |  10.00 |
| 2018-03-01 |   8.00 |
| 2018-04-01 |   5.00 |
| 2018-05-01 |  20.00 |  <--
| 2018-05-01 |  20.00 |  <--
+++


What's your PK on "a" and "b"?


These input tables can have duplicate rows, so defining a primary key
requires something like a row ID or similar.


(Also, gmail seems to think that all -- or at least most -- of your email is 
spam.)


Yes, it is becoming increasingly difficult to persuade gmail etc. that
you are not a spammer if you run your own mail server.  If you have any
interesting headers suggesting exactly what they disliked about my message,
could you please forward them off-list?  Thanks.


Regards, Phil.



Re: order of reading the conf files

2018-12-06 Thread bhargav kamineni
Thanks a lot Stephen.

On Thu, Dec 6, 2018 at 8:53 PM Stephen Frost  wrote:

> Greetings,
>
> * Thomas Kellerer (spam_ea...@gmx.net) wrote:
> > Stephen Frost schrieb am 06.12.2018 um 15:52:
> > > The regular postgresql.conf file is read first, then
> > > postgresql.auto.conf and then pg_hba.conf and pg_ident.conf.  We can't
> > > read pg_hba.conf/pg_ident.conf before reading postgresql.conf and
> > > postgresql.auto.conf because their location is specified in
> > > postgresql.conf/postgresql.auto.conf.
> >
> > When are the .conf files read that are included from within
> "postgresql.conf"?
> > The manual is not clear about that.
>
> At the time we hit the 'include' line.
>
> > Are they processed before "postgresql.auto.conf" or after?
>
> postgresql.auto.conf is always last.
>
> Thanks!
>
> Stephen
>


Re: Limiting full join to one match

2018-12-06 Thread Ravi Krishna


> Yes, it is becoming increasingly difficult to persuade gmail etc. that
> you are not a spammer if you run your own mail server.  If you have any
> interesting headers suggesting exactly what they disliked about my
> message, could you please forward them off-list?  Thanks.

It is for this reason (and a few others) that I am off gmail and other
free email accounts. I have tried gmail / outlook / yahoo / aol, and all
of them mark many mails to this list as spam.

I like fastmail a lot, and $3 per month is practically free.


Re: How to build a btree index with integer values on jsonb data?

2018-12-06 Thread Laurenz Albe
Johann Spies wrote:
> How can I transform the following definition to index pubyear as
> integer and not text?
> 
> CREATE INDEX pubyear_idx
> ON some_table_where_data_field_is_of_type_jsonb USING btree
> ((((((data -> 'REC'::text) -> 'static_data'::text) ->
> 'summary'::text) -> 'pub_info'::text) ->> '@pubyear'::text) COLLATE
> pg_catalog."default");
> 
> While I can cast the value to integer in a SELECT statement, I have not
> been able to do the same while creating the index.

Replace

  COLLATE pg_catalog."default"

with

  ::integer

> Why btree index?  I want to do queries like
> 
> select stuff from sometable where pubyear between 2015 and 2018;

Because b-tree indexes are perfect for >= and <=.
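Putting the two together, the index definition would look roughly like this (an untested sketch derived from the original statement; it assumes every '@pubyear' value actually casts cleanly to integer, since the cast is evaluated at index build time):

```sql
CREATE INDEX pubyear_int_idx
ON some_table_where_data_field_is_of_type_jsonb USING btree
(((data -> 'REC' -> 'static_data' -> 'summary'
        -> 'pub_info' ->> '@pubyear')::integer));
```

A query can then use the index as long as its WHERE clause repeats the same expression, e.g. `WHERE (data -> 'REC' -> 'static_data' -> 'summary' -> 'pub_info' ->> '@pubyear')::integer BETWEEN 2015 AND 2018`.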

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: n_mod_since_analyze

2018-12-06 Thread Laurenz Albe
Ron wrote:
> https://www.postgresql.org/docs/9.6/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW
> 
> Does the field n_mod_since_analyze use "mod" instead of "upd" because it 
> includes inserts, updates and deletes?

Yes.

It is the number that triggers autoanalyze, and all data modifications impair
the accuracy of the statistics.
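For example, to see which tables have accumulated the most changes since their last analyze (a sketch against the pg_stat_all_tables view from the linked docs):

```sql
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, n_mod_since_analyze
FROM pg_stat_all_tables
ORDER BY n_mod_since_analyze DESC
LIMIT 10;
```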

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




psql profiles?

2018-12-06 Thread Matt Zagrabelny
Greetings,

I'm setting up my workstation to use "gss" for auth to a variety of Pg
systems on different hosts.

I'd rather not have to specify the "-h" for a connection:

psql -h db-host-1.example.com foo

I'd rather do:

psql foo

and have it know that I connect to foo on host db-host-1.example.com.

Is this possible with psql or do I hack together some wrapper script?

Thanks for any input!

-m


Re: psql profiles?

2018-12-06 Thread Alvaro Herrera
On 2018-Dec-06, Matt Zagrabelny wrote:

> I'd rather do:
> 
> psql foo
> 
> and have it know that I connect to foo on host db-host-1.example.com.
> 
> Is this possible with psql or do I hack together some wrapper script?

Sure, just define a pg_service.conf file.
https://www.postgresql.org/docs/11/libpq-pgservice.html
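A minimal sketch of such a file, using the host name from the question (the port and file location are assumptions; the file can also live where PGSERVICEFILE points):

```ini
# ~/.pg_service.conf
[foo]
host=db-host-1.example.com
dbname=foo
port=5432
```

After that, `psql service=foo` (or `PGSERVICE=foo psql`) connects without -h.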

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: psql profiles?

2018-12-06 Thread Matt Zagrabelny
On Thu, Dec 6, 2018 at 4:24 PM Alvaro Herrera 
wrote:

> On 2018-Dec-06, Matt Zagrabelny wrote:
>
> > I'd rather do:
> >
> > psql foo
> >
> > and have it know that I connect to foo on host db-host-1.example.com.
> >
> > Is this possible with psql or do I hack together some wrapper script?
>
> Sure, just define a pg_service.conf file.
> https://www.postgresql.org/docs/11/libpq-pgservice.html


Thanks Alvaro!

Is there any shorter version than:

psql "service=foo"

?

If not, I can make a shell alias that puts the "service=$@" into the
command.

Thanks again!

-m


Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-06 Thread s400t
Hello Community!

I am trying to use phpPgAdmin (ver. 5.1 with PHP 5.6.39, server CentOS 7,
client Win-10 (Japanese)) to import a tab delimited text file (Excel ->
save as tab delimited text; also used notepad to save the same text file
with UTF-8 encoding).

Earlier, I had created a table (no columns have double quotes, not mixed
case), for example, like this:

CREATE TABLE spec (
  rec_id VARCHAR(32) PRIMARY KEY NOT NULL,
  title_category VARCHAR(255),
  doc_type VARCHAR(255),
  etc);

In my tab delimited text file, the first row has column names (without "")
and starting from the second row, data, some with double quotes, some with
commas, some with brackets.

When I uploaded the file for phpPgAdmin to import, I got the following
error:

//--
ERROR: column "rec_id" of relation "spec" does not exist
LINE 1: INSERT INTO "public"."spec" ("rec_id","title_c...
//--

Why that extra "" in the field name? I don't have it in my text file!
How/where can I tell phpPgAdmin not to add that extra "" around the field
name?

Thanks for reading and suggestions.

Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-06 Thread Adrian Klaver

On 12/6/18 7:54 PM, s4...@yahoo.co.jp wrote:

Hello Community!

I am trying to use phpPgAdmin  (ver. 5.1 with PHP 5.6.39, server CentOS 
7, client Win-10 (Japanese)) to import a tab delimited text file (Excel 
-> save as tab delimited text;  also used notepad to save the same text 
file UTF-8 encoding) .


Earlier. I had created a table (no columns have double quotes, not mixed 
case), for example, like this:


CREATE TABLE spec (
rec_id VARCHAR(32)PRIMARY KEY NOT NULL,
title_category VARCHAR(255),
doc_type VARCHAR(255),
etc);

In my tab delimited text file, first row has column names (without "") 
and starting from second row, data, some with double quotes, some with 
comma, some with brackets.


When I uploaded the file for the phpPgAdmin to import, I got the 
following error:


//--
ERROR: column "rec_id" of relation "spec" does not exist
LINE 1: INSERT INTO "public"."spec" ("rec_id","title_c...
//--

Why that extra "" in the field name? I don't have it in my text file!


Most GUI tools I am familiar with quote identifiers by default.


How/where can I tell the phpPgAdmin not to add that extra "" around the 
field name?


I don't know. I have been under the impression that phpPgAdmin was no 
longer maintained/in use.


You might have more luck here:

https://sourceforge.net/p/phppgadmin/discussion/115884
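As an aside, the file can also be loaded without the GUI entirely; a sketch using psql's \copy, assuming the poster's table and a tab-delimited file named spec.txt with a header row (the file name is hypothetical):

```sql
\copy spec FROM 'spec.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER true)
```

Unlike server-side COPY, \copy reads the file from the client machine, which suits a workstation upload like this one.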




Thanks for reading and suggestions.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: querying both text and non-text properties

2018-12-06 Thread Tony Shelver
I would suggest doing testing out btree_gin with a non-insignificant amount
of data before going ahead with it.

I did a test case last month, and the size of the generated index was
_much_ bigger than the base table.
The case involved a compound key of 1 int column and 1 timestamp range
column.

On Wed, 5 Dec 2018 at 16:02, Rob Nikander  wrote:

>
>
> > On Dec 4, 2018, at 4:59 PM, Laurenz Albe 
> wrote:
> >
> > You have two options:
> >
> > A combined index:
> >
> >  CREATE EXTENSION btree_gin;
> >  CREATE INDEX ON fulltext USING gin (to_tsvector('english', doc), color);
> >
> > That is the perfect match for a query with
> >
> >  WHERE color = 'red' AND to_tsvector('english', doc) @@
> to_tsquery('english', 'word');
> >
> > But you can also create two indexes:
> >
> >  CREATE INDEX ON fulltext USING gin (to_tsvector('english', doc));
> >  CREATE INDEX ON fulltext (color);
> >
> > Then you don't need the extension, and PostgreSQL can still use them for
> the search,
> > either only one of them if the condition is selective enough, or a
> "BitmapAnd" of both.
>
> Thanks! I will try both these methods and compare the performance.
>
> Rob
>


Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-06 Thread rob stone
Hello,

On Thu, 2018-12-06 at 21:29 -0800, Adrian Klaver wrote:
> On 12/6/18 7:54 PM, s4...@yahoo.co.jp wrote:
> 
> Most GUI tools I am familiar with quote identifiers by default.
> > How/where can I tell the phpPgAdmin not to add that extra "" around
> > the 
> > field name?
> 
> I don't know. I have been under the impression that phpPgAdmin was
> no 
> longer maintained/in use.
> 
> You might have more luck here:
> 
> https://sourceforge.net/p/phppgadmin/discussion/115884
> 
> > 
> > Thanks for reading and suggestions.
> 
> 

If you look at phppgadmin on sourceforge, the tarball files are all
dated April 15th., 2013.
It appears to have stalled at Postgres version 9.2.
I doubt if it will run on versions 10 or 11.

Cheers,
Rob