Re: Work hours?

2019-08-28 Thread Steve Atkins



> On Aug 27, 2019, at 11:27 PM, stan  wrote:
> 
> I am just starting to explore the power of PostgreSQL's time and date
> functionality. I must say they seem very powerful.
> 
> I need to write a function that, given a month and a year as input, returns
> the "work hours" in that month. In other words, something like
> 
> 8 * (the count of all days in the range Monday to Friday) within that
> calendar month.
> 
> Any thoughts as to the best way to approach this?

You might find this useful: 
https://gist.github.com/wttw/b6f5d0d67c31d499c05f22a4f2b6f628

It's not the most efficient approach, but it's relatively simple to customize.

Cheers,
  Steve





Re:

2019-08-28 Thread Fabio Pardi
Hi,


If you have access to the OS, then you have plenty of options.

Otherwise, I think pg_cron might do the job.
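
For example, a rough sketch (untested; it assumes the extension is already
installed and loaded via shared_preload_libraries, and the schedule is just a
placeholder):

CREATE EXTENSION IF NOT EXISTS pg_cron;

-- cron.schedule(schedule, command) returns the job id;
-- '0 3 * * *' means every day at 03:00
SELECT cron.schedule('0 3 * * *', 'VACUUM ANALYZE');

-- inspect or remove jobs later
SELECT jobid, schedule, command FROM cron.job;
-- SELECT cron.unschedule(<jobid>);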


regards,

fabio pardi



On 28/08/2019 08:58, Sonam Sharma wrote:
> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.




RE: vacuum in background

2019-08-28 Thread ROS Didier
Hi
It is possible to use background workers with the pg_background extension.
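
A rough sketch, assuming the extension is installed (the table name and pid are
placeholders):

CREATE EXTENSION IF NOT EXISTS pg_background;

-- launch VACUUM in a separate background worker; returns the worker's pid
SELECT pg_background_launch('VACUUM VERBOSE my_table');

-- later, fetch the result, or detach if you do not care about it
-- SELECT * FROM pg_background_result(<pid>) AS (result text);
-- SELECT pg_background_detach(<pid>);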

Best Regards
Didier ROS
EDF

From: sonams1...@gmail.com [mailto:sonams1...@gmail.com]
Sent: Wednesday, 28 August 2019 08:59
To: pgsql-general 
Subject:

Is there any option to run reindex or vacuum in background?
Every time the session gets logged off in between.





Re: psql \copy hanging

2019-08-28 Thread Arnaud L.

On 27/08/2019 at 13:17, Arnaud L. wrote:

I moved the offending line to the end of the script, so it will run some
minutes later; maybe this will be enough.


OK, so this was enough for last night's schedule to run without problem.
I still don't get it so I'm not satisfied with this solution, but at 
least it works.

I'll keep the list informed if something new arises.

Thanks to everyone for helping!

Cheers
--
Arnaud




Re: vacuum in background

2019-08-28 Thread Sonam Sharma
Thanks, will try both answers.

On Wed, Aug 28, 2019, 12:38 PM ROS Didier  wrote:

> Hi
>
> It is possible to use background workers with the pg_background extension.
>
>
>
> Best Regards
>
> Didier ROS
>
> EDF
>
>
>
> From: sonams1...@gmail.com [mailto:sonams1...@gmail.com]
> Sent: Wednesday, 28 August 2019 08:59
> To: pgsql-general 
> Subject:
>
>
>
> Is there any option to run reindex or vacuum in background?
>
> Every time the session gets logged off in between.


Re: psql \copy hanging

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 9:09 AM Arnaud L.  wrote:
> OK, so this was enough for last night's schedule to run without problem.
> I still don't get it so I'm not satisfied with this solution, but at
> least it works.
> I'll keep the list informed if something new arises.

I don't want to be pedantic, but I would have tried with a single
change at a time.
And my bet is: the local file would do the trick (i.e., it is a weird
share problem).

If you are going to invest some time, you could also try to write a
small file on the share just before the copy starts, so that you are
guaranteed the share is working. Something like:

echo $(date) >> $SHARE/log.txt
psql 'copy ...'
echo 'done' >> $SHARE/log.txt

Luca




Re: psql \copy hanging

2019-08-28 Thread Arnaud L.

On 28/08/2019 at 09:43, Luca Ferrari wrote:

I don't want to be pedantic, but I would have tried with a single
change at a time.
And my bet is: the local file would do the trick (i.e., it is a weird
share problem).


You're not, don't worry. This process is quite important in our workflow 
(not critical), so I can't really afford to run tests for weeks.


Moving the offending line to the end of the script is what I'd consider a 
"single change". To me it makes one thing clear: the problem occurs 
only at the very specific time this command is running.




If you are going to invest some time, you could also try to write a
small file on the share just before the copy starts, so that you are
guaranteed the share is working. Something like:

echo $(date) >> $SHARE/log.txt
psql 'copy ...'
echo 'done' >> $SHARE/log.txt


Well, I do know that the share is working just before this command runs 
because ALL the commands in the script write to this specific share.
The script is basically outputting many views results in a single share, 
something like :

\copy (select * from view1) TO '\\myserver\myshare\file1.csv'
\copy (select * from view2) TO '\\myserver\myshare\file2.csv'
...
\copy (select * from view99) TO '\\myserver\myshare\file99.csv'

So the \copy command right before (say view15 for instance) is writing 
to this same share just milliseconds before the problematic command 
(view16) tries to do the same.
Since this particular view takes some time to execute, there is some 
time between the moment the file gets created on the share (which 
happens immediately when the \copy command runs if I get it right), and 
the moment psql receives content and starts writing to it (my tests 
suggest 1min to 1min30s). Either psql doesn't receive anything 
(possible, since the connection is marked as active but it does not look 
as if it's doing anything at all), or there has been some timeout.
It could have been tcp keepalive, but in Windows the default is 2h I 
believe and postgresql uses system default if nothing is specified in 
conf (which is my case).
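
For reference, libpq also accepts client-side keepalive parameters in the 
connection string, so if keepalives ever did become the suspect they could be 
made more aggressive from the psql side. A rough, untested sketch with 
placeholder connection details (not every parameter is honoured on Windows):

psql "host=dbserver dbname=mydb keepalives=1 keepalives_idle=60 keepalives_interval=10" -f export_views.sql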


So with all this in mind I'd rather think I have a problem with either 
psql's \copy or with my query on the server side. But I'm not ruling 
anything out, of course.


One other thing I could try is using COPY TO STDOUT \g. From what I 
understand in the documentation this would not be 100% similar to what 
\copy is doing.
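
Something like this, I suppose (an untested sketch; the view name is the same 
placeholder as above, and the output goes to a local file here because \g's 
filename quoting rules differ a bit from \copy's):

-- today: client-side \copy in a single step
\copy (select * from view16) TO '\\myserver\myshare\file16.csv'

-- alternative: run COPY ... TO STDOUT as a normal statement and let psql
-- redirect its output to a file with \g
COPY (select * from view16) TO STDOUT \g file16.csv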

Anyway, if it works with the current setup I won't dig into it much deeper.

Cheers
--
Arnaud











Re: cannot CREATE INDEX because it has pending trigger events

2019-08-28 Thread Simon Kissane
On Tue, Aug 27, 2019 at 5:59 PM Laurenz Albe  wrote:
>
> On Tue, 2019-08-27 at 12:00 +1000, Simon Kissane wrote:
> > We have an application that works fine with Postgres 9.6, but fails
> > with this error when we try installing it against 11.5
> >
> > I simplified the problem down to the following reproduce script:
> >
> > BEGIN TRANSACTION;
> > CREATE TABLE resource (resource_id BIGINT NOT NULL PRIMARY KEY,
> > resource_type BIGINT NOT NULL);
> > ALTER TABLE ONLY resource ADD CONSTRAINT resource_type_fk FOREIGN KEY
> > (resource_type) REFERENCES resource (resource_id) DEFERRABLE
> > INITIALLY DEFERRED;
> > INSERT INTO resource (resource_id,resource_type) values (1,1);
> > INSERT INTO resource (resource_id,resource_type) values (2,1);
> > INSERT INTO resource (resource_id,resource_type) values (3,2);
> > CREATE UNIQUE INDEX IF NOT EXISTS resource_type_2_singleton ON
> > resource (resource_type) WHERE resource_type=2;
> > COMMIT;
> >
> > That script works fine in Postgres 9.6, but run it against 11.5 you
> > get the error:
> >
> > ERROR:  cannot CREATE INDEX "resource" because it has pending trigger
> > events
> > STATEMENT:  CREATE UNIQUE INDEX IF NOT EXISTS
> > resource_type_2_singleton ON resource (resource_type) WHERE
> > resource_type=2;
>
> This is fallout of commit 0d1885266630:
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0d1885266630eee1de5c43af463fe2b921451932
>
> This commit is the fix for a bug:
> https://www.postgresql.org/message-id/flat/A737B7A37273E048B164557ADEF4A58B53A4DC9A%40ntex2010i.host.magwien.gv.at
>
> This might be a false positive hit or not, I am not certain.
> Maybe the check is not required for AFTER triggers.
In that bug, an index is being created in a trigger. I can
certainly see how that might lead to index corruption.

But an FK constraint trigger (assuming ON UPDATE NO
ACTION / ON DELETE NO ACTION) is not making any
data change, so there is no way it could possibly corrupt
an index.
it is also banning scenarios which have no possibility of
triggering it. One check might be to see if the
function/procedure of the trigger in question is defined
STABLE (which means it can't modify any table data or
schema). (I don't know if the auto-generated FK
constraint triggers would be marked as STABLE or not,
but, if they are NO ACTION they could be.) If the
trigger is STABLE, then index corruption would be
impossible.

> Anyway, the problem can be avoided by running
>
>   SET CONSTRAINTS resource_type_fk IMMEDIATE;
>
> right before the CREATE INDEX, so I don't think it is a real problem.
In the real app, there are a lot more than just one FK; I
removed the rest in my reproduce script. But, you are right, I
could probably then do something like
SET CONSTRAINTS ALL IMMEDIATE;

I think it is a real problem that code which used to work fine
stops working on upgrade. Ideally it should still work, or,
if for some reason that is impossible, at least the documentation
should explain it.

Thanks
Simon




Re: Work hours?

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 12:27 AM stan  wrote:
> Any thoughts as to the best way to approach this?

I've written a couple of functions to compute working hours depending
on a possible per-day hour template.
Another possible implementation besides the other proposed solutions.


Luca




How to log 'user time' in postgres logs

2019-08-28 Thread francis picabia
Recently had a problem where autovacuum was accidentally left off
and the database took 6 seconds for every task from PHP.
I had no clue the database was the issue because I
had the minimal

log_duration = on
log_line_prefix = '<%t>'

With those settings all queries seen were roughly 1ms

I need this log to show the true time it takes to get a result back.

In the Linux world we have the time command which shows the user
time reflecting all overhead added up.  I'd like postgres to show
times like that and then if there are problems I can look further,
change logging details, etc..

I checked docs, googled,  and didn't see anything obvious.
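
For reference, the standard statement-logging settings only capture time as 
measured on the server, not the client-perceived time; a sketch of what a 
slightly more detailed server-side setup might look like:

log_min_duration_statement = 0    # log every statement together with its duration
log_line_prefix = '<%m %p %a> '   # %m = timestamp with ms, %p = PID, %a = application_name
log_lock_waits = on               # log statements waiting longer than deadlock_timeout for a lock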


Re:

2019-08-28 Thread David G. Johnston
On Tue, Aug 27, 2019 at 11:59 PM Sonam Sharma  wrote:

> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.
>

There is not - though you can make your auto-vacuum parameters super
aggressive.
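
A rough sketch of what "aggressive" could look like for a single hot table (the
table name is a placeholder; the defaults for the two scale factors are 0.2 and 0.1):

ALTER TABLE my_big_table SET (
    autovacuum_vacuum_scale_factor  = 0.01,  -- vacuum after ~1% of rows are dead
    autovacuum_analyze_scale_factor = 0.02,  -- analyze after ~2% of rows change
    autovacuum_vacuum_cost_delay    = 0      -- do not throttle the worker on this table
);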

I'm having trouble imagining a scenario where running reindex or vacuum
arbitrarily at session end would be a useful thing.

David J.


Re:

2019-08-28 Thread Tom Lane
Sonam Sharma  writes:
> Is there any option to run reindex or vacuum in background?
> Every time the session gets logged off in between.

If your session is getting killed by a network idle-time timeout,
you could probably prevent that with suitable adjustment of the
server's TCP-keepalive settings.
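
For example (illustrative values only, in postgresql.conf):

tcp_keepalives_idle = 60       # seconds of idle time before the first keepalive probe
tcp_keepalives_interval = 10   # seconds between probes
tcp_keepalives_count = 5       # unanswered probes before the connection is considered dead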

regards, tom lane




Re: Work hours?

2019-08-28 Thread Uwe Seher
select sum(case when extract(dow from t.d) in (1,2,3,4,5) then 1 else 0 end) * 8 as hours
  from generate_series(current_date::date, (current_date + '10 days'::interval), '1 day'::interval) as t(d)

This calculates the working days/hours between 2 dates. You can turn the
first/last day of the month/year into a date and feed it into the series.
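
Wrapped up as a function it could look roughly like this (an untested sketch;
the function and parameter names are made up):

create or replace function work_hours(p_year int, p_month int)
returns int
language sql
as $$
    select 8 * count(*)::int
    from generate_series(
             make_date(p_year, p_month, 1),
             make_date(p_year, p_month, 1) + interval '1 month' - interval '1 day',
             interval '1 day'
         ) as t(d)
    where extract(dow from t.d) in (1,2,3,4,5);   -- Monday .. Friday
$$;

-- select work_hours(2019, 8);  -- August 2019: 22 weekdays * 8 = 176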


Bye Uwe



On Wed, 28 Aug 2019 at 00:27, stan wrote:

> I am just starting to explore the power of PostgreSQL's time and date
> functionality. I must say they seem very powerful.
>
> I need to write a function that, given a month and a year as input, returns
> the "work hours" in that month. In other words, something like
>
> 8 * (the count of all days in the range Monday to Friday) within that
> calendar month.
>
> Any thoughts as to the best way to approach this?
>
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: Question about password character in ECPG's connection string

2019-08-28 Thread Adrian Klaver

On 8/27/19 6:18 PM, Egashira, Yusuke wrote:

Hi, Giuseppe,

Thanks to response to my question!


It seems to me that ECPG documentation does not allow specifying
username and/or password in the connection string. The correct syntax
should be:

EXEC SQL CONNECT TO "unix:postgresql://localhost/connectdb" USER
"myuser" IDENTIFIED BY "pass&word"


Yes, I could connect to the database with the "USER" and "IDENTIFIED BY" phrases 
in the CONNECT statement.
However, I could also connect to the database with the password in the connection 
string when my password does not contain a '&' character.

1. In database,
> CREATE ROLE myuser LOGIN PASSWORD 'password';
2. In ECPG application,
EXEC SQL CONNECT "tcp:postgresql://localhost?user=myuser&password=password";
  -> The connection succeeded.

This behavior confuses me.


My guess is that what is happening is:

1) Given password: my&pwd

2)  "tcp:postgresql://localhost?user=myuser&password=password" looks like

 "tcp:postgresql://localhost?user=myuser&password=my&pwd"

and password is parsed on the & and you also end up with an extra 
parameter pwd


Have you tried quoting the password?




If user and password cannot be written in the connection string, what parameters 
are allowed in "connection_option"?
I hope I can get this information from the ECPG documentation.

Regards.

--
Yusuke, Egashira.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to log 'user time' in postgres logs

2019-08-28 Thread Adrian Klaver

On 8/28/19 5:36 AM, francis picabia wrote:


Recently had a problem where autovacuum was accidentally left off
and the database took 6 seconds for every task from PHP.
I had no clue the database was the issue because I
had the minimal

log_duration = on
log_line_prefix = '<%t>'

With those settings all queries seen were roughly 1ms

I need this log to show the true time it takes to get a result back.

In the Linux world we have the time command which shows the user
time reflecting all overhead added up.  I'd like postgres to show
times like that and then if there are problems I can look further,
change logging details, etc..

I checked docs, googled,  and didn't see anything obvious.


I'm having a hard time believing autovacuum was involved in this, given 
you say the queries took only 1ms on  average. That would have been the 
part that would have been impacted by bloated tables/out-of-date 
statistics.


Some questions:

1) How did you arrive at the 6 second figure?

2) Is the PHP application on the same server as the database?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to log 'user time' in postgres logs

2019-08-28 Thread Tom Lane
Adrian Klaver  writes:
> On 8/28/19 5:36 AM, francis picabia wrote:
>> I had no clue the database was the issue because I
>> had the minimal
>> log_duration = on
>> log_line_prefix = '<%t>'
>> With those settings all queries seen were roughly 1ms
>> 
>> I need this log to show the true time it takes to get a result back.

> I'm having a hard time believing autovacuum was involved in this, given 
> you say the queries took only 1ms on  average.

Also: the time logged by log_duration IS the time it took, so far as the
server can tell.

Perhaps there's monstrous network delays involved, but what I'm suspicious
of is inefficient client-side processing.

regards, tom lane




Re: Work hours?

2019-08-28 Thread Christopher Browne
On Wed, 28 Aug 2019 at 01:02, Ron  wrote:

> We did something similar to that, except all the columns were in one
> single table.  It wasn't a data warehouse, though: the RDBMS we used could
> be coerced into using a date index when large ranges were needed in detail
> tables by joining it to T_CALENDAR, and doing the range filter on
> T_CALENDAR.
>

Ah, interesting!  I like it, mostly...

The one bad thing would be that this sorta mis-matches timestamp with
timezone which is a more or less continuous data type (rather than
discrete, like date).  I could see an argument, in that environment, to put
a DATE type onto detail tables if they are inevitably being joined to
T_CALENDAR.

I recall we had a case where some reports were ridiculously inefficient
because a query involved effectively a "where date_part(something, column)"
clause that made that into a Seq Scan.

Alternatively (and I'm thinking out loud here), I wonder if putting a range
type with a pair of timestamps would help with matching, as the range type
would put the full range of each day into the table; you could have full
date/time stamps match the calendar table via the range type...

select [stuff]
  from tz_table t, t_calendar tc
 where [various stuff]
   and tc.t_workday
   and tc.t_date between '2017-01-01' and '2017-02-01'
   and t.original_tstz <@ tc.t_range;   -- to get the range bit
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


wal_level logical for streaming replication

2019-08-28 Thread Vijaykumar Jain
Hello Team,

wal_level = logical
wal_level = replica

As per docs,
wal_level determines how much information is written to the WAL. The
default value is replica, which writes enough data to support WAL archiving
and replication, including running read-only queries on a standby server.
minimal removes all logging except the information required to recover from
a crash or immediate shutdown. Finally, logical adds information necessary
to support logical decoding. Each level includes the information logged at
all lower levels. This parameter can only be set at server start.

I have had a combination where I had to upgrade a server using logical
replication and when it was done I had replicas serving read queries of the
server with wal level logical.
All fine.
I have two dumb questions.

If I change wal_level back to replica, will it corrupt the WAL? It will
then contain different information (or a different format of data?).
What is the underlying reason that DDLs are not sent via logical replication,
while a hot standby has DDL changes reflected absolutely fine?
Also, there is one large limitation on large object support for logical
replication?
Where can I see whether the limitation is on size, or is it just certain data types?

I used this to generate dummy data in a table with all data types (only
1000 rows) and it replicated fine.
I could easily stress more on this, but I did not want to do discovery and
spread rumours without known references.

Appreciate your help as always.

Thanks,
Vijay
-- 

Regards,
Vijay


Re: [ext] Re: Pointers towards identifying bulk import bottleneck (walwriter tuning?)

2019-08-28 Thread Michael Lewis
On Tue, Aug 27, 2019 at 9:45 PM Laurenz Albe 
wrote:

> Holtgrewe, Manuel wrote:
> > Switching off fsync leads to a drastic time improvement but still
> > higher wall-clock time for four threads.
>
> Don't do that unless you are ready to start from scratch with a new
> "initdb" in the case of a crash.
>
> You can do almost as good by setting "synchronous_commit = off",
> and that is crash-safe.


It seems like it depends on your definition of crash-safe. Data loss can
occur but not data corruption, right? Do you know any ballpark for how much
difference in performance it makes to turn off synchronous_commit, or for what
type of hardware or usage it would make the biggest (or least) difference?


Re: Recomended front ends?

2019-08-28 Thread Peter J. Holzer
On 2019-08-27 16:04:02 +0100, Daniele Varrazzo wrote:
> Using the Django ORM to create complex queries is a joy (especially
> nesting subqueries),

Not for me. I usually know what SQL I want to execute. Then I have to
convert that SQL into a weird[1] and limited query language composed of
method calls so that the ORM can turn it into SQL again (hopefully
something close to the SQL I had in mind). It hurts my brain. 

hp

[1] Well, SQL is weird, too. But it is weird in an "invented 40+ years
ago and grown organically since" way, not a "constrained by the syntax
of a different language" way.


-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: Recomended front ends?

2019-08-28 Thread Peter J. Holzer
On 2019-08-27 08:16:08 -0700, Adrian Klaver wrote:
> Django takes Postgres as its reference database which makes things easier,
> especially when you add in
> contrib.postgres(https://docs.djangoproject.com/en/2.2/ref/contrib/postgres/)

Looks nice.

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: How to log 'user time' in postgres logs

2019-08-28 Thread francis picabia
The server was running Moodle.  The slow load time was noticed when loading
a quiz containing multiple images.  All Apache log results showed 6 seconds,
or a multiple of 6, for how long it took to retrieve each image.

Interestingly, if I did a wget on the server to the image link (which was
processed through a pluginfile.php URL), even the returned "please login
first" HTML page consistently took 6 seconds.  Never 2, 3, 4, 5 or 7, 8, 9...
So whatever was wrong, there was a 6 second penalty for this.  We found the
problem
outside of the quizzes as well, so it wasn't a single part of the PHP
code.  In addition, our development
server with a clone of data, database, apps, etc., was fast at doing any of
the tests.
Of course a database brought in with a dump will not include any of the
cruft
DB structures, so the dev server was free of the problem (and it had fewer
cores, memory, etc).

I was not suspecting PostgreSQL as the culprit, because all query times in
the log showed
roughly 1 ms response.  I could tail -f the log file while loading a quiz
to watch the times
reported.

Last night the vacuum was run (on a database of about 40GB if dumped), and
since
then the quizzes and everything run as fast as would be expected.  It had
run
for about 1.5 months without vacuum.

Apache, PHP and Postgres all on the same server.  Memory and CPUs not
taxed, load kept level
while the problem quizzes were being tested.

Given this experience, I'd like something that reflected the true times
Postgres was spending
on any work it was doing.

The other possibility was it just didn't log the slower times, but I have
seen larger numbers
in the 10,000 ms range in the night when some backups and housekeeping
happens.

All  log related settings:
checkpoint_segments = 12
logging_collector = on
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1440
log_rotation_size = 0
log_min_messages = info
log_min_error_statement = debug1
log_duration = on
log_line_prefix = '<%t>'

I know it does sound strange, but this is what we battled with for 2.5 days
until the light
came on that the vacuum had been set to off on the target system during
server migration.


On Wed, Aug 28, 2019 at 11:07 AM Adrian Klaver 
wrote:

> On 8/28/19 5:36 AM, francis picabia wrote:
> >
> > Recently had a problem where autovacuum was accidentally left off
> > and the database took 6 seconds for every task from PHP.
> > I had no clue the database was the issue because I
> > had the minimal
> >
> > log_duration = on
> > log_line_prefix = '<%t>'
> >
> > With those settings all queries seen were roughly 1ms
> >
> > I need this log to show the true time it takes to get a result back.
> >
> > In the Linux world we have the time command which shows the user
> > time reflecting all overhead added up.  I'd like postgres to show
> > times like that and then if there are problems I can look further,
> > change logging details, etc..
> >
> > I checked docs, googled,  and didn't see anything obvious.
>
> I'm having a hard time believing autovacuum was involved in this, given
> you say the queries took only 1ms on  average. That would have been the
> part that would have been impacted by bloated tables/out-of-date
> statistics.
>
> Some questions:
>
> 1) How did you arrive at the 6 second figure?
>
> 2) Is the PHP application on the same server as the database?
>
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: How to log 'user time' in postgres logs

2019-08-28 Thread Tom Lane
francis picabia  writes:
> The server was running Moodle.  The slow load time was noticed when loading
> a quiz containing
> multiple images.  All Apache log results showed a 6 seconds or a multiple
> of 6 for how long
> it took to retrieve each image.

> Interestingly, if I did a wget, on the server, to the image link (which was
> processed through a pluginfile.php URL)
> even the HTML page returned of "please login first" took consistently 6
> seconds.  Never 2, 3, 4, 5 or 7, 8, 9...
> So whatever was wrong, there was a 6 second penalty for this.

Hmm ... some weird DNS behavior, perhaps?  That is one way to explain
a pattern like this.  How long does it take to "dig" or "nslookup"
your server name?

regards, tom lane




Re: [ext] Re: Pointers towards identifying bulk import bottleneck (walwriter tuning?)

2019-08-28 Thread Laurenz Albe
On Wed, 2019-08-28 at 12:27 -0600, Michael Lewis wrote:
> > You can do almost as good by setting "synchronous_commit = off",
> > and that is crash-safe.
> 
> It seems like it depends on your definition of crash-safe. Data loss
> can occur but not data corruption, right?

Right.

> Do you know any ballpark for how much difference in performance it
> makes to turn off synchronous_commit or what type of hardware or
> usage it would make the biggest (or least) difference?

In my experience, the performance will be almost as good as
with fsync=off, which is as good as it gets.

For an exact answer for your specific system, run a simple benchmark.
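
Since synchronous_commit can be changed without a restart, globally, per
session or even per transaction, it is easy to benchmark both ways; a sketch:

-- cluster-wide (followed by a reload):
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();

-- or only for the bulk-loading session / transaction:
SET synchronous_commit = off;
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... the bulk INSERT / COPY ...
COMMIT;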

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: wal_level logical for streaming replication

2019-08-28 Thread Laurenz Albe
On Wed, 2019-08-28 at 21:44 +0530, Vijaykumar Jain wrote:
> If I change wal_level back to replica, will it corrupt wal? coz it
> will then be having diff information ( r format of data ?)

That's why you have to restart the server when you change that
parameter.  This way, there will be a checkpoint marking the change.

> What is the base reason as to why ddls are not sent via logical
> replication but the hot standby has ddl changes reflected absolutely
> fine ?

Physical streaming replication just replicates the database files,
so it also replicates DDL statements, which are just changes to the
catalog tables.

Basically, anything that can be recovered can be replicated.

Logical replication has to perform "logical decoding", that is,
it has to translate the (physical) WAL information into logical
information (what row was modified in which table).

So this is much more complicated.  It should be possible in theory,
but nobody has got around to solving the difficulties involved yet.

> and there is one large limitation on large object support for logical
> replication?
> Where can I see the limitation on size or is it just certain data
> types ?

This has nothing to do with the size; I guess the answer is the same as
above.  One problem that I can see immediately is that primary and
standby don't share the same OIDs, yet every large object is identified
by its OID.  So I think this is a fundamental problem that cannot be
solved.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Question about password character in ECPG's connection string

2019-08-28 Thread Alban Hertroys


> 2)  "tcp:postgresql://localhost?user=myuser&password=password" looks like
> 
> "tcp:postgresql://localhost?user=myuser&password=my&pwd"
> 
> and password is parsed on the & and you also end up with an extra parameter 
> pwd

Perhaps it helps to URL-encode the & in the password as %26?

Alban Hertroys
--
There is always an exception to always.








Re: Work hours?

2019-08-28 Thread Gavin Flower

On 28/08/2019 15:22, Christopher Browne wrote:



On Tue, Aug 27, 2019, 6:27 PM stan wrote:


I am just starting to explore the power of PostgreSQL's time and date
functionality. I must say they seem very powerful.

I need to write a function that, given a month and a year as
input, returns
the "work hours" in that month. In other words, something like

8 * (the count of all days in the range Monday to Friday) within that
calendar month.

Any thoughts as to the best way to approach this?


In data warehouse applications, they have the habit of creating tables 
that have various interesting attributes about dates.


https://medium.com/@duffn/creating-a-date-dimension-table-in-postgresql-af3f8e2941ac

I'd be inclined to solve this by defining various useful sets of 
dates; you might then attach relevant attributes to a dimension table 
like the d_date table in the article.


- a table with all weekdays (say, Monday to Friday)

- a table listing statutory holidays that likely need to be excluded

These are going to be small tables even if you put 10 years worth of 
dates in it.



[...]

You don't need a whole table for weekdays.  You can easily calculate the 
number of weekdays simply from knowing the first day of the month and 
how many days are in the month.
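
A rough, untested sketch of that arithmetic (using August 2019 as the example
month): weekdays = days in the month minus the number of Saturdays and Sundays,
both derived from the first day's ISO day-of-week and the month length.

with m as (
    select extract(isodow from date '2019-08-01')::int as first_dow,
           extract(day from (date '2019-08-01'
                             + interval '1 month' - interval '1 day'))::int as n_days
)
select n_days
       - ((n_days - 1 - (6 - first_dow + 7) % 7) / 7 + 1)   -- Saturdays
       - ((n_days - 1 - (7 - first_dow + 7) % 7) / 7 + 1)   -- Sundays
       as weekdays
from m;
-- gives 22 for August 2019, i.e. 22 * 8 = 176 work hours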



Cheers,
Gavin





Re: Question about password character in ECPG's connection string

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 10:47 PM Alban Hertroys  wrote:
> Perhaps it helps to URL-encode the & in the password as %26?

The OP already tried that, without success.
Could it be necessary to escape the & with a backslash or single quotes?

Luca