Metric to calculate WAL size left to transfer to Standby

2021-04-15 Thread Viral Shah
Hello All,

We have a PostgreSQL 10.12 cluster of servers in two different data
centers. Lately, when a large amount of WAL is generated, we are seeing
replication delay between the master and the standby server, and these
delays have been lasting an unusually long time. Is there a metric that
can calculate the amount (size) of WAL still to be transferred from
master to standby?

PS: We have already upgraded our firewalls for better transfer speed.

Any help on how to figure out the slowness in the WAL transfer would
be much appreciated.

Thanks,
Viral Shah


Vulnerability PostgreSQL 11.2

2021-04-15 Thread Moris Rumenov Vrachovski
Hello,

I am not even sure if I am contacting the right person; maybe you can
direct me to the right person.
My vulnerability scanner is telling me this with regard to PostgreSQL:

Path              : C:\Program Files\PostgreSQL\11
Installed version : 11.2
Fixed version     : 11.11


I have contacted the support team for my vulnerability scanner, but it
seemed worth emailing you as well, in case this might be an error on
your end.

Thank you!
-- 
Moris Vrachovski


Re: Vulnerability PostgreSQL 11.2

2021-04-15 Thread Magnus Hagander
On Thu, Apr 15, 2021 at 12:32 PM Moris Rumenov Vrachovski wrote:
>
> Hello,
>
> I am not even sure if I am contacting the right person maybe you can direct 
> me to the right person.
> My vulnerability scanner is telling me this in regards to PostgreSQL
>
> Path : C:\Program Files\PostgreSQL\11 Installed version : 11.2 Fixed version 
> : 11.11
>
>
> I have contacted my support team for my vulnerability scanner. But it was 
> worth emailing you guys as well just in case this might be an error on your 
> end as well.


Hello!

There are multiple vulnerabilities between 11.2 and 11.11, as you can
see on https://www.postgresql.org/support/security/11/. So it sounds
like your vulnerability scanner is right and that you need to install
the updates.
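If it helps, the exact version the running server reports can be checked directly from any client session (a quick sketch, useful before and after applying the update):

```sql
-- Ask the running server for its exact version string
SELECT version();
-- Or just the version number
SHOW server_version;
```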


-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: Vulnerability PostgreSQL 11.2

2021-04-15 Thread Laurenz Albe
On Wed, 2021-04-14 at 16:10 -0700, Moris Rumenov Vrachovski wrote:
> I am not even sure if I am contacting the right person maybe you can direct 
> me to the right person.
> My vulnerability scanner is telling me this in regards to PostgreSQL
> 
> Path  : C:\Program Files\PostgreSQL\11
>   Installed version : 11.2
>   Fixed version : 11.11
> 
> 
> I have contacted my support team for my vulnerability scanner. But it was 
> worth emailing you guys as well just in case this might be an error on your 
> end as well. 

I think that your scanner is telling you to install the latest fix for v11,
and it is right to complain.

Read https://www.postgresql.org/support/versioning/

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Metric to calculate WAL size left to transfer to Standby

2021-04-15 Thread Laurenz Albe
On Wed, 2021-04-14 at 17:50 -0400, Viral Shah wrote:
> We have a PostgreSQL 10.12 cluster of servers in two different data centers.
>  Lately, when a large amount of WAL is generated, we are seeing replication
>  delay between the master and the standby server, and these delays have been
>  lasting an unusually long time. Is there a metric that can calculate the
>  amount (size) of WAL still to be transferred from master to standby?
> 
> PS: We have ensured we have upgraded our firewalls for better speed transfer.
> 
> Any help on how to figure out the slowness in the WAL transfer would be much 
> appreciated.

SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn) AS transfer_lag,
   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag
FROM pg_stat_replication;

If both are delayed, it might be that the network cannot cope.

If only the second number is delayed, you have replication conflicts
with queries on the standby.
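For routine monitoring, the same byte counts can be wrapped in pg_size_pretty() for readability; note also that in v10, pg_stat_replication additionally exposes write_lag, flush_lag, and replay_lag as intervals. A sketch of the prettified form:

```sql
-- Same lag figures, per standby, as human-readable sizes
SELECT application_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn))  AS transfer_lag,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag
FROM pg_stat_replication;
```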

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Timestamp/hstore query?

2021-04-15 Thread Mike Rylander
On Wed, Apr 14, 2021 at 5:33 PM Brent Wood  wrote:
>
> Thanks for your reply,
>
> The table is essentially:
> create table readings (timer     timestamp primary key,
>                        readings  hstore);
>
> the hstore comprises ( ) key/value pairs for readings 
> taken at the time specified in the timestamp.
>
> eg:  "67" "-45.67436", "68" "176.5424" could be key/value pairs representing 
> latitude & longitude, with a timestamp in the timer column.
>
> There would be several lat/lon hstore pairs in a given minute, the query I 
> want would return the last one in the timeseries for that minute (for each 
> key).
>
> I don't think your examples will give me the separate hstore key-value pairs, 
> extracted as the last in the interval for each key & reassembled as an hstore 
> list in the result. The sensor id is the hstore key, as described above, not 
> a separate attribute. That said, the keys can be extracted from the hstore 
> much like a column, but I'm not sure that is the best approach.
>
> Treating each set of hstores in an interval as an array & extracting the last 
> elements may be viable. But I'm not sure how...
>

Hi Brent,

With the table structure like that, you'll need to project all the
sensor data onto each timestamp and then collapse it.  Something like
this:

  SELECT date_trunc('minute',timer), key, first_value(value) over (partition
by key order by timer desc) FROM (SELECT timer, (each(readings)).*
FROM table WHERE date_trunc('minute',timer) = '2021-04-15
08:10:00-04')x;

If your queries will always target a specific timestamp truncated to
the minute, you'll find an index on date_trunc('minute',timer) will be
useful, I think.  If your intervals are more complex than date_trunc()
can handle then you'll have to get more creative, but that's the basic
structure -- normalize the primary key, project the sensor data to
make it table-ish, then use regular SQL to pull out what you want.
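A minimal sketch of such an expression index, assuming the readings table described upthread (the index name is arbitrary; this works because timer is a plain timestamp, for which date_trunc() is immutable):

```sql
-- Expression index matching a WHERE date_trunc('minute', timer) = ... filter
CREATE INDEX readings_minute_idx ON readings (date_trunc('minute', timer));
```

On a timestamptz column, date_trunc() is only stable (its result depends on the session time zone), so that CREATE INDEX would be rejected and a different approach would be needed.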

--
Mike Rylander
 | Research and Development Manager
 | Equinox Open Library Initiative
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  mi...@equinoxoli.org
 | web:  https://equinoxOLI.org

>
>
> On Thursday, April 15, 2021, 6:33:08 AM GMT+12, Michael Lewis 
>  wrote:
>
>
>
>
>
> If you share example schema and desired output (like a dummy table or even 
> pseudo code SQL), then I'm sure many people could help you. Right now, the 
> description of your desired result seems a bit unclear, at least to me.
>
>
> If you wanted to run this hourly for the last 1 hour, it sounds a bit like 
> want this-
>
> select sensor_id, date_trunc( 'minute', timestamptz_field_name_here ), last( 
> value_from_hstore ) over ( partition by sensor_id, date_trunc( 'minute', 
> timestamptz_field_name_here ) ) as last_value_recorded
> from data_table_here
> where timestamptz_field_name_here BETWEEN NOW() - interval '1 hour' and NOW()
> group by sensor_id, date_trunc( 'minute', timestamptz_field_name_here )
>
>
> You could also use the left join "where is null" pattern to check that a 
> prior record in the minute period of time does not exist for that same key. 
> Something like this-
>
> select d1.sensor_id, date_trunc( 'minute', d1.timestamptz_field_name_here ), 
> d1.timestamptz_field_name_here as last_value_recorded
> from data_table_here as d1
> left join data_table_here as prior_d1 ON prior_d1.sensor_id = d1.sensor_id 
> AND prior_d1.timestamptz_field_name_here < d1.timestamptz_field_name_here and 
> prior_d1.timestamptz_field_name_here >= date_trunc( 'minute', 
> d1.timestamptz_field_name_here )
> where d1.timestamptz_field_name_here BETWEEN NOW() - interval '1 hour' and 
> NOW()
>
>
> >
>
>




Re: Why is writing JSONB faster than just JSON?

2021-04-15 Thread Dmitry Dolgov
> On Wed, Apr 14, 2021 at 10:26:23AM -0400, Tom Lane wrote:
> Dmitry Dolgov <9erthali...@gmail.com> writes:
> >> On Tue, Apr 13, 2021 at 11:38:04PM -0700, Mitar wrote:
> >> ... Namely, it looks like writing into a jsonb typed
> >> column is 30% faster than writing into a json typed column. Why is
> >> that? Does not jsonb require parsing of JSON and conversion? That
> >> should be slower than just storing a blob as-is?
>
> > * Parsing is happening in both cases, for json it's done for validation
> >   purposes.
>
> Right.
>
> > * Compression of json data is actually dominates the runtime load for large
> >   json objects, making other factors less visible and reducing difference in
> >   size (jsonb also should be a bit bigger, that's why I would think it 
> > would be
> >   slower).
> > * At the same time perf shows that json compression spends a bit more time 
> > in
> >   pglz_find_match (mostly around accessing and checking history entries), 
> > which
> >   is compression optimization via history table. So probably due to 
> > differences
> >   in data layout this optimization works slightly worse for plain json than 
> > for
> >   jsonb?
>
> Interesting.  I recall that we made some last-minute changes in the JSONB
> physical representation after finding that the original choices resulted
> in sucky compression behavior.  But I think we were only looking at the
> resultant compressed size, not time-to-compress.
>
> My own guess about this, without having tried to reproduce it, is that
> JSONB might end up physically smaller than JSON, resulting in less work
> to push the toasted datum out to disk.  This'd depend a lot on your
> formatting habits for JSON, of course.  But in any case, it'd be worth
> comparing pg_column_size() results to see what's up with that.

Oh, of course: I had missed that the input I was using was indeed
formatted. Without formatting, both cases perform equally well and I
can't reproduce the issue. Although, if I understand correctly, the
original code in question doesn't actually do any formatting.




Pg_upgrade problem.

2021-04-15 Thread obi reddy
When I am trying to upgrade 9.6 to 10, I'm getting this error. Please
help me out:

pg_upgrade check for "/opt/postgresql/10/bin/passwd" failed: no such file
or directory failure, exiting


Re: Why is writing JSONB faster than just JSON?

2021-04-15 Thread Tom Lane
Dmitry Dolgov <9erthali...@gmail.com> writes:
> On Wed, Apr 14, 2021 at 10:26:23AM -0400, Tom Lane wrote:
>> My own guess about this, without having tried to reproduce it, is that
>> JSONB might end up physically smaller than JSON, resulting in less work
>> to push the toasted datum out to disk.  This'd depend a lot on your
>> formatting habits for JSON, of course.  But in any case, it'd be worth
>> comparing pg_column_size() results to see what's up with that.

> Oh, of course I've missed that the input I was using was indeed
> formatted, without formatting both cases perform equally well and I
> can't reproduce the issue. Although if I understand correctly the
> original code in question doesn't actually do any formatting.

My point was that for JSON, after validating that the input is
syntactically correct, we just store it as-received.  So in particular
the amount of whitespace in the value would depend on how the client
had chosen to format the JSON.  This'd affect the stored size of
course, and I think it would have an effect on compression time too.
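That effect is easy to see with pg_column_size(): the json type keeps whatever whitespace it received, while jsonb normalizes it away in its binary form. A sketch with an arbitrary padded literal:

```sql
-- json stores the input text verbatim (whitespace included);
-- jsonb stores a normalized binary representation
SELECT pg_column_size('{ "a" : 1 , "b" : [ 1 , 2 , 3 ] }'::json)  AS json_bytes,
       pg_column_size('{ "a" : 1 , "b" : [ 1 , 2 , 3 ] }'::jsonb) AS jsonb_bytes;
```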

regards, tom lane




Query processing in postgresql

2021-04-15 Thread Rama Krishnan
Hi Guys,

How is a SELECT statement processed in PostgreSQL?


Thanks

RK


Re: Query processing in postgresql

2021-04-15 Thread Adrian Klaver

On 4/15/21 10:04 AM, Rama Krishnan wrote:

Hi Guys,

How the select statement has been processed in the postgresql


https://www.postgresql.org/developer/backend/

Click on tags to get more information.




Thanks

RK



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pg_upgrade problem.

2021-04-15 Thread Adrian Klaver

On 4/15/21 9:43 AM, obi reddy wrote:
When I am trying to upgrade 9.6 to 10 . I'M getting this error , please 
help me out


What OS and version?

Where did you install Postgres from?



pg_upgrade check for "/opt/postgresql/10/bin/passwd" failed: no such 
file or directory failure, exiting


AFAIK, passwd is not a regular binary in the community edition. Have you 
installed an extension in the 9.6 instance that you have not installed 
in the 10 instance? Or something else along that line?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pg_upgrade problem.

2021-04-15 Thread obi reddy
Yeah, I have installed 10, and the OS version is 6.4.

> > When I am trying to upgrade 9.6 to 10 . I'M getting this error , please
> > help me out
>
> What OS and version?
>
> Where did you install Postgres from?
>
> >
> > pg_upgrade check for "/opt/postgresql/10/bin/passwd" failed: no such
> > file or directory failure, exiting
>
> AFAIK, passwd is not a regular binary in the community edition. Have you
> installed an extension in the 9.6 instance that you have not installed
> in the 10 instance? Or something else along that line?
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Pg_upgrade problem.

2021-04-15 Thread Adrian Klaver

On 4/15/21 10:33 AM, obi reddy wrote:

Yeah I have installed 10 and OS version is 6.4.


What is the actual OS?

Where did you install 9.6 and 10 from?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pg_upgrade problem.

2021-04-15 Thread obi reddy
I have installed 10.0, and the OS is CentOS 6.4, 64-bit.

>
>
>
> What is the actual OS?
>
> Where did you install 9.6 and 10 from?
>
> >
> >  > When I am trying to upgrade 9.6 to 10 . I'M getting this error ,
> > please
> >  > help me out
> >
> > What OS and version?
> >
> > Where did you install Postgres from?
> >
> >  >
> >  > pg_upgrade check for "/opt/postgresql/10/bin/passwd" failed: no
> such
> >  > file or directory failure, exiting
> >
> > AFAIK, passwd is not a regular binary in the community edition. Have
> > you
> > installed an extension in the 9.6 instance that you have not
> installed
> > in the 10 instance? Or something else along that line?
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Pg_upgrade problem.

2021-04-15 Thread Adrian Klaver

On 4/15/21 10:49 AM, obi reddy wrote:

I have instal 10.0 side and OS is centos 6.4 64 bit


What was the source for the installs of the 9.6 and 10 instances of 
Postgres?:


1) RPM
   If so from what repo?

2) Other
   If so what and from where?









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Why is writing JSONB faster than just JSON?

2021-04-15 Thread Dmitry Dolgov
> On Thu, Apr 15, 2021 at 12:47:25PM -0400, Tom Lane wrote:
> Dmitry Dolgov <9erthali...@gmail.com> writes:
> > On Wed, Apr 14, 2021 at 10:26:23AM -0400, Tom Lane wrote:
> >> My own guess about this, without having tried to reproduce it, is that
> >> JSONB might end up physically smaller than JSON, resulting in less work
> >> to push the toasted datum out to disk.  This'd depend a lot on your
> >> formatting habits for JSON, of course.  But in any case, it'd be worth
> >> comparing pg_column_size() results to see what's up with that.
>
> > Oh, of course I've missed that the input I was using was indeed
> > formatted, without formatting both cases perform equally well and I
> > can't reproduce the issue. Although if I understand correctly the
> > original code in question doesn't actually do any formatting.
>
> My point was that for JSON, after validating that the input is
> syntactically correct, we just store it as-received.  So in particular
> the amount of whitespace in the value would depend on how the client
> had chosen to format the JSON.  This'd affect the stored size of
> course, and I think it would have an effect on compression time too.

Yes, I got it, and just wanted to confirm you were right: this was the
reason for the slowdown I observed while trying to reproduce the report.




Re: Pg_upgrade problem.

2021-04-15 Thread obi reddy
It's a "one-click" installer.

>
> What was the source for the installs of the 9.6 and 10 instances of
> Postgres?:
>
> 1) RPM
> If so from what repo?
>
> 2) Other
> If so what and from where?
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Pg_upgrade problem.

2021-04-15 Thread Adrian Klaver

On 4/15/21 2:00 PM, obi reddy wrote:

It's a "oneclic "


From here?:

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads

To get an answer you will need to provide as much basic information as 
you can. So in addition to what you have provided we need:


1) Is the 9.6 instance also from EDB(enterprisedb.com)?

2) Have you created any extensions in the 9.6 instance and if so what?

3) Is there a '/opt/postgresql/96/bin/passwd'?

4) Is there a '/opt/postgresql/10/bin/passwd'?

5) What is the complete pg_upgrade command you are using?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Timestamp/hstore query?

2021-04-15 Thread Brent Wood
Thanks Mike,
That approach seems to work very well!!!
There are something on the order of 40 billion readings in the table.
This returns an arbitrary minute's data, as required, in under 30 ms:
SELECT DISTINCT date_trunc('minute', _timer) AS timer,
       key::int,
       first_value(value) OVER (PARTITION BY key ORDER BY _timer DESC)
FROM (SELECT timer AS _timer,
             (each(v.values)).*
      FROM v_sec v
      WHERE timer BETWEEN '2020-04-15 08:10:00'
                      AND '2020-04-15 08:10:59') AS x
ORDER BY key::int;

v_sec is a view on a table with 30+ years of 1 second readings from dozens of 
sensors, columns are timer (timestamp) & values (hstore).
I used "BETWEEN timestamps" rather than "= date_trunc()" in the WHERE clause, as
this uses the index on timestamps and so is MUCH faster.

FYI, we are using Timescaledb to manage the underlying data and data structures.


Appreciated!!!
Brent Wood


On Friday, April 16, 2021, 12:28:41 AM GMT+12, Mike Rylander 
 wrote:  
 

Re: Pg_upgrade problem.

2021-04-15 Thread obi reddy
Thank you for your response. The problem is resolved; I tried another way and
it's working fine.
Thank you so much
OBIREDDY. G


> It's a "oneclic "
>
>  From here?:
>
> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
>
> To get an answer you will need to provide as much basic information as
> you can. So in addition to what you have provided we need:
>
> 1) Is the 9.6 instance also from EDB(enterprisedb.com)?
>
> 2) Have you created any extensions in the 9.6 instance and if so what?
>
> 3) Is there a '/opt/postgresql/96/bin/passwd'?
>
> 4) Is there a '/opt/postgresql/10/bin/passwd'?
>
> 5) What is the complete pg_upgrade command you are using?
>
>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Query processing in postgresql

2021-04-15 Thread Rama Krishnan
Thank you so much, Adrian.

Thanks

Rama Krishnan

On Thu, 15 Apr 2021, 22:40 Adrian Klaver wrote:

> On 4/15/21 10:04 AM, Rama Krishnan wrote:
> > Hi Guys,
> >
> > How the select statement has been processed in the postgresql
>
> https://www.postgresql.org/developer/backend/
>
> Click on tags to get more information.
>
> >
> >
> > Thanks
> >
> > RK
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: where clauses including timestamptz and intervals

2021-04-15 Thread Niels Jespersen
>-Original Message-
>From: Francisco Olarte
>Sent: 10 April 2021 09:31
>To: Niels Jespersen
>
>Although it is equivalent I would suggest to use:
>
>f.r_time< ('2020-10-01 00:00:00+00'::timestamptz at time zone 'utc' + interval 
>'1 month') at time zone 'utc' ;
>
>The optimizer should treat both the same, but I'm not sure it can recognize it,
>and it may not see that your condition is of the type "field < constant" needed
>to select index scans and similar things. Also, you can refactor your
>calculations into a stable function for greater clarity.

The planner seems to be able to figure this out nicely.
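The rewritten upper bound from the quoted suggestion can also be sanity-checked on its own, e.g.:

```sql
-- One calendar month past 2020-10-01 00:00 UTC, computed in UTC,
-- then mapped back to a timestamptz
SELECT ('2020-10-01 00:00:00+00'::timestamptz AT TIME ZONE 'utc'
        + interval '1 month') AT TIME ZONE 'utc' AS upper_bound;
```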
>
>> I think maybe it's better to just change the session timezone when this need 
>> arises.
>
>Maybe. Also, I do not see the need to query exact partitions by tstz range, 
>but assuming you do it may be much easier to query the catalog for the 
>partition and then query the partition. ( In many years of working with 
>time-partitioned tables I have never got the need to build a query to hit 
>exactly X partitions from times ).
>
I need to hit an exact partition, because the query performs some QA on data
coming from a file that fits that exact timeslot. File and partition must match
exactly, because the file may be replaced later and the partition truncated.

Thank you for your insights.

Niels

>Francisco Olarte.


Re: Query processing in postgresql

2021-04-15 Thread Thomas Kellerer
Rama Krishnan wrote on 15.04.2021 at 19:04:
> How the select statement has been processed in the postgresql

http://www.interdb.jp/pg/pgsql03.html