Re: Remote Connection Help

2019-11-21 Thread Mark Johnson
As I recall, if the listening address is set to '*' but is showing
localhost, then the problem you describe is likely due to a missing IPv6
entry in pg_hba.conf.  In my case, I just added a line to pg_hba.conf like
this:
host    all             all             ::1/128                 md5

So even though my client app is on the db server and the connection string
uses an IPv4 address, the connection request still reached PostgreSQL as
IPv6 and failed until I added the line shown above.
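
If you are on PostgreSQL 10 or later, you can confirm from psql that the
::1 line was actually picked up (a quick check of my own, not from the
original exchange):

SELECT line_number, type, database, user_name, address, auth_method
FROM pg_hba_file_rules
WHERE address = '::1';

-- Note: pg_hba_file_rules reads the file on disk; run
-- SELECT pg_reload_conf(); to make a newly added line take effect.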

Did your netstat output have two lines for the port numbers used by
PostgreSQL or just one of them?  My computer has two like this,
$ netstat -nlt | grep 5432
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN
tcp6       0      0 ::1:5432                :::*                    LISTEN



On Thu, Nov 21, 2019 at 1:41 PM Jason L. Amerson 
wrote:

> Yes "listen_addresses" is not commented. I did notice when I did the
> netstat, for tcp, it was all "127.0.0.1" on various ports including 5432
> but I have a listing for tcp6 that has my static IP using port 32305. Would
> that make a difference?
>
> Jason L. Amerson
>
> -Original Message-
> From: Tom Lane 
> Sent: Thursday, November 21, 2019 01:18 PM
> To: Jason L. Amerson 
> Cc: 'Steve Crawford' ; 'Adrian Klaver' <
> adrian.kla...@aklaver.com>; 'PostgreSQL' <
> pgsql-general@lists.postgresql.org>
> Subject: Re: Remote Connection Help
>
> "Jason L. Amerson"  writes:
> > I connected to PostgreSQL locally. I ran “show listen_addresses;” and it
> returned “localhost.” I ran “show port;” and it returned “5432.” I am now
> confused. I edited the “postgresql.conf” file and changed the setting to
> ‘*’. Then I restarted the server with “service postgresql restart.” I was
> in root since I had to edit the config files. I thought maybe I edited the
> wrong file, like maybe there were two in two different locations or
> something. I ran “show config_file;” and it returned
> “/usr/local/psql/data/postgresql.conf.” That is the same file I edited from
> the start. To be sure, I edited the file by using “nano
> /usr/local/psql/data/postgresql.conf.” I went down and found that I did
> have it as “listen_addresses = ‘*’,” yet when I run “show listen_addresses”,
> it shows “localhost.” I am confused. When I run “netstat -nlt”, the results
> show that it is listening on “127.0.0.1:5432.”
>
> According to what you wrote here, you did everything right, so it's
> something you failed to mention.
>
> One thing I'm wondering is whether you removed the comment symbol (#) from
> the listen_addresses line when you edited it.  As installed,
> postgresql.conf is pretty much all comments.
>
> You might get more insight from
>
> select * from pg_settings where name = 'listen_addresses';
>
> particularly the source, sourcefile, sourceline fields.
>
> regards, tom lane
>
>
>
>


Re: Handling time series data with PostgreSQL

2020-10-07 Thread Mark Johnson
I think the OP may be referring to Oracle's Temporal Validity feature.
This type of feature has yet to be implemented in PostgreSQL (see
https://www.postgresql.org/docs/13/unsupported-features-sql-standard.html
item T181).

Temporal Validity allows you to add a time dimension to any table, and only
display rows of data that are valid for the requested time period.
Oracle's implementation of Temporal Validity uses the PERIOD FOR clause in
CREATE TABLE, ALTER TABLE, and SELECT statements as illustrated below:

CREATE TABLE EMPLOYEE
(
   ID        NUMBER PRIMARY KEY,
   TAX_ID    VARCHAR2(10),
   HIRE_DATE TIMESTAMP,
   TERM_DATE TIMESTAMP,
   PERIOD FOR EMP_VALID_TIME (HIRE_DATE, TERM_DATE)
);

SELECT * FROM EMPLOYEE
   VERSIONS PERIOD FOR EMP_VALID_TIME
   BETWEEN TO_TIMESTAMP('06-OCT-2013', 'DD-MON-YYYY')
   AND TO_TIMESTAMP('31-OCT-2013', 'DD-MON-YYYY');

        ID TAX_ID     HIRE_DATE                  TERM_DATE
---------- ---------- -------------------------- --------------------------
         1 123456789  06-OCT-13 12.00.00.00 AM   07-NOV-15 12.00.00.00 AM
         2 222456789  07-OCT-13 12.00.00.00 AM
         4 44400
         5 505050505  30-OCT-13 12.00.00.00 AM   31-OCT-13 12.00.00.00 AM
         6 666999666  30-SEP-13 12.00.00.00 AM   31-DEC-13 12.00.00.00 AM

The above feature requires Oracle 12 or higher.  SQL Server 2016 and later
offer similar temporal table support.  In earlier releases of each DBMS we
accomplished much the same thing by adding pairs of timestamp columns to
each table and then writing our own code to handle row filtering.
Partitioning isn't needed for this.  Partitioning by range could certainly
be used, but it would still require some manual effort.
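
In PostgreSQL you can approximate the same behavior today with a range
column; a minimal sketch (the table and column names here are mine, purely
illustrative, not from the standard or from Oracle):

CREATE TABLE employee (
    id         integer PRIMARY KEY,
    tax_id     varchar(10),
    valid_time tstzrange NOT NULL
);

-- rows whose validity period overlaps the requested window
SELECT *
FROM employee
WHERE valid_time && tstzrange('2013-10-06', '2013-10-31');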

-Mark

On Wed, Oct 7, 2020 at 10:41 AM Stephen Frost  wrote:

> Greetings,
>
> * Jayaram (jairam...@gmail.com) wrote:
> > So, Do we need the timescaleDB as mandatory to handle time series data?
> Is
> > there any way to handle hourly to days,months,yearly data with PGSQL
> alone
> > without timescale addon?
>
> Certainly there is, and a lot of people do it - what isn't clear is what
> it is you feel is missing from PG when it comes to handling time series
> data..?  Generally speaking there have been concerns about PG's ability
> to handle lots of partitions (which comes from there being very large
> amounts of data being stored), but v12 and v13 have made great
> improvements in that area and it's not nearly as much of an issue any
> longer (and performs better in quite a few cases than extensions).
>
> > Ours is a new project and we are unsure whether we should have both
> > TimescaleDB and PGSQL, or whether PGSQL alone is capable of handling
> > this time series data by tuning the right indexes, etc.
>
> Partitioning and index tuning in PG (look at using BRIN if you haven't
> already...) is important when you get to larger data volumes.
>
> Thanks,
>
> Stephen
>
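
A minimal sketch of the BRIN suggestion above (the table and column names
are mine, purely illustrative): BRIN indexes stay tiny because they store
one summary per range of table blocks, which works well when rows arrive
in roughly timestamp order, as time-series data usually does.

CREATE TABLE sensor_reading (
    recorded_at timestamptz NOT NULL,
    sensor_id   integer     NOT NULL,
    reading     double precision
);

CREATE INDEX sensor_reading_brin
    ON sensor_reading USING brin (recorded_at);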


Re: Hot backup in PostgreSQL

2020-10-22 Thread Mark Johnson
User-managed backups in PostgreSQL work very similarly to what you know from
Oracle.  You first place the cluster in backup mode, then copy the database
files, and lastly take the cluster out of backup mode.  The first and last
steps are done with the functions pg_start_backup('label', false, false) and
pg_stop_backup(false, false) [1].
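
A minimal sketch of that sequence from psql (a non-exclusive backup, using
the pre-15 function names; PostgreSQL 15 renamed these to pg_backup_start
and pg_backup_stop):

SELECT pg_start_backup('nightly', false, false);
-- copy the database files at the OS level now; with exclusive = false
-- this same session must stay open until the backup is stopped
SELECT pg_stop_backup(false, false);
-- pg_stop_backup returns the backup_label contents, which must be saved
-- into the copied data directory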

If you use a utility supplied with PostgreSQL such as pg_basebackup, it
does these steps for you.  If you are using a non-PostgreSQL utility
(e.g., Dell NetWorker or IBM Tivoli), see its documentation for
specifics.

[1]
https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-BACKUP.


On Thu, Oct 22, 2020 at 11:14 AM Adam Brusselback 
wrote:

> > how to do "hot backup" (copying files) while database running?
> As others have shown, there are ways to do this with PG's internal tooling
> (pg_basebackup).
>
> However, I would highly recommend you use an external backup tool like
> pgbackrest [1] to save yourself the pain of implementing things incorrectly
> and ending up with non-viable backups when you need them most. I'm not
> affiliated with them at all, but have just used pgbackrest in production
> for years now with great results.  It takes care of PITR, and manages
> backup retention (and associated WAL retention). Those can be a bit of a
> pain to do manually otherwise.
>
> Just my $0.02, hope it helps!
>
> 1. https://pgbackrest.org/
>


Re: Discovering postgres binary directory location

2020-11-12 Thread Mark Johnson
On any given server there could be zero, one, or many PGHOME/bin
locations.  The OP wants to identify all of them.  The default location
used by package-based installers is different from the default location of
software built from source, and when building from source you can specify a
non-default location.  My server has PG 9.6, 12.1, and 13 from RPM and also
12.1 from source.

The solution might be as easy as finding all installations of a core
PostgreSQL executable like pg_ctl.  I would not search for psql since you
will find one or more links in system folders and so your monitoring tool
would need some logic to filter out the links.  The word postgres is both
an executable and a user directory name.  Hence I am using pg_ctl in my
example.  This example was written on CentOS 7.7.

# find / -name pg_ctl
/usr/pgsql-13/bin/pg_ctl
/usr/local/pgsql/bin/pg_ctl
/usr/pgsql-12/bin/pg_ctl
/usr/pgsql-9.6/bin/pg_ctl
/root/Downloads/postgresql-12.1/src/bin/pg_ctl
/root/Downloads/postgresql-12.1/src/bin/pg_ctl/pg_ctl
You have mail in /var/spool/mail/root

Notice above my server has three RPM installs and one source code install.
Also notice the results contain two false positives (in the Downloads
directory) and also a message about mail.  You'll have to figure out how to
separate the good and bad results.

To strip off the file name and return only the directory, you can use the
-printf option as shown below; if it isn't supported on your system, pipe
the output through a command like sed or awk.  This does not remove false
positives or system messages.

find / -name 'pg_ctl' -printf "%h\n"
/usr/pgsql-13/bin
/usr/local/pgsql/bin
/usr/pgsql-12/bin
/usr/pgsql-9.6/bin
/root/Downloads/postgresql-12.1/src/bin
/root/Downloads/postgresql-12.1/src/bin/pg_ctl
You have mail in /var/spool/mail/root

Careful when stripping out the false positives.  PostgreSQL installed from
source can be pretty much anywhere including a Downloads directory, /tmp,
and so on.  In my case, the Downloads directory has a src sub-directory,
which tells me it is just a staging area for source code.

-Mark

On Thu, Nov 12, 2020 at 1:22 AM Paul Förster 
wrote:

> Hi Raul, hi Adrian,
>
> > On 11. Nov, 2020, at 23:26, Adrian Klaver 
> wrote:
> >
> > On 11/11/20 2:22 PM, Raul Kaubi wrote:
> >> Hi
> >> CentOS 7
> >> Postgres 9 to 12
> >> I am looking ways to universally discover postgresql binary directory
> for monitoring purpose.
> >> For example postgres 12, it is: */usr/pgsql-12*
> >
> > pg_config --bindir
> > /usr/local/pgsql12/bin
>
> or by query:
>
> postgres=# select setting from pg_config where name = 'BINDIR';
>  setting
> -
>  /data/postgres/12.4/bin
> (1 row)
>
> Cheers,
> Paul
>
>


Re: pg_dump - how to force to show timestamps in client log

2020-11-20 Thread Mark Johnson
This all sounds like a previous discussion on pg hackers about a progress
meter for pg_dump.  Search the archives for that discussion.  Also, search
the web for something like "pg_dump progress meter" and you'll get a few
suggestions, such as piping to pv, although that does not appear to work
with all of the file formats supported by pg_dump.

What do you see in pg_stat_activity?  It's been a while since I tried to
monitor a running pg_dump.
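
For example, from a second session (a sketch; pg_dump identifies itself
through application_name on modern versions):

SELECT pid, state, query_start, left(query, 60) AS current_query
FROM pg_stat_activity
WHERE application_name = 'pg_dump';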

Also, if you redirect the output to a file, doesn't the file's modification
timestamp get updated each time something is written?  (That's what I used
to do with Oracle before it added timestamps.)

On Fri, Nov 20, 2020 at 5:00 PM Ron  wrote:

> On 11/20/20 3:39 PM, Adrian Klaver wrote:
> > On 11/20/20 1:00 PM, Ron wrote:
> >> On 11/20/20 2:56 PM, Adrian Klaver wrote:
> >>> On 11/20/20 10:01 AM, Durumdara wrote:
>  Hello!
> 
>  We need to log the pg_dump's state.
>  What objects are in copy, and what are the starting and ending times.
> 
>  But when I try to redirect the output, the result doesn't have
> timestamps.
> 
>  PG 11, on Windows.
> 
>  As I see the -v option isn't enough to see the starting times.
> 
>  For example:
> 
>  2020-11-19 12:00:01.084 Dump table content table1
>  2020-11-19 12:03:12.932 Dump table content table2
>  ...
>  etc.
> >>>
> >>>
> >>> If you are redirecting to a file, it will have a creation time that
> >>> you can use. Internally, times don't really matter for the objects,
> >>> as the dump is based on a snapshot. Said snapshot is based on visible
> >>> transactions, not time. So for practical purposes they all occur at
> >>> the same 'time'.
> >>
> >> It makes all the difference when monitoring the progress of a backup.
> >>
> >
> > With -v you will get a running list of objects dumped, just not the
> > time. The time is only of value relative to the following. Progress
> > will only be measurable by determining what is left to run and the time
> > for each object. Not sure that is feasible, as you would have to
> > pre-run the dump to get information about the number of objects and an
> > estimate of the data quantity involved, and the effect of each on the
> > other. I could see that estimate getting worse the bigger the data set
> > (and hence the more you cared) got, because at some point the load on
> > the machine would affect the output speed of the dump.
>
> By knowing the sizes of the tables, and how long it takes to back up the
> first "some" tables, one can forecast how long it takes to back up the
> whole database.
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Re: Dynamic procedure execution

2020-12-29 Thread Mark Johnson
Don't you have to select into a variable and then return the variable to
the client per [1]?

Consider the following example from my Oracle system:

beginning code ...
  V_SQL := 'SELECT COUNT(*) FROM ' || V_TAB;
  EXECUTE IMMEDIATE V_SQL INTO V_CNT;
ending code ...

[1]
https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
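
A rough PL/pgSQL equivalent of that pattern, written as functions so the
values actually reach the client (the function names, and the count_rows
helper, are illustrative, not from the thread):

CREATE OR REPLACE FUNCTION count_rows(p_tab regclass)
RETURNS bigint
LANGUAGE plpgsql
AS $$
DECLARE
    v_cnt bigint;
BEGIN
    -- a regclass argument interpolated with %s arrives safely quoted
    EXECUTE format('SELECT count(*) FROM %s', p_tab) INTO v_cnt;
    RETURN v_cnt;
END;
$$;

-- and for a whole result set, RETURN QUERY EXECUTE does the job:
CREATE OR REPLACE FUNCTION select_dynamic(p_id integer)
RETURNS SETOF los_bankinfo
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT * FROM los_bankinfo WHERE pk_id = $1' USING p_id;
END;
$$;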


On Tue, Dec 29, 2020 at 11:40 AM Adrian Klaver 
wrote:

> On 12/28/20 10:34 PM, Muthukumar.GK wrote:
>
> Please do not top post, the style on this list is bottom/inline posting.
> > Hi Adrian Klaver,
> >
> > Sorry for the typo. Instead of writing the lengthy query, I had written
> > a simplified one. Actually my main goal is to return a result set with
> > multiple rows (using a select query) with the help of a dynamic query.
> >
> > When calling that procedure in the pgAdmin4 window, I simply get the
> > message ‘CALL Query returned successfully in 158 msec’.
> >
> > FYI, I have implemented simple dynamic queries for UPDATE and DELETE
> > rows. They work fine without any issues.
> >
> > Please let me know if there is any way of getting a result set using a
> > dynamic query.
> >
> > _Issue with dynamic select:-_
> >
> >
> > CREATE OR REPLACE Procedure sp_select_dynamic_sql(
> >
> >  keyvalue integer)
> >
> >  LANGUAGE 'plpgsql'
> >
> > AS $BODY$
> >
> > Declare v_query text;
> >
> > BEGIN
> >
> >  v_query:= 'select * from Los_BankInfo '
> >
> >  || ' where pk_id = '
> >
> >  || quote_literal(keyvalue);
> >
> >  execute v_query;
> >
> >  END;
> >
> > $BODY$;
> >
> > _Execution of Proc:-_
> >
> > CALL sp_select_dynamic_sql (11);
> >
> > _Output:-_
> >
> > CALL
> >
> > Query returned successfully in 158 msec.
>
> See here:
>
>
> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING-PROCEDURE
>
> "
> 42.6.2. Returning from a Procedure
>
> A procedure does not have a return value. A procedure can therefore end
> without a RETURN statement. If you wish to use a RETURN statement to
> exit the code early, write just RETURN with no expression.
>
> If the procedure has output parameters, the final values of the output
> parameter variables will be returned to the caller.
> "
>
> So use a function and follow the docs here:
>
>
> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING
>
> in particular:
>
> "42.6.1.2. RETURN NEXT and RETURN QUERY"
>
> >
> > _Working fine with Dynamic UPDATE and DELETE Statement :-_
> >
> >
> > _UPDATE:-_
> >
> >
> > CREATE OR REPLACE Procedure sp_Update_dynamic_sql(
> >
> >  newvalue varchar(10),
> >
> >  keyvalue integer)
> >
> >  LANGUAGE 'plpgsql'
> >
> > AS $BODY$
> >
> > Declare v_query text;
> >
> > BEGIN
> >
> >  v_query:= 'update Los_BankInfo set approverid'
> >
> >  || ' = '
> >
> >  || quote_literal(newvalue)
> >
> >  || ' where pk_id = '
> >
> >  || quote_literal(keyvalue);
> >
> >  execute v_query;
> >
> >  END;
> >
> > $BODY$;
> >
> > --CALL sp_Update_dynamic_sql (john,10);
> >
> > _DELETE:-_
> >
> >
> > CREATE OR REPLACE Procedure sp_Delete_dynamic_sql(
> >
> >  keyvalue integer)
> >
> >  LANGUAGE 'plpgsql'
> >
> > AS $BODY$
> >
> > Declare v_query text;
> >
> > BEGIN
> >
> >  v_query:= 'delete from Los_BankInfo '
> >
> >  || ' where pk_id = '
> >
> >  || quote_literal(keyvalue);
> >
> >  execute v_query;
> >
> >  END;
> >
> > $BODY$;
> >
> > --CALL sp_Delete_dynamic_sql(10);
> >
> >
> >
> > Regards
> >
> > Muthu
> >
> >
> > On Mon, Dec 14, 2020, 8:54 PM Adrian Klaver  > > wrote:
> >
> > On 12/13/20 9:59 PM, Muthukumar.GK wrote:
> >  > Hi team,
> >  >
> >  > When I am trying to implement the below dynamic concept in
> >  > PostgreSQL, I am getting an error. Kindly find the attached program
> >  > and error.
> >  > Please advise me what is wrong here.
> >  >
> >  > CREATE OR REPLACE PROCEDURE DynamicProc()
> >  >
> >  > AS $$
> >  >
> >  > DECLARE v_query TEXT;
> >  >
> >  > C1 refcursor := 'result1';
> >  >
> >  > begin
> >  >
> >  > v_query := '';
> >  >
> >  > v_query := ' OPEN C1 FOR SELECT * FROM public."Bankdetails"';
> >  >
> >  > EXECUTE (v_query);
> >  >
> >  > END;
> >  >
> >  > $$
> >  >
> >  > Language plpgsql;
> >  >
> >  >   Calling procedure :-
> >  >
> >  > 
> >  >
> >  > CALL DynamicProc();
> >  >
> >  > FETCH ALL IN "result1";
> >  >
> >  >
> >  > Error :-
> >  >
> >  > --
> >  >
> >  > ERROR: syntax error at or near "OPEN"
> >  >
> >  > LINE 1: OPEN C1 FOR S

Re: Postgres Analog of Oracle APPEND hint

2021-02-25 Thread Mark Johnson
Since INSERT /*+ APPEND */ is generally used when bulk loading data into
Oracle from external files, you should probably look at the PostgreSQL COPY
command (https://www.postgresql.org/docs/13/sql-copy.html) and additional
utilities like pg_bulkload (https://github.com/ossc-db/pg_bulkload).
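
For instance, a minimal COPY sketch (the path and column list are
illustrative):

COPY employee (id, tax_id, hire_date)
FROM '/tmp/employee.csv'
WITH (FORMAT csv, HEADER true);

Server-side COPY reads the file as the server's OS user; from psql, the
\copy meta-command does the same thing with a client-side file.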

On Thu, Feb 25, 2021 at 9:45 AM Rumpi Gravenstein 
wrote:

> All,
>
> Using PostgreSQL 13.1
>
> I am new to PostgreSQL transitioning from Oracle.  One of the many Oracle
> tricks I learned is that large inserts can be sped up by adding the direct
> path load hint /*+APPEND*/ .  I am faced with having to perform many large
> inserts (100K->100M rows) in my PostgreSQL database.
>
> My questions are:
>
>- Is there something comparable within the PostgreSQL community
>edition product?
>- Are my only options to un-log the table and tune instance memory
>parameters?
>
> I've googled for this and can't find a definitive statement on this
> point.
>
> --
> Rumpi Gravenstein
>


Re: TPC-DS queries

2019-03-14 Thread Mark Johnson
I found this error in queries generated from templates query36.tpl,
query70.tpl, and query86.tpl.  The problem is that lochierarchy is an alias
defined in the SELECT list, and the alias isn't recognized inside the CASE
expression in the ORDER BY clause.  PostgreSQL allows an output-column
alias in ORDER BY only when it stands alone; it cannot be referenced inside
an expression such as a CASE, so you have to repeat the underlying
expression.  Modify each of the queries throwing errors, replacing the
lochierarchy alias with the expression you see in the SELECT list.
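
A minimal reproduction of the rule, independent of TPC-DS:

SELECT 1 + 1 AS s ORDER BY s;       -- OK: the alias stands alone
SELECT 1 + 1 AS s ORDER BY s + 0;   -- ERROR: column "s" does not exist
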
-Mark



On Mon, Mar 11, 2019 at 4:00 AM Tatsuo Ishii  wrote:

> > Hi,
> >
> > I think that the sql is not valid. Based on the order by
> > documentation, a column label cannot be used in an expression.
> >
> > from https://www.postgresql.org/docs/11/queries-order.html
> >  > Note that an output column name has to stand alone, that is, it
> > cannot be used in an expression.
>
> Thanks. Yes, you are correct. The line should be something like:
>
>,case when grouping(i_category)+grouping(i_class) = 0 then i_category
> end
>
> > Regards
> > s.
> >
> > On 11.03.2019 06:30, Tatsuo Ishii wrote:
> >> I played with TPC-DS and found some of them can't be executed because
> >> of SQL errors and I am not sure why.
> >>
> >> For example with query 36:
> >>
> >> select
> >>  sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin
> >> ,i_category
> >> ,i_class
> >> ,grouping(i_category)+grouping(i_class) as lochierarchy
> >> ,rank() over (
> >>  partition by grouping(i_category)+grouping(i_class),
> >>  case when grouping(i_class) = 0 then i_category end
> >>  order by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as
> >>  rank_within_parent
> >>   from
> >>  store_sales
> >> ,date_dim   d1
> >> ,item
> >> ,store
> >>   where
> >>  d1.d_year = 2000
> >>   and d1.d_date_sk = ss_sold_date_sk
> >>   and i_item_sk  = ss_item_sk
> >>   and s_store_sk  = ss_store_sk
> >>   and s_state in ('TN','TN','TN','TN',
> >>   'TN','TN','TN','TN')
> >>   group by rollup(i_category,i_class)
> >>   order by
> >> lochierarchy desc
> >>,case when lochierarchy = 0 then i_category end -- line 25 is here.
> >>,rank_within_parent
> >>limit 100;
> >> psql:query_0.sql:1935: ERROR:  column "lochierarchy" does not exist
> >> LINE 25:   ,case when lochierarchy = 0 then i_category end
> >>
> >> I have follwed the instruction here.
> >> https://ankane.org/tpc-ds
> >>
> >> PostgreSQL is master branch HEAD. For me, the SQL above looks to be
> >> valid.
> >>
> >> Best regards,
> >> --
> >> Tatsuo Ishii
> >> SRA OSS, Inc. Japan
> >> English: http://www.sraoss.co.jp/index_en.php
> >> Japanese: http://www.sraoss.co.jp
> >>
> >
>
>


Re: ignore tablespace in schema definition queries

2021-04-03 Thread Mark Johnson
The solution depends on how you are creating the tables.

For example, pg_restore has the option --no-tablespaces.  With this option,
all objects will be created in whichever tablespace is the default during
restore.  pg_dump has a similar option.

If you are running CREATE TABLE statements that have hard-coded
tablespaces, then maybe pass your scripts through the sed or awk utility to
replace the name with pg_default just prior to execution.

Or, go ahead and create all possible tablespaces before running the CREATE
TABLE statements, since in PostgreSQL a tablespace is just a pointer to a
directory, not a set of preallocated data files as in Oracle or SQL Server.
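
For example (a sketch; the paths are illustrative, each LOCATION must be
an existing empty directory owned by the postgres OS user, and CREATE
TABLESPACE cannot run inside a transaction block, so issue the statements
individually):

CREATE TABLESPACE tablespace_000 LOCATION '/var/lib/pgsql/ts/000';
CREATE TABLESPACE tablespace_001 LOCATION '/var/lib/pgsql/ts/001';
-- ... and so on, up to tablespace_999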

On Sat, Apr 3, 2021 at 6:59 AM Joao Miguel Ferreira <
joao.miguel.c.ferre...@gmail.com> wrote:

> Hello all,
>
> I have a big set of migration queries (that I do not control) that I must
> run on my automatic test database in order to set it up and run tests.
> These queries create all sorts of things like indexes and tables. But
> they also include the specification of the tablespace they expect to use
> (tablespace_000, tablespace_001, up to tablespace_999). This would require
> me to set up hundreds of tablespaces before I can start the migration
> process and run the tests.
>
> Is there a way to tell the postgres server to ignore that part and just
> use some default tablespace? My present situation is that I cannot bring
> the test database to a usable point because many migration queries fail,
> since the tablespace they need has not been created. My problem is that I
> would like to avoid creating them.
>
> Thanks
> João
>
> --
- Mark