Using placeholders when creating a materialized view

2018-12-29 Thread Mitar
Hi!

Is it possible to use placeholders and provide values to the query when
creating a materialized view? It looks like passing values directly to the
query is not possible, and I have to encode them into the query text myself?


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m



initialize and use variable in query

2018-12-29 Thread Glenn Schultz
All,

I need to initialize a variable and then use it in a query.  Ultimately this
will be part of a recursive CTE, but for now I just need to work this out.  I
followed the docs and thought I needed something like this, but it does not
work - maybe I have misunderstood.  Is this possible?

SET max_parallel_workers_per_gather = 8;
SET random_page_cost = 1;
SET enable_partitionwise_aggregate = on;
Do $$
Declare startdate date;
BEGIN
startdate := (select max(fctrdt) from fnmloan);
END $$;
select
fnmloan.loanseqnum
,fnmloan.currrpb
from
fnmloan
join
fnmloan_data
on
fnmloan_data.loanseqnum = fnmloan.loanseqnum
where
fnmloan.fctrdt = (select * from startdate)

limit 10


Re: initialize and use variable in query

2018-12-29 Thread Ray O'Donnell

On 29/12/2018 15:40, Glenn Schultz wrote:

All,

I need to initialize a variable and then use it in a query.
Ultimately this will be part of a recursive CTE, but for now I just need to
work this out.  I followed the docs and thought I needed something like
this, but it does not work - maybe I have misunderstood.  Is this possible?


Hi there,

What does "does not work" mean? What error do you get?



SET max_parallel_workers_per_gather = 8;
SET random_page_cost = 1;
SET enable_partitionwise_aggregate = on;
Do $$
Declare startdate date;
BEGIN
startdate := (select max(fctrdt) from fnmloan);
END $$;


A couple of things off the top of my head:

(i) I think you need "language plpgsql" (or whatever) after the DO block.

(ii) That assignment in the DO should probably be:

  select max(fctrdt) into startdate from fnmloan;


I hope this helps.

Ray.


--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie



Re: initialize and use variable in query

2018-12-29 Thread David G. Johnston
On Saturday, December 29, 2018, Glenn Schultz  wrote:

> All,
>
> I need to initialize a variable and then use it in a query.  Ultimately this
> will be part of a recursive CTE, but for now I just need to work this out.  I
> followed the docs and thought I needed something like this, but it does not
> work - maybe I have misunderstood.  Is this possible?
>
> SET max_parallel_workers_per_gather = 8;
> SET random_page_cost = 1;
> SET enable_partitionwise_aggregate = on;
> Do $$
> Declare startdate date;
> BEGIN
> startdate := (select max(fctrdt) from fnmloan);
> END $$;
>

The stuff in the DO block is plpgsql, the stuff outside is SQL.  SQL cannot
see plpgsql variables.  And the plpgsql variables cease to exist at the end
of the block anyway.

You need to use SET or set_config with a custom variable (namespaced) in
the SQL portion of the script (examples are out there somewhere, not able
to research for you presently).  Or maybe use psql and its
features/variables...
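
For example, roughly (an untested sketch; the setting name
"myapp.startdate" is arbitrary, the tables are the ones from your post):

SELECT set_config('myapp.startdate',
                  (SELECT max(fctrdt)::text FROM fnmloan),
                  false);   -- false = keep it for the whole session

SELECT fnmloan.loanseqnum, fnmloan.currrpb
FROM fnmloan
JOIN fnmloan_data ON fnmloan_data.loanseqnum = fnmloan.loanseqnum
WHERE fnmloan.fctrdt = current_setting('myapp.startdate')::date
LIMIT 10;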

David J.


Re: initialize and use variable in query

2018-12-29 Thread David G. Johnston
On Saturday, December 29, 2018, Ray O'Donnell  wrote:
>
> A couple of things off the top of my head:


Sorry but, no.

>
> (i) I think you need "language plpgsql" (or whatever) after the DO block.


As the docs state, plpgsql is the default for a DO block lacking a language
specifier.


> (ii) That assignment in the DO should probably be:
>
>   select max(fctrdt) into startdate from fnmloan;
>

The original form is perfectly valid plpgsql.

The DO block worked just fine.  It's just that everything it did was
discarded at the end, because nothing in it permanently affected the
parent SQL session.
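
One way to make a DO block leave something behind for later SQL is a
custom setting, roughly like this (untested sketch; "myapp.startdate" is
an arbitrary name):

DO $$
BEGIN
    -- set_config(..., false) survives the block and lasts for the session
    PERFORM set_config('myapp.startdate',
                       (SELECT max(fctrdt)::text FROM fnmloan),
                       false);
END
$$;

SELECT current_setting('myapp.startdate')::date;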

David J.


Re: libpq bug?

2018-12-29 Thread Igor Korot
Hi,

On Sat, Dec 29, 2018 at 1:37 AM patrick keshishian  wrote:
>
> On Fri, Dec 28, 2018 at 5:40 PM Igor Korot  wrote:
>>
>> Hi, Patrick,
>>
>> Here is my new code:
>>
>> int PostgresDatabase::GetTableOwner(const std::wstring &schemaName,
>>     const std::wstring &tableName, std::wstring &owner,
>>     std::vector<std::wstring> &errorMsg)
>> {
>>     int result = 0;
>>     std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid = c.relowner AND n.nspname = $1 AND relname = $2";
>>     char *values[2];
>>     values[0] = NULL, values[1] = NULL;
>>     int charlength1 = schemaName.length() * sizeof( wchar_t ),
>>         charlength2 = tableName.length() * sizeof( wchar_t );
>>     values[0] = new char[schemaName.length() * sizeof( wchar_t ) + 1];
>>     values[1] = new char[tableName.length() * sizeof( wchar_t ) + 1];
>>     memset( values[0], '\0', schemaName.length() * sizeof( wchar_t ) + 1 );
>>     memset( values[1], '\0', tableName.length() * sizeof( wchar_t ) + 1 );
>>     strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str() ).c_str() );
>>     strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str() ).c_str() );
>>     int len1 = (int) schemaName.length() * sizeof( wchar_t );
>>     int len2 = (int) tableName.length() * sizeof( wchar_t );
>>     int length[2] = { len1, len2 };
>>     int formats[2] = { 1, 1 };
>>     PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes( query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
>>     ExecStatusType status = PQresultStatus( res );
>>     if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
>>     {
>>         result = 1;
>>         std::wstring err = m_pimpl->m_myconv.from_bytes( PQerrorMessage( m_db ) );
>>         errorMsg.push_back( L"Error executing query: " + err );
>>         PQclear( res );
>>     }
>>     else
>>     {
>>         owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
>>     }
>>     return result;
>> }
>>
>> The charlength2 variable contains the value of 8 and I'm still getting
>> the same error.
>
>
> I was hoping someone more versed in C++ would jump in to answer your
> question. I haven't used C++ in at least a decade.
> You need to convert the wchar_t data that wstring stores into UTF-8.
> Personally, I would use iconv (common enough).

But that is very weird.
When I check what is stored in the values[1] array, I see the same byte
sequence as the one stored in the database's information_schema.tables.
Maybe I should just upgrade libpq and try the latest release?

>
> I assume the PostgresDatabase class is your own (?) I would add a helper
> function to do the conversion. Here is a very rough template for you to
> adapt if you think it helps you.

Yes, PostgresDatabase is my class.
I will look at that later today, but it would definitely be interesting
to get someone with current C++ experience (especially C++11), because I
believe that I am already doing a conversion to UTF-8.

The m_myconv variable is declared as:

std::wstring_convert<std::codecvt_utf8<wchar_t>> m_myconv;

and so I think it is already converting to UTF-8.

Thank you.

>
>
> #include <err.h>
> #include <iconv.h>
> #include <locale.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
>
> #include <string>
>
>
> class PGDB {
> public:
>     // your stuff ...
>     iconv_t ic;
>
>     PGDB(void) {
>         setlocale(LC_CTYPE, "");
>         ic = iconv_open("UTF-8", "wchar_t");
>         if ((iconv_t)-1 == ic)
>             errx(1, "iconv_open");
>     }
>
>     ~PGDB() {
>         iconv_close(ic);
>     }
>
>     // caller should free()
>     char *wchar2utf8(std::wstring const &ws) {
>         char   *in, *buf, *out;
>         size_t  bufsz, inbytes, outbytes;
>
>         in = (char *)ws.data();
>         inbytes = ws.length() * sizeof(wchar_t);
>         outbytes = inbytes;
>         bufsz = inbytes + 1; // XXX check for overflow
>
>         buf = (char *)calloc(bufsz, 1);
>         if (NULL == buf)
>             err(1, NULL); // or throw something
>
>         out = buf;
>         if ((size_t)-1 == iconv(ic, &in, &inbytes, &out, &outbytes))
>             errx(1, "iconv"); // or throw ...
>
>         // TODO ensure inbytes is 0 (meaning all input consumed)
>         return buf;
>     }
> };
>
> // demo using above PGDB class/code
> int main(int argc, char *argv[])
> {
>     char   *str;
>     size_t  i, n;
>     std::wstring tab;
>     PGDB    pg;
>
>     tab = L"ºabcß";
>     str = pg.wchar2utf8(tab);
>
>     n = strlen(str);
>     for (i = 0; i < n; ++i) {
>         printf("%02hhx ", str[i]);
>     }
>     printf("\n");
>     printf("->%s<-\n", str);

Re: initialize and use variable in query

2018-12-29 Thread Glenn Schultz
Thanks!

On Sat, Dec 29, 2018 at 10:06 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Saturday, December 29, 2018, Ray O'Donnell  wrote:
>>
>> A couple of things off the top of my head:
>
>
> Sorry but, no.
>
>>
>> (i) I think you need "language plpgsql" (or whatever) after the DO block.
>
>
> As the docs state plpgsql is the default for a DO block lacking a language
> specifier.
>
>
>> (ii) That assignment in the DO should probably be:
>>
>>   select max(fctrdt) into startdate from fnmloan;
>>
>
> The original form is perfectly valid plpgsql;
>
> The DO block worked just fine.  It’s just that everything it did was
> discarded at the end of it because nothing that permanently affected the
> parent SQL session happened.
>
> David J.
>
>


Recursive CTE

2018-12-29 Thread Glenn Schultz
All,
Following my earlier post on variable instantiation, I rethought how I was
working with dates and realized I can fix the date and use a static
interval.  I came up with this recursive CTE, which is the end goal.
However, the problem is that the convexity query cannot be used as a
subquery.  So I think I need to use a join of convexity on the original
query - I am not sure; I am a little stuck at this point, but I feel I am
close.  Any help would be appreciated.

-Glenn

SET max_parallel_workers_per_gather = 8;
SET random_page_cost = 1;
SET enable_partitionwise_aggregate = on;
with recursive convexity (fctrdt, CPR3mo) as
(
  select
    cast((select max(fctrdt) - interval '1 month' from fnmloan) as date) as "fctrdt"
    ,round(
       smmtocpr(
         cast(
           sum(currrpb * case when fnmloan.fctrdt = fnmloan_data.fctrdt
                              then fnmloan_data.event else 0 end) / sum(currrpb)
         as numeric) * 100
       ), 4) * 100 as "CPR 3mo"
  from
    fnmloan
  join
    fnmloan_data
  on
    fnmloan_data.loanseqnum = fnmloan.loanseqnum
  where
    fnmloan.fctrdt between ((select max(fctrdt) - interval '1 month' from fnmloan) - interval '2 month')
                       and (select max(fctrdt) - interval '1 month' from fnmloan)
  and
    fnmloan.poolprefix = 'CL'

  union all

  select
    cast((select max(fctrdt) - interval '1 month' from convexity) as date) as "fctrdt"
    ,round(
       smmtocpr(
         cast(
           sum(currrpb * case when fnmloan.fctrdt = fnmloan_data.fctrdt
                              then fnmloan_data.event else 0 end) / sum(currrpb)
         as numeric) * 100
       ), 4) * 100 as "CPR 3mo"
  from
    fnmloan
  join
    fnmloan_data
  on
    fnmloan_data.loanseqnum = fnmloan.loanseqnum
  where
    fnmloan.fctrdt between ((select max(fctrdt) - interval '1 month' from convexity) - interval '2 month')
                       and (select max(fctrdt) - interval '1 month' from convexity)
  and
    fnmloan.poolprefix = 'CL'
  and
    convexity.fctrdt <= (select max(fctrdt) - interval '12 months' from fnmloan)
)
select * from convexity


getting pg_basebackup to use remote destination

2018-12-29 Thread Chuck Martin
I thought I knew how to do this, but I apparently don't. I have to set up a
new server as a standby for a PG 11.1 server. The main server has a lot
more resources than the standby. What I want to do is run pg_basebackup on
the main server with the output going to the data directory on the new
server. But when I give this command:

pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s


it instead writes to my root drive which doesn't have the space, so it
fails and deletes the partial backup.


While I think I could figure out how to back up to a local directory and
then rsync it to the new server, I'd like to avoid that due to the 750GB size.


Is there a way to tell pg_basebackup to use a remote destination for
output? Or do I have to run pg_basebackup on the standby server?


And while I'm asking, has anyone yet written a guide/tutorial for PG 11
replication? Everything I find online is very old.

Chuck Martin
Avondale Software


Re: getting pg_basebackup to use remote destination

2018-12-29 Thread Adrian Klaver

On 12/29/18 11:04 AM, Chuck Martin wrote:
I thought I knew how to do this, but I apparently don't. I have to set 
up a new server as a standby for a PG 11.1 server. The main server has a 
lot more resources than the standby. What I want to do is run 
pg_basebackup on the main server with the output going to the data 
directory on the new server. But when I give this command:


pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s


it instead writes to my root drive which doesn't have the space, so it 
fails and deletes the partial backup.



While I think I could figure out how to backup to a local directory then 
rsync it to the new server, I'd like to avoid that due to the 750GB size.



Is there a way to tell pg_basebackup to use a remote destination for 
output? Or do I have to run pg_basebackup on the standby server?


Not that I know of.

I would run it from the standby as a good deal of the heavy lifting is 
done on the main server anyway. In either case the standby will incur 
roughly the same load, namely receiving the data and writing it to disk.
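
Something roughly like this, run on the standby (a sketch only - substitute
the primary's address and a replication user for the placeholders; the
target directory is the one from your message, and -R writes the recovery
settings into it for you):

pg_basebackup -h <primary-address> -U <replication-user> \
    -D /mnt/dbraid/data -P -v -X stream -R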





And while I'm asking, has anyone yet written a guide/tutorial for PG 11 
replication? Everything I find online is very old.


Maybe?:
https://www.postgresql.org/docs/11/high-availability.html




Chuck Martin
Avondale Software



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Using placeholders when creating a materialized view

2018-12-29 Thread Adrian Klaver

On 12/29/18 12:07 AM, Mitar wrote:

Hi!

Is it possible to use placeholders and provide values to the query when
creating a materialized view? It looks like passing values directly to the
query is not possible, and I have to encode them into the query text myself?


The only way I could see to do that is using dynamic SQL in a function:

https://www.postgresql.org/docs/11/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

Or use something like pl/pythonu.
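
For example, roughly (an untested sketch - the table and column names here
are made up for illustration):

create or replace function build_big_orders_mv(p_cutoff numeric)
returns void
language plpgsql
as $$
begin
    -- format() with %L quotes the value as a literal inside the statement
    execute format(
        'create materialized view big_orders as
             select * from orders where amount >= %L',
        p_cutoff);
end;
$$;

select build_big_orders_mv(1000);

The value ends up baked into the stored view definition, of course; the
materialized view itself cannot carry a parameter.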





Mitar




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: In which session context is a trigger run?

2018-12-29 Thread Adrian Klaver

On 12/28/18 11:44 PM, Mitar wrote:

Hi!

On Fri, Dec 28, 2018 at 9:36 PM Adrian Klaver  wrote:

When you create the temporary function it is 'pinned' to a particular
session/pg_temp_nn. Running the trigger in another session 'pins' it to
that session and it is not able to see the posts_temp table in the
original session.


Yes. But why is the trigger run in that other session? Could there be a
way to get the trigger to run in the session where it was declared?


Because what matters is the session in which the temporary table was created.
If you want a more definitive answer, you will need to provide more
information, in the form of:


1) The definition for pg_temp.my_function().

2) A step by step accounting of where the objects are created, in what 
sessions and what order.





And yes, global temporary tables would be another approach to solve
this. But being able to set the session could be another, no? Or are
sessions linked to OS processes PostgreSQL is using and this is why
triggers run based on the session in which a change on the table was
made?


Take a look at:

https://www.postgresql.org/docs/11/mvcc.html

https://www.postgresql.org/docs/11/trigger-definition.html





Mitar




--
Adrian Klaver
adrian.kla...@aklaver.com