managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane
Hello all,

I have been using PostgreSQL for an enterprise-quality accounts
automation and inventory management software called GNUKhata
<https://gnukhata.in>

Our team is planning to add a backup and restore function to the software.

But we don't want to dump the entire database and then restore the whole thing.

What we are trying to do is copy only the data specific to one organization.

The challenge here is that I might copy all data (account heads, bills,
vouchers etc.) for one organization from an instance on one machine.

I then take the archive, in whatever format, to another machine and
attempt to restore it.

The risk here is that if, for example, the primary key value for orgcode
in the organization table is 5, it might conflict with data already
present in the database where I am attempting the restore.

Same holds true for bills, invoices etc.

A certain account head with accountcode 1 might already be present on
the second machine.

I am not expecting users to empty all data from the destination
machine before restoring a backup.

The reason is that an auditor may have many clients' data, and one can't
predict which primary key values are going to come from a backup.

Basically, you could say this is a copy-paste rather than a pure backup
and restore.

Can anyone suggest how to handle such conflicts?


-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata <https://gnukhata.in/>
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane

On 25/09/19 4:32 PM, Rob Sargent wrote:
>
>
> On Sep 25, 2019, at 1:15 AM, Krishnakant Mane <kkm...@riseup.net> wrote:
>
>> [...]
>>
>> Can anyone suggest how to handle such conflicts?
> I’m not sure I like your definition of ‘backup and restore’ but you
> might get away with your approach if your keys were UUIDs. But I’ll
> bet dollars to doughnuts you’re using serial keys. Those seem to
> appeal to accounting types.  
Hi Rob,

Yes, you are right, they are serial keys, because timestamps or anything
thereof are aggressively despised by the accountants and their like.

Now I am really stuck as to what could be done.

-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata <https://gnukhata.in/>
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane

On 25/09/19 7:50 PM, Adrian Klaver wrote:
> On 9/25/19 12:15 AM, Krishnakant Mane wrote:
>> [...]
>>
>> Can anyone suggest how to handle such conflicts?
>
> Hard to say. If the data is held in common tables (bills, vouchers,
> etc.) then the only thing I see happening is changing the PK values to
> an unused value. That could turn into a nightmare, though. Not only
> that, you lose the connection to the original data source. If the data
> can be broken out into separate tables then I could see placing them
> in their own schema.
>

Hi Adrian,

I too am thinking of doing some kind of upsert for this situation.

And I would have to set the pkey to an unassigned value when there is a
conflict.

I may also choose to replace the serials with timestamps, but I don't
know if the target customers would like it.

-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata <https://gnukhata.in/>
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Krishnakant Mane

On 26/09/19 12:03 AM, Adrian Klaver wrote:
> On 9/25/19 8:04 AM, Rob Sargent wrote:
>>
>>
>> On Sep 25, 2019, at 8:24 AM, Krishnakant Mane <kkm...@riseup.net> wrote:
>>
>>>
>>> On 25/09/19 7:50 PM, Adrian Klaver wrote:
>>>> [...]
>>>
>>> Hi Adrian,
>>>
>>> I too am thinking of doing some kind of upsert for this situation.
>
> So to be clear, the tables you are working with can have records from
> multiple organizations in a single table?
>
>>>
>>> And I would have to set the pkey to an unassigned value when there
>>> is conflict.
>
> I am seeing nextval() in your future:)
>
>>>
>>> I may also choose to replace the serials with timestamps, but I don't
>>> know if the target customers would like it.
>
> I would avoid that. In my opinion timestamps are too volatile to
> serve as a PK. If you are going to change, I would go with the previous
> suggestion of UUID:
> https://www.postgresql.org/docs/11/datatype-uuid.html
>
> Not sure your customers would like that either.
>
Hi Adrian,

I think I would make them like the UUID idea.

So what I am now thinking is to first revamp the database by removing all
the primary key constraints and then deleting all the values.

Then loop through the existing data and generate a UUID in that column for
every row.

I might also need to update all the references to this value as a
foreign key in the related tables.

But I guess some kind of ON UPDATE CASCADE might do the job.

I know this would slow down the system, but given that this will be a
one-time process for an individual user (and that too only if he has
existing data), I would take that trade-off.

What do you say?
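
As a rough sketch of that kind of one-time migration, assuming a parent table
organization(orgcode serial) referenced by a child table vouchers(orgcode),
where these names and the default constraint names below are placeholders
rather than GNUKhata's actual schema, one way is to add a uuid column next to
the serial key, carry it into the child tables, and only then swap the
constraints:

-- Hypothetical one-time serial-to-UUID migration.
-- gen_random_uuid() is built in from PostgreSQL 13; on older versions the
-- pgcrypto or uuid-ossp extension provides an equivalent.
BEGIN;

ALTER TABLE organization ADD COLUMN orguuid uuid NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE vouchers     ADD COLUMN orguuid uuid;

-- Carry the new key into the child table via the old serial key.
UPDATE vouchers v
SET    orguuid = o.orguuid
FROM   organization o
WHERE  o.orgcode = v.orgcode;

-- Swap the constraints over to the UUID columns (constraint names assumed
-- to be the auto-generated defaults).
ALTER TABLE vouchers     DROP CONSTRAINT vouchers_orgcode_fkey;
ALTER TABLE organization DROP CONSTRAINT organization_pkey,
                         ADD PRIMARY KEY (orguuid);
ALTER TABLE vouchers
    ALTER COLUMN orguuid SET NOT NULL,
    ADD FOREIGN KEY (orguuid) REFERENCES organization (orguuid) ON UPDATE CASCADE;

COMMIT;

Once the keys are UUIDs, rows restored from another machine can be inserted
as-is, since freshly generated UUIDs will not collide with existing ones.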

>> It would likely be easier to rethink your backup and restore plan.
>> Putting each restore into its own space would be one tack.
>
>
-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata <https://gnukhata.in/>
//(Opensource Accounting, Billing and Inventory Management Software)//


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-26 Thread Krishnakant Mane

On 26/09/19 6:53 PM, Rob Sargent wrote:
> [...]
confused about material view locks please explain

2024-07-05 Thread Krishnakant Mane

Hello all.

I am an old-timer PostgreSQL user of 17 years.

I have used it for my open-source work as well as enterprise cloud services.

In my current fintech solution, I need to do some performance optimisations.

I have decided to use materialised views (more precisely, IVM, i.e.
incremental view maintenance).

So on my PostgreSQL 16 instance, I have installed the pg_ivm extension.

I have one fundamental question before going ahead with its actual use
in production.


So, I have a voucher master and a voucher details table.

This system pertains to double-entry bookkeeping (debit and credit types).

The master table contains id as a serial primary key, plus the date,
voucher type and narration, along with the invoice number.


The details table contains the id as a foreign key, the account code
(again a foreign key, from the accounts table), and drcrtype as an
integer (3 for credit and 4 for debit).


So a view joining vouchermaster, voucherdetails and accounts is created.

Every time an invoice is generated, both the master and details tables
will get updated.


The voucher view is used for generating reports such as the balance
sheet, profit and loss, and cash flow.
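
A minimal sketch of that setup with pg_ivm, with simplified table and column
names standing in for the real schema (vouchermaster, voucherdetails and
accounts here are placeholders for whatever the actual definitions are):

-- Hypothetical sketch: define the joined voucher view as an incrementally
-- maintained materialized view (IMMV) using pg_ivm's create_immv().
CREATE EXTENSION IF NOT EXISTS pg_ivm;

SELECT create_immv(
    'voucherview',
    $$SELECT m.id, m.vdate, m.vouchertype, m.invoiceno,
             d.accountcode, a.accountname, d.drcrtype, d.amount
      FROM   vouchermaster  m
      JOIN   voucherdetails d ON d.id = m.id
      JOIN   accounts       a ON a.accountcode = d.accountcode$$
);

-- Reports then read the IMMV like an ordinary table, for example:
-- SELECT accountcode, SUM(amount) FROM voucherview
-- WHERE  drcrtype = 3 GROUP BY accountcode;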


Given this setup, I have a very specific question.

If client 1 has asked for his balance sheet and the view is being
queried, what will happen if client 2 happens to create an invoice
concurrently?


Will the invoice creation (and the subsequent voucher table and view
update) wait for client 1 to complete the select query, or will the
select query halt until the update happens?


If possible, I would like to avoid a lock on the view or at least allow 
selects on the view while it is being incrementally updated.


Is this possible?

Regards.





Re: confused about material view locks please explain

2024-07-05 Thread Krishnakant Mane



On 7/5/24 21:10, Peter J. Holzer wrote:

If I understand https://github.com/sraoss/pg_ivm correctly, the
materialized view will be updated within the same transaction. So it's
just the same as any other change in the database:

Neither client will wait for the other. The first client will see either
the old or the new state depending on whether the second client manages
to commit soon enough.


Thank you Peter.

So does that mean both the processes work concurrently?

I had understood that while an update is happening to an IMMV
(materialized view), the view is locked until the update is complete.


If so then how would both the clients have simultaneous access?


Regards.


 









can stored procedures with computational sql queries improve API performance?

2024-07-09 Thread Krishnakant Mane

Hello.

I have a straightforward question, but I am just trying to analyze the
specifics.


So I have a set of queries, depending on each other, that run in a sequence
to compute some results for generating a financial report.


It involves summing up amounts from tons of rows, and on certain
conditions it also categorizes the amounts into types (aka Debit
Balance, Credit Balance, etc.).


There are at least 6 queries in this sequence, and apart from 4 input
parameters, these queries never change.


So will I get any performance benefit from having them in a stored
procedure rather than sending the queries from my Python-based API?


Regards.





Re: can stored procedures with computational sql queries improve API performance?

2024-07-09 Thread Krishnakant Mane



On 7/10/24 06:44, Guyren Howe wrote:

On Jul 9, 2024, at 17:58, Krishnakant Mane  wrote:

[...]

Almost certainly.

Doing it all in a stored procedure, or likely even better a single query, will
remove all of the latency involved in going back and forth between your app and
the database.

Insofar as the queries you are running separately access similar data, a single 
query will be able to do that work once.

There are other potential benefits (a smaller number of queries reduces 
planning time, for example).



Basically there are if/else conditions, and it's not just the queries but
the conditional sequence in which they execute.


So one single query won't do the job.
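
As a rough sketch of how that conditional sequence could still live
server-side as a single call, here is a hypothetical PL/pgSQL function with
entirely made-up table, column and parameter names standing in for the real
report logic:

-- Hypothetical sketch: wrap the dependent, conditional report queries in one
-- server-side function so the Python API makes a single round trip.
CREATE OR REPLACE FUNCTION trial_balance(p_orgcode int,
                                         p_from    date,
                                         p_to      date,
                                         p_mode    text)
RETURNS TABLE (accountcode int, balance numeric, balancetype text)
LANGUAGE plpgsql STABLE AS
$$
BEGIN
    IF p_mode = 'detailed' THEN
        RETURN QUERY
        SELECT d.accountcode,
               SUM(CASE WHEN d.drcrtype = 4 THEN d.amount ELSE -d.amount END),
               'Debit Balance'::text
        FROM   voucherdetails d
        JOIN   vouchermaster  m ON m.id = d.id
        WHERE  m.orgcode = p_orgcode AND m.vdate BETWEEN p_from AND p_to
        GROUP  BY d.accountcode;
    ELSE
        RETURN QUERY
        SELECT d.accountcode, SUM(d.amount), 'Credit Balance'::text
        FROM   voucherdetails d
        GROUP  BY d.accountcode;
    END IF;
END;
$$;

-- Called from the Python side as a single statement, e.g. with psycopg2:
-- cur.execute("SELECT * FROM trial_balance(%s, %s, %s, %s)",
--             (5, date_from, date_to, 'detailed'))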

But Thank you for confirming my understanding.

I believe that the fact that stored procedures are compiled also makes 
them perform faster, is that correct?


Regards.





Re: create_immv issue on aws Ubuntu even after create extention

2025-02-28 Thread Krishnakant Mane

Ok here you go.

select create_immv('plantskillpayment',
  'select distinct kwapp_plant.id as plantid, kwapp_skill.id as skillid,
          kwapp_skill.skillname, kwapp_plant.plantname,
          kwapp_pltsklpaylvl.level, kwapp_pltsklpaylvl.pay
   from kwapp_skill, kwapp_plant, kwapp_pltsklpaylvl
   where kwapp_skill.id = kwapp_pltsklpaylvl.skillid_id
     and kwapp_plant.id = kwapp_pltsklpaylvl.plantid_id');



You see, the point is that this works perfectly on all local machines with
the exact same versions and the same data.


Regards.

On 2/28/25 22:24, Ron Johnson wrote:
On Fri, Feb 28, 2025 at 11:41 AM Krishnakant Mane 
 wrote:


Hello all.

I am not able to use create_immv in PostgreSQL 16.6 even after
installing it and doing CREATE EXTENSION.

I did a git clone of the repository and then ran make and sudo make
install to install it.

The issue is not happening on my local Ubuntu 24.04 machine but on
Ubuntu 24.04 on the AWS free tier.

All dependencies are in place; that's why the extension got
installed in the first place.

I get the error "function create_immv(unknown, unknown) does not
exist".

all the tables related to this view exist, data is present and
same setup works perfectly on my local machine, same OS and
postgresql version.

Can someone suggest what could be the solution?

Show your work!

What command did you run to get that error?


--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/2/25 08:44, Adrian Klaver wrote:


Then deal with the fact it is no longer installed into pg_catalog but
instead pgivm. With the usual schema qualifications or search_path
techniques.



Thank you, it worked. Now pgivm.create_immv works.

I should have gotten it before.


Did you not read my previous post?

Actually, I depend on a screen reader given my total blindness. Somehow
it has been playing tricks with Thunderbird these days.


Sometimes when there are multiple nested blocks, I may miss the text.

But anyway, thank you very much.

I had suspected something along those lines, but could not pinpoint the issue.

Regards.






--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/2/25 07:23, Adrian Klaver wrote:

On 3/1/25 17:39, Krishnakant Mane wrote:




But when I try creating the same immv on the server it says 
function create_immv (unknown, unknown ) does not exist.


Connect with psql then do:

1) \dx
This will show what extensions are installed and in what schema.


pg_ivm  | 1.10 | pg_catalog

plpgsql | 1.0  | pg_catalog


2) show search_path;
"$user", public

3) \df *.create_immv
pgivm | create_immv | bigint | text, text | func
Provide the outputs from the above three commands in your response.


See above and provide answers.


--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/2/25 07:03, Adrian Klaver wrote:

On 3/1/25 17:25, Krishnakant Mane wrote:


On 3/1/25 22:17, Adrian Klaver wrote:

On 2/28/25 21:29, Krishnakant Mane wrote:

All done, but no result.

I feel it is possible that the latest code is flawed or something.


Well, as a matter of fact, I never had to compile the extension on
my local Ubuntu 24.04 machine.


How did you install Postgres on the AWS EC2 instance?


Please answer above.
sudo apt-get install postgresql postgresql-server-dev-16


Did you install the postgresql-server-dev-16 package?


Yes I did.



Define not work.



create extension command works successfully.

But when I try creating the same immv on the server it says function 
create_immv (unknown, unknown ) does not exist.


Connect with psql then do:

1) \dx
This will show what extensions are installed and in what schema.

2) show search_path;

3) \df *.create_immv

Provide the outputs from the above three commands in your response.




Note that this works on my local machine.



Regards.

On 3/1/25 00:10, Ron Johnson wrote:



--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/1/25 22:17, Adrian Klaver wrote:

On 2/28/25 21:29, Krishnakant Mane wrote:

All done, but no result.

I feel it is possible that the latest code is flawed or something.

Well, as a matter of fact, I never had to compile the extension on my
local Ubuntu 24.04 machine.


How did you install Postgres on the AWS EC2 instance?

Did you install the postgresql-server-dev-16 package?


Yes, I did.


I just did a CREATE EXTENSION and it worked just fine.

But on my AWS server I had to compile it manually, and it still does
not work.


Define not work.



create extension command works successfully.

But when I try creating the same immv on the server it says function 
create_immv (unknown, unknown ) does not exist.


Note that this works on my local machine.



Regards.

On 3/1/25 00:10, Ron Johnson wrote:



--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane

Yes, perfect, thank you.

Besides, I will soon switch over to a MacBook Air M3, and VoiceOver does
not have such problems.


This screen reader, Orca, is great as well, and I am very sure that, it
being open source, the issue will be fixed at the earliest.


Regards.

On 3/2/25 09:53, Adrian Klaver wrote:

On 3/1/25 19:35, Krishnakant Mane wrote:


For my future reference does something like this response work better?


--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/2/25 07:54, David G. Johnston wrote:

On Saturday, March 1, 2025, Krishnakant Mane  wrote:


pg_ivm | 1.10 | pg_catalog

plpgsql | 1.0  | pg_catalog


2) show search_path;
"$user", public

3) \df *.create_immv
pgivm | create_immv | bigint | text, text | func
Provide the outputs from the above three commands in your response.


See this issue for why this is different now.

https://github.com/sraoss/pg_ivm/pull/116

Then deal with the fact it is no longer installed into pg_catalog but
instead pgivm. With the usual schema qualifications or search_path techniques.

Thank you, David.


Can you tell me how exactly the syntax should be?

Is my function definition wrong?
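
For what it's worth, with pg_ivm now installed into the pgivm schema, either
of the following should work; the view and query are the ones quoted earlier
in the thread, reformatted here for readability:

-- Option 1: schema-qualify the call explicitly.
select pgivm.create_immv('plantskillpayment',
  'select distinct kwapp_plant.id as plantid, kwapp_skill.id as skillid,
          kwapp_skill.skillname, kwapp_plant.plantname,
          kwapp_pltsklpaylvl.level, kwapp_pltsklpaylvl.pay
   from kwapp_skill, kwapp_plant, kwapp_pltsklpaylvl
   where kwapp_skill.id = kwapp_pltsklpaylvl.skillid_id
     and kwapp_plant.id = kwapp_pltsklpaylvl.plantid_id');

-- Option 2: put the pgivm schema on the search_path instead, either for the
-- session or persistently ("mydb" below is a placeholder database name).
SET search_path = "$user", public, pgivm;
-- ALTER DATABASE mydb SET search_path = "$user", public, pgivm;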



David J.


--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane


On 3/2/25 07:54, David G. Johnston wrote:

[...]



Thank you, it worked. Now pgivm.create_immv works.

I should have gotten it before.


David J.


--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


create_immv issue on aws Ubuntu even after create extention

2025-02-28 Thread Krishnakant Mane

Hello all.

I am not able to use create_immv in PostgreSQL 16.6 even after
installing it and doing CREATE EXTENSION.


I did a git clone of the repository and then ran make and sudo make install
to install it.


The issue is not happening on my local Ubuntu 24.04 machine but on
Ubuntu 24.04 on the AWS free tier.


All dependencies are in place; that's why the extension got installed in
the first place.


I get the error "function create_immv(unknown, unknown) does not exist".

All the tables related to this view exist, the data is present, and the
same setup works perfectly on my local machine, with the same OS and
PostgreSQL version.


Can someone suggest what could be the solution?

Regards.



*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-02-28 Thread Krishnakant Mane


On 2/28/25 22:53, Ron Johnson wrote:


Differing search_path values can lead to problems like this.


So what can I do to resolve this?
[...]

--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-02-28 Thread Krishnakant Mane

Thank you.

Just a quick question.

Should it be in pg_catalog?

If yes, then it already is.

On 3/1/25 01:39, Adrian Klaver wrote:

On 2/28/25 09:26, Krishnakant Mane wrote:


On 2/28/25 22:53, Ron Johnson wrote:


Differing search_path values can lead to problems like this.


So what can I do to resolve this?


In psql do:

\dx

That will show you what extensions are installed and in what schema.

Then read this:

https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH

It explains search_path and how to modify it.



--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/


Re: create_immv issue on aws Ubuntu even after create extention

2025-03-01 Thread Krishnakant Mane

All done, but no result.

I feel it is possible that the latest code is flawed or something.

Well, as a matter of fact, I never had to compile the extension on my
local Ubuntu 24.04 machine.


I just did a CREATE EXTENSION and it worked just fine.

But on my AWS server I had to compile it manually, and it still does not
work.


Regards.

On 3/1/25 00:10, Ron Johnson wrote:
When you know that the SELECT statement runs on the local machines, 
then on each server, run these SQL statements:


SHOW search_path;
SELECT * FROM pg_extension;

[...]

--
*Krishnakant Mane*
Software Architecture Design & Implementation Specialist


Mobile: 
+91 8424039903

https://www.linkedin.com/in/krishnakant-r-mane/