request to support "conflict on(col1 or col2) do update xxx" feature

2021-12-10 Thread sai
I think this feature is a very common requirement.

For example, I created a table in which the username and email columns are
each unique:

CREATE TABLE test (
    username VARCHAR(255) NOT NULL UNIQUE,
    email    VARCHAR(255) NOT NULL UNIQUE,
    status   VARCHAR(127)
);
I want to do an "update" when either of these two columns triggers a conflict.

But Postgres doesn't support "conflict (col1 or col2)"; it only supports
"conflict (col1)", like this:

insert into test (username, email, status) values ('u1', 'e1', 's1')
on conflict (username) do update set status = CURRENT_TIMESTAMP;

Many people suggest creating a unique index on the combination of the two
columns, like "unique(username, email)", but that is not a correct answer
for this case!

Assume I insert two records:
1. u1, e1, s1
2. u2, e2, s2
Now when I insert (u1, e3, s3), the combination (u1, e3) is unique, but if
you use "on conflict (username, email) do update ...", you will still get an
exception! It cannot handle a conflict on just one of the columns, as the
sketch below illustrates.
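
A minimal sketch of that scenario (assuming, for the sake of the example,
that a composite constraint UNIQUE (username, email) has also been added so
that the ON CONFLICT target is accepted):

INSERT INTO test (username, email, status) VALUES ('u1', 'e1', 's1');
INSERT INTO test (username, email, status) VALUES ('u2', 'e2', 's2');

-- (u1, e3) is unique as a pair, but the single-column constraint on
-- username still raises a unique violation, e.g.:
--   ERROR: duplicate key value violates unique constraint "test_username_key"
INSERT INTO test (username, email, status) VALUES ('u1', 'e3', 's3')
ON CONFLICT (username, email) DO UPDATE SET status = EXCLUDED.status;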

So I think there should be a simple syntax to support this. (I don't want
to write a stored procedure or use any complex/hacky solution, thanks.)

Can the postgres team implement this feature?

-- 
Best Regards,
Jin


Reg: User creation script/List of privileges

2022-03-30 Thread Sai Ch
Hi Experts,

I am looking for a query, or some other way, to list all the privileges a
user/role has.

I need this so I can recreate a user/role from one database in another with
exactly the same privileges.

Kindly share a query or a way to proceed; a partial sketch follows below.
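
A partial sketch only (the role name 'app_user' is a placeholder, and this
covers table-level grants only, not databases, schemas, sequences, functions,
or default privileges):

-- Table-level privileges granted to a role, from the information_schema.
SELECT grantee, table_schema, table_name, privilege_type, is_grantable
FROM information_schema.role_table_grants
WHERE grantee = 'app_user'
ORDER BY table_schema, table_name, privilege_type;

-- Role attributes and memberships live in pg_roles and pg_auth_members;
-- psql's \du and \dp show similar information interactively.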

Thanks & Regards,
Sai.


org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Sai Teja
Hi team,

I am trying to migrate data from DB2 to PostgreSQL, and one of the tables
contains XML data.
For one of the files (13 MB) I'm getting this error:

ERROR: invalid XML content
Detail: line 418061: internal error: Huge input lookup
nested exception is org.postgresql.util.PSQLException: ERROR: invalid XML
content

I even tried with a larger XML file (30 MB), and in that case I was able to
migrate the data successfully.

Only that one file triggers the issue. Later, I found out that there are
some special characters in that file; I'm not sure whether that could be
the cause.

I'm using java.sql.PreparedStatement with
PreparedStatement.setSQLXML(iterator, xmlobject) to migrate the data from
DB2 to PostgreSQL.
When I try against my local PostgreSQL (Windows, located in the APAC
region) the migration succeeds, but when I migrate into Azure PostgreSQL
(Linux OS, located in Germany) I get the error.

I request you to please look into this and help me resolve this error.

Please let me know if you have any questions.

Looking forward to a resolution.

Thanks & regards,
Sai Teja


Re: org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Sai Teja
Thank you for the reply.

The PG version is 14.7 (Azure DB).
Locale:
LC_COLLATE & LC_CTYPE: en_US.utf_8

Local PostgreSQL version: 15.1-1-windows-x64-binaries
LC_COLLATE & LC_CTYPE: English_United States.1252

Please let me know if any other information is needed.

Thanks & Best Regards,
Sai Teja

On Fri, 4 Aug, 2023, 8:03 am Ron,  wrote:

> On 8/3/23 21:22, Sai Teja wrote:
>
> Hi team,
>
> I am trying to migrate the data from db2 to postgreSQL in which one of the
> table is having XML data.
> For one of the file (13MB) I'm facing an error with
>
> ERROR: invalid XML content
> Detail: line 418061: internal error: Huge input lookup
> nested exception is org.postgresql.util.PSQLException: ERROR: invalid XML
> content
>
> I even tried with more size of XML data (30MB) but for this case I can
> able to migrate the date successfully.
>
> But only for that file I'm facing the issue.
> Later, I got to know that there are some special characters in that file
> not sure whether this can be the issue or not.
>
> I'm using java.sql.preparedstatement
> Preparedstatement.setSQLXML(iterator, xmlobject) to migrate the data from
> db2 to postgreSQL
> When I tried in my local postgreSQL (which is *windows* and located in
> *APAC* region. I can able to migrate successfully)
> But when I'm trying to migrate into *azure* postgreSQL (Linux OS, located
> in *Germany*) facing an issue.
>
>
> That sounds like it has to do with locale differences between the two
> systems.
>
> *Exactly* what Pg version (including patch level) and locale are in each
> of the two systems?
>
> Even if it's not that, *something* is configured differently between the
> two systems.
>
> --
> Born in Arizona, moved to Babylonia.
>


Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Sai Teja
Hi Team,

We are trying to fetch one row of data (a bytea column) from a table, but
we are getting an error stating "Invalid memory alloc request size
1236252631".

The row we are trying to fetch has one bytea column whose value is more
than 1 GB.

Could anyone please help me to resolve this issue.

Thanks & Regards,
Sai Teja


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Sai Teja
Could anyone please suggest ideas to resolve this issue?

I have increased the parameters below, but I'm still getting the same error:

work_mem, shared_buffers

Out of 70k rows in the table, only a few rows of large size (~700 MB) hit
the issue; I am unable to fetch the data for those particular rows.

It would be appreciated if anyone could share their insights.

Thanks,
Sai



On Mon, 14 Aug, 2023, 5:21 pm Sai Teja, 
wrote:

> Hi Andreas,
>
> Thank you for the reply!
>
> Currently it is Hex by default. If I change to escape is there any
> possibility to fetch the data?
>
> Thanks,
> Sai Teja
>
> On Mon, 14 Aug, 2023, 5:12 pm Andreas Kretschmer, 
> wrote:
>
>>
>>
>> On 14 August 2023 11:59:26 CEST, Sai Teja <
>> saitejasaichintalap...@gmail.com> wrote:
>> >Hi Team,
>> >
>> >We are trying to fetch the one row of data (bytea data) for one table in
>> >But getting the error stating that "Invalid Memory alloc request size
>> >1236252631"
>> >
>> >The row which we were trying to fetch have one bytea column which is more
>> >than 1GB
>> >
>> >Could anyone please help me to resolve this issue.
>> >
>>
>> You can try to change the bytea_output. Possible values are hex and
>> escape. With some luck it will work, but maybe your application will have
>> problems with that.
>>
>> Andreas
>>
>


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Sai Teja
I am just running a select query to fetch the result:
Query: select id, content_data, name from table_name
Here content_data is the bytea column, holding more than 700 MB. Even if I
run this query in a DB client such as pgAdmin or DBeaver, I'm facing the
same error. The query is also called from Java, but I don't think Java is
the issue, since I can insert the data successfully. The only problem is
with fetching the data, and only for the specific rows that hold a huge
volume of data. (A chunked-fetch sketch is shown below.)
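
A minimal sketch of such a chunked fetch (the id value 123 is a placeholder;
substring() works on bytea, and each slice stays below the ~1 GB allocation
limit):

-- Check how large the value actually is.
SELECT octet_length(content_data) FROM table_name WHERE id = 123;

-- Fetch the value in slices and reassemble the pieces on the client side.
SELECT substring(content_data FROM 1         FOR 104857600) FROM table_name WHERE id = 123;  -- first 100 MB
SELECT substring(content_data FROM 104857601 FOR 104857600) FROM table_name WHERE id = 123;  -- next 100 MB
-- ...and so on, increasing the offset until octet_length is covered.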

Thanks,
Sai

On Mon, 14 Aug, 2023, 10:55 pm Rob Sargent,  wrote:

> On 8/14/23 09:29, Sai Teja wrote:
> > Could anyone please suggest any ideas to resolve this issue.
> >
> > I have increased the below parameters but still I'm getting same error.
> >
> > work_mem, shared_buffers
> >
> > Out of 70k rows in the table only for the few rows which is of large
> > size (700MB) getting the issue. Am unable to fetch the data for that
> > particular row.
> >
> > Would be appreciated if anyone share the insights.
> >
> > Thanks,
> > Sai
> >
> >
> Are you using java?  There's an upper limit on array size, hence also on
> String length.  You'll likely need to process the output in chunks.
>
>
>


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Sai Teja
By default the bytea_output is in hex format.

On Tue, 15 Aug, 2023, 12:44 am Ron,  wrote:

> Did you *try* changing bytea_output to hex?
>
> On 8/14/23 12:31, Sai Teja wrote:
>
> I am just running select query to fetch the result
> Query : select id, content_data, name from table_name
> So here content_data is bytea content which is having more than 700 MB.
> Even if I run this query in any DB client such as Pgadmin, dbeaver etc..
> I'm facing the same error. But this query is being called in java as well
> So, I don't think java could be the issue as I can able to successfully
> insert the data. But, only the problem is with fetching the data that too
> only specific rows which are having huge volume of data.
>
> Thanks,
> Sai
>
> On Mon, 14 Aug, 2023, 10:55 pm Rob Sargent,  wrote:
>
>> On 8/14/23 09:29, Sai Teja wrote:
>> > Could anyone please suggest any ideas to resolve this issue.
>> >
>> > I have increased the below parameters but still I'm getting same error.
>> >
>> > work_mem, shared_buffers
>> >
>> > Out of 70k rows in the table only for the few rows which is of large
>> > size (700MB) getting the issue. Am unable to fetch the data for that
>> > particular row.
>> >
>> > Would be appreciated if anyone share the insights.
>> >
>> > Thanks,
>> > Sai
>> >
>> >
>> Are you using java?  There's an upper limit on array size, hence also on
>> String length.  You'll likely need to process the output in chunks.
>>
>>
>>
> --
> Born in Arizona, moved to Babylonia.
>


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Sai Teja
Hi team,

I found out that the field size for a bytea column is limited to 1 GB in
PostgreSQL. How can we increase this? We need to store a high volume of
data in each row of a table.

https://www.postgresql.org/docs/current/limits.html

Any suggestions would be appreciated.

Thanks & Regards,
Sai

On Tue, 15 Aug, 2023, 8:10 am Sai Teja, 
wrote:

> By default the bytea_output is in hex format.
>
> On Tue, 15 Aug, 2023, 12:44 am Ron,  wrote:
>
>> Did you *try* changing bytea_output to hex?
>>
>> On 8/14/23 12:31, Sai Teja wrote:
>>
>> I am just running select query to fetch the result
>> Query : select id, content_data, name from table_name
>> So here content_data is bytea content which is having more than 700 MB.
>> Even if I run this query in any DB client such as Pgadmin, dbeaver etc..
>> I'm facing the same error. But this query is being called in java as well
>> So, I don't think java could be the issue as I can able to successfully
>> insert the data. But, only the problem is with fetching the data that too
>> only specific rows which are having huge volume of data.
>>
>> Thanks,
>> Sai
>>
>> On Mon, 14 Aug, 2023, 10:55 pm Rob Sargent, 
>> wrote:
>>
>>> On 8/14/23 09:29, Sai Teja wrote:
>>> > Could anyone please suggest any ideas to resolve this issue.
>>> >
>>> > I have increased the below parameters but still I'm getting same error.
>>> >
>>> > work_mem, shared_buffers
>>> >
>>> > Out of 70k rows in the table only for the few rows which is of large
>>> > size (700MB) getting the issue. Am unable to fetch the data for that
>>> > particular row.
>>> >
>>> > Would be appreciated if anyone share the insights.
>>> >
>>> > Thanks,
>>> > Sai
>>> >
>>> >
>>> Are you using java?  There's an upper limit on array size, hence also on
>>> String length.  You'll likely need to process the output in chunks.
>>>
>>>
>>>
>> --
>> Born in Arizona, moved to Babylonia.
>>
>


Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-17 Thread Sai Teja
Hi Team,

I have also tried PostgreSQL large objects, following the link below, to
store and retrieve large files (since bytea is not working):
https://www.postgresql.org/docs/current/largeobjects.html

But even now I am unable to fetch the data in one call from a large object:

select lo_get(oid);

Here I'm getting the same error message.

If I instead use select data from pg_largeobject where loid = 49374, then I
can fetch the data, but only page by page (the data is split into rows of
2 KB each).

So how can I fetch the data in a single step rather than page by page,
without getting this error? (A chunked lo_get sketch is shown below.)
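
A sketch of reading the large object in server-side chunks instead (loid
49374 taken from the query above; the chunk size is arbitrary):

-- lo_get(loid, offset, length) returns a slice of the large object, so the
-- data can be pulled in pieces that each stay below the 1 GB limit.
SELECT lo_get(49374, 0,        50 * 1024 * 1024);   -- first 50 MB
SELECT lo_get(49374, 52428800, 50 * 1024 * 1024);   -- next 50 MB
-- repeat with an increasing offset and reassemble the pieces on the client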

I'm also wondering how applications store huge amounts of data (GBs per
value). I know that PostgreSQL sets a 1 GB limit for each field; if so, how
do people deal with this kind of situation? I would like to know how to
handle such real-world scenarios. (One common chunking pattern is sketched
below.)

We need to store large content (a huge volume of data) and retrieve it.
Currently that is not possible due to the field size limit in PostgreSQL.
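
One common pattern, sketched here under assumed names (file_chunk, file_id,
etc. are placeholders), is to split each large payload across multiple rows,
since the 1 GB limit applies per field value rather than per file:

CREATE TABLE file_chunk (
    file_id  bigint  NOT NULL,
    chunk_no integer NOT NULL,
    chunk    bytea   NOT NULL,   -- e.g. a few MB per chunk
    PRIMARY KEY (file_id, chunk_no)
);

-- Reassemble on the client, or server-side for moderate total sizes:
-- SELECT string_agg(chunk, ''::bytea ORDER BY chunk_no)
-- FROM file_chunk WHERE file_id = 1;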

I would appreciate any insights and suggestions that could help resolve
this issue.


Thanks & Regards,
Sai Teja

On Tue, 15 Aug, 2023, 8:53 am Tom Lane,  wrote:

> Sai Teja  writes:
> > I got to know the field size limit for the bytea datatype column is
> limited
> > to 1 GB in postgreSQL. Then how can we increase this?
>
> You can't.  That limit is wired-in in many ways.  Think about how to
> split your data across multiple table rows.
>
> regards, tom lane
>


Taking too much time to fetch the data from Azure postgreSQL Large objects (pg_largeobjects)

2023-09-01 Thread Sai Teja
Hi Team,

We have binary data stored in pg_largeobject (the large objects table).

The data is 675 MB. We are using the large object client interface API
provided by PostgreSQL to retrieve the data (lo_read, lo_open, etc.).

When I fetch the data from my local instance it takes 30-35 seconds to
retrieve the content from the DB, whereas Azure PostgreSQL keeps executing
for more than an hour.

I'm using Java code to retrieve the data; the code and the network speed
are the same in both cases, but when I point to Azure PostgreSQL the
latency is very high.

If anyone knows what the root cause could be, or how I can retrieve the
file as quickly as I can locally, it would help me a lot and be appreciated.

Azure PostgreSQL version: 14.8 - hosted in Germany West
Local PostgreSQL version: 15.1 - hosted in India

Even when the script ran in Germany (I thought network latency/region was
the issue), there was still high latency.

Please suggest any ideas or solutions to resolve this issue.

Thanks,
Sai


postgreSQL UPPER Method is converting the character "µ" into "M"

2023-09-06 Thread Sai Teja
Hi team,

I am using the UPPER function to convert text from lower case to upper
case, but consider the example below:

Command: select UPPER('testµ')
Input: testµ
Output: TESTM
Expected output: TESTµ

Here the character µ is being converted into M, which should not be the
case. PostgreSQL is hosted on a Linux machine whose lc_ctype is
en_US_utf.8.

Please suggest any ideas to resolve this issue.

It would be very helpful and appreciated.

Thanks,
Sai Teja


Re: postgreSQL UPPER Method is converting the character "µ" into "M"

2023-09-06 Thread Sai Teja
Hi Tom Lane, Thank you for your reply!

We are actually migrating data from DB2 to PostgreSQL. In DB2 the upper
method converts µ to µ, not to M, so we encountered this problem while
validating the data.

We want the data in both DBs (DB2 and PostgreSQL) to be the same.

My local instance runs on Windows with locale English_united_states.1252,
and there the conversion works as expected (testµ into TESTµ).

Even if I wanted to change the locale, I cannot do so now, since the DB is
already created. BTW, this DB is hosted in Azure.

Please suggest any alternatives to resolve this issue.

Thanks,
Sai Teja



On Wed, 6 Sep, 2023, 7:23 pm Tom Lane,  wrote:

> Sai Teja  writes:
> > I am using UPPER document name for converting the text from lower case
> into
> > upper case.
> > But here for the below example
> > Command:- select UPPER(testµ)
> > Input :- testµ
> > Output:- TESTM
> > Expected output :-  TESTµ
>
> According to https://en.wikipedia.org/wiki/Greek_alphabet
> the upper-case form of µ (mu) is in fact M, so I'm not sure
> what problem you have with this.  But in any case, you are
> complaining in the wrong place.  This conversion is determined
> by the rules of the selected locale.  If you don't like it,
> choose another locale, or make your own, or try to persuade
> the glibc maintainers that they are wrong.
>
> regards, tom lane
>


Re: postgreSQL UPPER Method is converting the character "µ" into "M"

2023-09-06 Thread Sai Teja
Thank you all for your response.
Got one more query,

When I try the command below:

Select UPPER('Mass')

I get MASS as output.

But when I create a column with a GENERATED ALWAYS constraint, for example:

Alter table xyz add column xyz varchar(800) generated always as
(UPPER(content)) stored

(here content is the original string coming from the table), the output
comes out as 'µass' when I select the data from the table.

Even when I try collation C or ucs_basic I get the same result when I
select the data from the table, but when I call UPPER explicitly, like
select UPPER('Mass'), I get the expected output, MASS.

Please suggest ideas to resolve this issue.

Thanks,
Sai Teja


On Wed, 6 Sep, 2023, 8:59 pm Francisco Olarte, 
wrote:

> On Wed, 6 Sept 2023 at 16:40, Sai Teja 
> wrote:
> > Actually we are migrating the data from DB2 to postgreSQL. So in DB2 the
> upper method is converting µ as µ only but not as M.
> > So, while validating the data we have encountered this problem.
> >
> > We want to have the data in both DB's should be similar (db2 and
> postgreSQL)
> Data seems to be the same, function behaviour is what seems different.
>
> > In my local it is windows OS and locale is English_united_states.1252
> and in local it is converting as expected ( testµ into TESTµ)
>
> Maybe because unicode has GREEK CAPITAL LETTER MU ( 924, which looks
> like capital M ), GREEK SMALL LETTER MU ( 956 ) and MICRO SIGN ( 181,
> which looks like small mu ) while windows-1252 only has 0xB6 as micro.
> OTOH Windows-1253 ( greek ansi ) has all three.
>
> If your small mu are really micro-sign ( which is suspected if youused
> 1252 ) maybe changing them to that helps ( but I do not have the
> resources to test that on hand ).
>
> Francisco Olarte.
>


Re: postgreSQL UPPER Method is converting the character "µ" into "M"

2023-09-06 Thread Sai Teja
I added one column as a GENERATED ALWAYS column with UPPER, like below:

Alter table table_name t add column data varchar(8000) generated always as
(UPPER(t.content)) stored

Here the data column is the generated column.

This column holds many sentences per row, some of whose characters are
Greek or accented, like µ, ë, ä, Ä, etc.
So, for the example testµ, I see the following:

1. Select UPPER('testµ')
Output: TESTM

But, as per this mail conversation, I have used COLLATE ucs_basic, like:

2. Select UPPER('testµ' collate "ucs_basic")
Output: TESTµ (which is correct)


3. SELECT UPPER('Mass' collate "ucs_basic")
Output: MASS (which is correct)

4. Select data from table (here data is the column created with GENERATED
ALWAYS, as mentioned above)

For some of the rows which contain Greek characters I'm getting the wrong
output.

For example, for the word 'MASS' I'm getting 'µASS' when I select the data
from the table.

Summary: I'm getting the wrong output when I use UPPER with a collation in
the generated column of the table, but when I call UPPER with the collation
explicitly, as mentioned above, I get the expected results.

I even tried to add the collation in the column expression itself, but it
didn't work:

Alter table table_name t add column data varchar(8000) generated always as
(UPPER(t.content, collation "ucs_basic")) stored
or
Alter table table_name t add column data varchar(8000) generated always as
(UPPER(t.content) collation "ucs_basic") stored

Neither worked; I still got the wrong output when I selected the data from
the table. (A sketch of the collation placement PostgreSQL's grammar accepts
follows below.)
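
For reference, a sketch of the collation placement that PostgreSQL's grammar
accepts in a generated column (table and column names as above; whether the
stored result then matches expectations depends on the actual characters in
content, as discussed in the reply below):

-- The COLLATE clause attaches to the expression (or a sub-expression),
-- not as a second argument to upper().
ALTER TABLE table_name
    ADD COLUMN data varchar(8000)
    GENERATED ALWAYS AS (upper(content COLLATE "ucs_basic")) STORED;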

On Wed, 6 Sep, 2023, 10:18 pm Erik Wienhold,  wrote:

> On 06/09/2023 18:37 CEST Erik Wienhold  wrote:
>
> > Homoglyphs are one explanation if you get 'µass' from the generated
> column as
> > described.
>
> postgres=# SELECT upper('𝝻𝚊𝚜𝚜');
>  upper
> ---
>  𝝻𝚊𝚜𝚜
> (1 row)
>
> The codepoints I picked are:
>
> * MATHEMATICAL SANS-SERIF BOLD SMALL MU
> * MATHEMATICAL MONOSPACE SMALL A
> * MATHEMATICAL MONOSPACE SMALL S
>
> --
> Erik
>


Huge input lookup exception when trying to create the index for XML data type column in postgreSQL

2023-09-07 Thread Sai Teja
Hi All,

We have one table that stores XML data; it has 30k records and holds a huge
amount of data.

We are trying to create an index on this column, but we're getting a "Huge
input lookup" error during index creation.

Please see the command used to create the index:

CREATE INDEX xml_index ON xml_table USING BTREE (CAST(XPATH
('directory/access/mode/@Identifier', content) AS text[]))

We're using an XPath expression in the index because PostgreSQL does not
directly support comparison operators for the xml type. But while creating
the index as shown above, we're hitting the Huge input lookup issue.

I can create the index when the table has no records, but now the table
holds a huge amount of data and index creation fails with Huge input
lookup.

We faced the same error earlier when trying to retrieve particular rows
from the table; after we changed the XML option from content to document it
worked, and we can retrieve the files now.

But now, while creating the index, we tried changing the XML option from
content to document again, and this time it did not help. (The setting in
question is shown in the sketch below.)
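
For reference, a sketch of how that setting is changed per session (both
spellings are equivalent):

SET XML OPTION DOCUMENT;      -- or CONTENT
-- equivalently, via the configuration parameter:
SET xmloption TO document;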

PS: we are using PostgreSQL version 14.8, hosted in Azure.

Please suggest any ideas to resolve this issue. That would help us a lot
and be much appreciated.


Re: Huge input lookup exception when trying to create the index for XML data type column in postgreSQL

2023-09-07 Thread Sai Teja
Thank you so much for all your responses.

I just tried with Hash, GIN, etc., but they didn't work either. I think it
is because of the XPath expression used in the index creation command.

Is there any alternative way to avoid this XPath expression? I need to
parse the XML, as there is no other option, so I need other ways to create
the index.

Maybe if there are parameters to change, like xmloption, that would help us
resolve the issue.

Thanks,
Sai



On Fri, 8 Sep, 2023, 1:51 am Tom Lane,  wrote:

> Erik Wienhold  writes:
> > On 07/09/2023 21:09 CEST Tom Lane  wrote:
> >> There is no such string anywhere in the Postgres source code;
> >> furthermore, if someone tried to add such an error, it'd get rejected
> >> (I hope) as not conforming to our style guidelines.  I thought maybe
> >> it's coming from libxml or the xpath code, but I couldn't get a match
> >> for it anywhere in Debian Code Search either.  Is that the *exact*
> >> spelling of the message?
>
> > Looks like "Huge input lookup" as reported in [1] (also from Sai) and
> that
> > error is from libxml.
>
> Ah, thanks for the pointer.  It looks like for the DOCUMENT case,
> we could maybe relax this restriction by passing the XML_PARSE_HUGE
> option to xmlCtxtReadDoc().  However, there are things to worry about:
>
> * Some of the other libxml functions we use don't seem to have an
> options argument, so it's not clear how to remove the limit in all
> code paths.
>
> * One of the first hits I got while googling for XML_PARSE_HUGE was
> CVE-2022-40303 [1] (libxml2: integer overflows with XML_PARSE_HUGE).
> It seems highly likely that not everybody's libxml is patched for
> that yet, meaning we'd be opening a lot of systems to security issues.
>
> * XML_PARSE_HUGE apparently also removes restrictions on nesting
> depth of XML documents.  I wonder whether that creates a risk of
> stack-overflow crashes.
>
> On the whole, I'm not sure I want to mess with this.  libxml2 is
> rickety enough already without taking off its training wheels.
> And, as noted by David J., we'd very possibly only be moving
> the bottleneck somewhere else.  "Put many megabytes of data into
> one field" is an antipattern for successful SQL use, and probably
> always will be.
>
> regards, tom lane
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=2136266
>