Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


Hi, in PG-14, how do I delete the "dunsNumber": "NaN" entries:


{
  "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6",
  "details": [
    { "keyInformation": { "dunsNumber": "NaN", "organizationType": "LIMITED_COMPANY" } },
    { "keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" } }
  ],
  "nisse": 123
}


So that the result becomes:


{
  "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6",
  "details": [
    { "keyInformation": { "organizationType": "LIMITED_COMPANY" } },
    { "keyInformation": { "dunsNumber": "123", "organizationType": "LIMITED_COMPANY" } }
  ],
  "nisse": 123
}


Thanks.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 

nested xml/json to table

2023-03-17 Thread Wim Bertels
Hello,

in Python's pandas there is, for example, a json_normalize function;
I didn't find anything similar or better in PostgreSQL?

what would be the general idea: "easily" convert a hierarchical
structure like JSON or XML to a table; for example, creating columns by
appending the key names while going down the tree, using null for
empty values, and adding more columns as needed by the given structure.
(1-way operation)

a few conceptual gists:
jsonX =
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Sta..",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-m..",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}

select *
from json_to_table('jsonX');
-- generated columns with no data/only nulls could be removed..
-- arrays could be unnested in the process as well

glossary | glossary.title   | glossary.title.GlossDiv.title | ..
---------+------------------+-------------------------------+----
null     | example glossary | S                             |
..

the last column: glossary.title.GlossDiv.GlossList.GlossEntry.GlossSee
with value "markup"

---

what if there are different structures that need to be combined?
(they could be added in the same manner as before)

jsonY =
{
  "s1": [
    {
      "f1": "a",
      "f2": "b",
      "f3": { "f3.1": "c",
              "f3.2": "d" }
    },
    {
      "f1": "e",
      "f4": "g"
    }
  ]
}
   
select *
from json_to_table('jsonY');
-- generated columns with no data/only nulls could be removed..
-- separator sign is untrusted

s1   | s1.f1 | s1.f2 | s1.f3 | s1.f3.f3.1 | s1.f3.f3.2 | s1.f4
-----+-------+-------+-------+------------+------------+------
null | a     | b     | null  | c          | d          | null
null | e     | null  | null  | null       | null       | g
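For comparison, the kind of one-way, dotted-column flattening that pandas' json_normalize performs can be sketched in a few lines of plain Python (a toy sketch of the concept, not a PostgreSQL answer):

```python
# Toy sketch of json_normalize-style flattening: nested dicts become
# a single dict keyed by dotted paths; lists are left as-is here.
def flatten(obj, prefix=""):
    """Flatten nested dicts into {dotted.path: value}."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

row = flatten({"glossary": {"title": "example glossary",
                            "GlossDiv": {"title": "S"}}})
# row == {"glossary.title": "example glossary", "glossary.GlossDiv.title": "S"}
```

Feeding each such row into a table would then add columns as new paths appear, with null for paths missing from a given document.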


any ideas or suggestions (apart from PL/Python)?
Wim 


Re: nested xml/json to table

2023-03-17 Thread Thomas Kellerer
Wim Bertels wrote on 17.03.2023 at 11:05:
> what would be the general idea: "easily" convert a hierarchical
> structure like JSON or XML to a table; for example, creating columns by
> appending the key names while going down the tree, using null for
> empty values, and adding more columns as needed by the given structure.
> (1-way operation)
>
> a few conceptual gists:
> jsonX=
> {
> "glossary": {
> "title": "example glossary",
>   "GlossDiv": {
> "title": "S",
>   "GlossList": {
> "GlossEntry": {
>   "ID": "SGML",
>   "SortAs": "SGML",
>   "GlossTerm": "Sta..",
>   "Acronym": "SGML",
>   "Abbrev": "ISO 8879:1986",
>   "GlossDef": {
>   "para": "A meta-m..",
>   "GlossSeeAlso": 
>   ["GML", "XML"]
>   },
>   "GlossSee": "markup"
> }
> }
> }
> }
> }
>
> select *
> from json_to_table('jsonX');
> -- generated columns with no data/only nulls could be removed..
> -- arrays could be unnested in the process as well
>
> glossary | glossary.title   | glossary.title.GlossDiv.title | ..
> ---------+------------------+-------------------------------+----
> null     | example glossary | S                             |
> ..
>
> the last column: glossary.title.GlossDiv.GlossList.GlossEntry.GlossSee
> with value "markup"
>
> ---
>
> what if there are different structures that need to be combined?
> (they could be added in the same manner as before)
>
> jsonY=
> {
> "s1": [{
> "f1": "a",
> "f2": "b",
> "f3": { "f3.1": "c",
>   "f3.2": "d"}
>},
>{
> "f1": "e",
> "f4": "g"
>}
>]
> }
>
> select *
> from json_to_table('jsonY');
> -- generated columns with no data/only nulls could be removed..
> -- separator sign is untrusted
>
> s1   | s1.f1 | s1.f2 | s1.f3 | s1.f3.f3.1 | s1.f3.f3.2 | s1.f4
> -----+-------+-------+-------+------------+------------+------
> null | a     | b     | null  | c          | d          | null
> null | e     | null  | null  | null       | null       | g


You can't have a function that returns a different set of columns each time you 
call it
(without specifying the output columns - which you don't want).

I once wrote a function to flatten a JSON hierarchy into multiple rows.

Applied to your first example it would return the following:

path                                             | key          | value
-------------------------------------------------+--------------+------------------
/glossary                                        | title        | example glossary
/glossary/GlossDiv                               | title        | S
/glossary/GlossDiv/GlossList/GlossEntry          | ID           | SGML
/glossary/GlossDiv/GlossList/GlossEntry          | Abbrev       | ISO 8879:1986
/glossary/GlossDiv/GlossList/GlossEntry          | SortAs       | SGML
/glossary/GlossDiv/GlossList/GlossEntry          | Acronym      | SGML
/glossary/GlossDiv/GlossList/GlossEntry          | GlossSee     | markup
/glossary/GlossDiv/GlossList/GlossEntry          | GlossTerm    | Sta..
/glossary/GlossDiv/GlossList/GlossEntry/GlossDef | para         | A meta-m..
/glossary/GlossDiv/GlossList/GlossEntry/GlossDef | GlossSeeAlso | ["GML", "XML"]

And the following for the second example:

path   | key  | value
-------+------+------
/s1    | f1   | a
/s1    | f2   | b
/s1/f3 | f3.1 | c
/s1/f3 | f3.2 | d
/s1    | f1   | e
/s1    | f4   | g


Thomas




create or replace function flatten(p_input jsonb, p_path text)
  returns table(path text, key text, value text)
as
$$
begin
  if jsonb_typeof(p_input) = 'array' then
    -- recurse into each array element, keeping the current path
    return query
      select f.*
      from jsonb_array_elements(p_input) as a(element)
        cross join flatten(a.element, p_path) f;
  else
    return query
      -- scalar values (and arrays of scalars) at the current level
      select p_path, e.key, e.value #>> '{}'
      from jsonb_each(p_input) as e(key, value)
      where jsonb_typeof(e.value) not in ('object', 'array')
         or (jsonb_typeof(e.value) = 'array' and jsonb_typeof(e.value -> 0) <> 'object')

      union all

      -- recurse into nested objects, extending the path
      select f.*
      from jsonb_each(p_input) as t(key, value)
        cross join flatten(t.value, p_path||'/'||t.key) as f
      where jsonb_typeof(t.value) = 'object'

      union all

      -- recurse into arrays of objects, extending the path
      select f.*
      from jsonb_each(p_input) as t(key, value)
        cross join jsonb_array_elements(t.value) as a(element)
        cross join flatten(a.element, p_path||'/'||t.key) as f
      where jsonb_typeof(t.value) = 'array'
        and jsonb_typeof(t.value -> 0) = 'object';
  end if;
end;
$$
language plpgsql
immutable
parallel safe
;
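As a quick sanity check of the function above (calling it with an empty starting path, as the sample output suggests):

```sql
select * from flatten('{"glossary": {"title": "example glossary"}}'::jsonb, '');
--    path    |  key  |      value
-- -----------+-------+------------------
--  /glossary | title | example glossary
```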


Re: src/test/examples/testlibpq2.c where the HAVE_SYS_SELECT_H is defined.

2023-03-17 Thread Thomas Munro
On Fri, Mar 17, 2023 at 7:48 PM jian he  wrote:
> Hi,
> playing around with $[0] testlibpq2.c example. I wondered where 
> HAVE_SYS_SELECT_H is defined?
>
> I searched on the internet and found that people have asked the same question
> in $[1].
>
> On my machine, I do have sys/select.h.
> system version: Ubuntu 22.04.1 LTS
> gcc version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
> gcc compile command: gcc pg_testlibpq2.c -I/home/jian/postgres/pg16/include \
> -L/home/jian/postgres/pg16/lib -lpq
>
> [0]https://www.postgresql.org/docs/current/libpq-example.html
> [1]https://stackoverflow.com/questions/37876850/in-compilation-time-how-to-find-the-macro-is-defined-in-which-header-file

In 15 and earlier, it is defined in pg_config.h, which is created by
configure.  But in 16, that particular macro was removed by commit
7e50b4e3c.  It looks like you are using PostgreSQL 16 sources, but
looking at PostgreSQL 15 examples?




Re: Delete values from JSON

2023-03-17 Thread Romain MAZIÈRE

Hi,

If it is jsonb type, you can have a look at the documentation : 
https://www.postgresql.org/docs/14/functions-json.html


There are some examples :

jsonb - text → jsonb

Deletes a key (and its value) from a JSON object, or matching string
value(s) from a JSON array.

'{"a": "b", "c": "d"}'::jsonb - 'a' → {"c": "d"}

'["a", "b", "c", "b"]'::jsonb - 'b' → ["a", "c"]

jsonb - text[] → jsonb

Deletes all matching keys or array elements from the left operand.

'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[] → {}

jsonb - integer → jsonb

Deletes the array element with specified index (negative integers count
from the end). Throws an error if JSON value is not an array.

'["a", "b"]'::jsonb - 1 → ["a"]

jsonb #- text[] → jsonb

Deletes the field or array element at the specified path, where path
elements can be either field keys or array indexes.

'["a", {"b":1}]'::jsonb #- '{1,b}' → ["a", {}]

Regards

Romain MAZIÈRE
romain.mazi...@sigmaz-consilium.fr
+33.535.545.085
+33.781.46.36.96
https://sigmaz-consilium.fr

On 17/03/2023 at 08:56, Andreas Joseph Krogh wrote:


Hi, in PG-14, how do I delete the "dunsNumber": "NaN" entries:

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ {
"keyInformation": { "dunsNumber": "NaN", "organizationType":
"LIMITED_COMPANY" } }, { "keyInformation": { "dunsNumber": "123",
"organizationType": "LIMITED_COMPANY" } } ], "nisse": 123 }

So that the result becomes:

{ "sessionId": "ce6fc9d0-5923-4d71-9f7e-ae60f35c45d6", "details": [ {
"keyInformation": { "organizationType": "LIMITED_COMPANY" } }, {
"keyInformation": { "dunsNumber": "123", "organizationType":
"LIMITED_COMPANY" } } ], "nisse": 123 }


Thanks.

--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com 





Re: nested xml/json to table

2023-03-17 Thread Wim Bertels
Thomas Kellerer wrote on Fri 17-03-2023 at 11:21 [+0100]:
> Wim Bertels schrieb am 17.03.2023 um 11:05:
> > what would be the general idea: "easily" convert a hierarchical
> > structure like JSON or XML to a table; for example, creating columns
> > by appending the key names while going down the tree, using null
> > for empty values, and adding more columns as needed by the given
> > structure. (1-way operation)
> 
> 
> You can't have a function that returns a different set of columns
> each time you call it
> (without specifying the output columns - which you don't want).

Hello Thomas,

thanks for the feedback,
I was wondering about existing built-in functions or extensions
(rather than CREATE FUNCTION)

Regards,
Wim




Re: Delete values from JSON

2023-03-17 Thread Andreas Joseph Krogh


On Friday 17 March 2023 at 11:56:22, Romain MAZIÈRE <
romain.mazi...@sigmaz-consilium.fr> wrote:
Hi,

If it is jsonb type, you can have a look at the documentation : 
https://www.postgresql.org/docs/14/functions-json.html 


There are some examples :


jsonb - text → jsonb

Deletes a key (and its value) from a JSON object, or matching string value(s) 
from a JSON array.

'{"a": "b", "c": "d"}'::jsonb - 'a' → {"c": "d"}

'["a", "b", "c", "b"]'::jsonb - 'b' → ["a", "c"]


jsonb - text[] → jsonb

Deletes all matching keys or array elements from the left operand.

'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[] → {}


jsonb - integer → jsonb

Deletes the array element with specified index (negative integers count from 
the end). Throws an error if JSON value is not an array.

'["a", "b"]'::jsonb - 1 → ["a"]


jsonb #- text[] → jsonb

Deletes the field or array element at the specified path, where path elements 
can be either field keys or array indexes.

'["a", {"b":1}]'::jsonb #- '{1,b}' → ["a", {}]

Regards

I have looked at the docs, but they don't, AFAIU, show how to conditionally
delete a key based on its value while leaving the other keys in the JSONB,
whose values don't match, alone.

I want to delete all keys in the (pseudo) path 
details.keyInformation[*].dunsNumber if the value is "NaN".
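One way to do this in PG-14 is to rebuild the details array, stripping dunsNumber from exactly those elements where it is "NaN". A sketch, assuming a hypothetical table t with the document in a jsonb column named data:

```sql
-- Hypothetical table/column names (t, data); adjust to your schema.
update t
set data = jsonb_set(
    data,
    '{details}',
    (select jsonb_agg(
            case
                when elem #>> '{keyInformation,dunsNumber}' = 'NaN'
                then elem #- '{keyInformation,dunsNumber}'  -- drop only this key
                else elem                                   -- keep element as-is
            end)
     from jsonb_array_elements(data -> 'details') as elem))
where data -> 'details' is not null;
```

The guard in the WHERE clause avoids setting details to SQL NULL for documents that lack the array.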






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


Re: src/test/examples/testlibpq2.c where the HAVE_SYS_SELECT_H is defined.

2023-03-17 Thread jian he
Yes, you are right; my mistake. I should have chosen the devel version of the manual.

On Fri, Mar 17, 2023 at 4:12 PM Thomas Munro  wrote:

> On Fri, Mar 17, 2023 at 7:48 PM jian he 
> wrote:
> > Hi,
> > playing around with $[0] testlibpq2.c example. I wondered where
> HAVE_SYS_SELECT_H is defined?
> >
> > I searched on the internet and found that people have asked the same
> > question in $[1].
> >
> > On my machine, I do have sys/select.h.
> > system version: Ubuntu 22.04.1 LTS
> > gcc version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
> > gcc compile command: gcc pg_testlibpq2.c
> -I/home/jian/postgres/pg16/include \
> > -L/home/jian/postgres/pg16/lib -lpq
> >
> > [0]https://www.postgresql.org/docs/current/libpq-example.html
> > [1]
> https://stackoverflow.com/questions/37876850/in-compilation-time-how-to-find-the-macro-is-defined-in-which-header-file
>
> In 15 and earlier, it is defined in pg_config.h, which is created by
> configure.  But in 16, that particular macro was removed by commit
> 7e50b4e3c.  It looks like you are using PostgreSQL 16 sources, but
> looking at PostgreSQL 15 examples?
>


-- 
 I recommend David Deutsch's <>

  Jian


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-17 Thread Peter J. Holzer
On 2023-03-16 11:52:47 +0100, Dominique Devienne wrote:
> On Thu, Mar 16, 2023 at 11:15 AM Pavel Stehule  
> wrote:
> On Thu 16 Mar 2023 at 9:55, Dominique Devienne wrote:
> That's a nice way to put it Pavel.
> And to have it both ways, use COPY in binary protocol?
[...]
> The performance benefit will be minimal ([...]).
> 
> COPY matters on INSERT for sure performance-wise.
> So why wouldn't COPY matter for SELECTs too?

COPY is faster than a bunch of INSERTs because each INSERT has some
overhead: It needs to be parsed (if you PREPAREd the INSERT you need to
parse the EXECUTE command instead) and planned. But most importantly you
have a round trip time between the client and the server. With COPY you
incur that overhead only once.

(Which reminds me that I should benchmark INSERT with lots of VALUES
against COPY some time.)
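For reference, the two variants that parenthetical would compare look roughly like this (hypothetical table t(a int, b text); a sketch of the shapes involved, not a benchmark):

```sql
-- Multi-row INSERT: one statement, so one parse/plan and one round trip
-- for many rows.
insert into t (a, b) values (1, 'one'), (2, 'two'), (3, 'three');

-- COPY: streams rows in a single round trip with minimal per-row overhead.
copy t (a, b) from stdin;
1	one
2	two
\.
```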

With COPYing the output of a SELECT I don't see any savings. On the
contrary, it's an extra step.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: EXTERNAL: Re: "No Free extents", table using all allocated space but no rows!

2023-03-17 Thread Peter J. Holzer
On 2023-03-16 20:24:49 +0100, Laurenz Albe wrote:
> On Thu, 2023-03-16 at 13:20 +, Dolan, Sean wrote:
> > I messed up and confused issues.   The error is :  
> > ERROR: Could not extend pg_tblspc/16555/PG_13_20200//  No space 
> > left on device
> > HINT: Check free disk space
> > 
> > So the schema is "full" and the offender is this one table.
> > I can't TRUNCATE as there needs to be space to perform the action.
> > Is there a way to see if there is a transaction on that table like you 
> > allude to?
> 
> If you don't have enough space to run TRUNCATE, and you don't feel like
> extending the disk space, DROP TABLE would be a convenient alternative.

If you need to get just a little bit free space, you may be able to
reduce the reserved space on the file system. For example, on ext4 there
are typically 5 % reserved for root, so by reducing that to 2 % you get
about 3 % of extra space:

kitty:~/tmp 0:26 :-) 19% dd if=/dev/zero of=fill bs=1024k
dd: error writing 'fill': No space left on device

kitty:~/tmp 0:27 :-( 20% df -h .
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/kitty--vg-home  4.8G  4.5G 0 100% /home

kitty:~/tmp 0:27 :-) 21% echo test > small
echo: write error: no space left on device

kitty# tune2fs -m 2 /dev/kitty-vg/home
tune2fs 1.46.2 (28-Feb-2021)
Setting reserved blocks percentage to 2% (25784 blocks)

kitty:~/tmp 0:27 :-( 22% df -h .
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/kitty--vg-home  4.8G  4.5G  152M  97% /home

kitty:~/tmp 0:29 :-) 23% echo test > small

kitty:~/tmp 0:29 :-) 24% cat small
test

(You should restore the reserved space to the default afterwards. The
BSD file system and its descendants (like ext4) don't like getting
completely full.)

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Can't connect to server

2023-03-17 Thread Wu, Abigaile
I am writing to seek your assistance with a problem I am facing while using
pgAdmin 4. After installing PostgreSQL 15, I downloaded the latest version
of pgAdmin 4, clicked the server, and chose PostgreSQL 15. Whenever I try to
log in to the server after entering my password, I receive the following
error message:

"Connection to server at "localhost"(::1), port 5432 failed: could not initiate 
GSSAPI security context: The operation or option is not available: credential 
handle connection to server at "localhost" (::1), port 5432 failed: FATAL: 
password authentication failed for user "postgres"."

Could you please advise me on how to resolve this issue? I would be grateful if 
you could provide me with any instructions or guidance on what steps I can take 
to fix this problem and gain access to the server.

Thank you for your time and assistance. I look forward to hearing from you soon.



Abigaile Wu

Pronouns: She/Her/Hers

Full-Time MBA Candidate 2023

Graduate Assistant - Admission Team

Broad College of Business

Michigan State University

C: (517) 329 - 5793

E: wuchi...@msu.edu


