Re: Expression of check constraint

2019-07-04 Thread rob stone
Hello,

On Thu, 2019-07-04 at 05:58 +, Dirk Mika wrote:
> Hi,
>  
> if I add the following check constraint to a table:
>  
> ALTER TABLE public.times_places
>ADD CONSTRAINT ck_tp_ratified CHECK
>   (ratified IS NULL OR (ratified IN ('Y', 'N')));
>  
> It becomes the following when describing the table in psql:
>  
> Check constraints:
> "ck_tp_ratified" CHECK (ratified IS NULL OR (ratified::text = ANY
> (ARRAY['Y'::character varying, 'N'::character varying]::text[])))
>  
> The behavior of the check constraint is logically identical and this
> seems plausible to me, but I still wonder why:
> 1. Does the expression x IN (a, b) become the expression x = ANY(ARRAY[a, b])?
> 2. Why is the array expression cast so wildly? First to character varying and then to text[]?
> 3. The column ratified is of type character varying(1). Why is it cast to text?
>  
> Dirk
>  -- 
> Dirk Mika
> Software Developer
> 
>  


Why don't you define "ratified" as CHAR(1)?

AFAIK, constraint evaluation is based upon the column's underlying data
type.

Cheers,
Robert
 







Re: Expression of check constraint

2019-07-04 Thread Dirk Mika
Hi


--
Dirk Mika
Software Developer

mika:timing GmbH
Strundepark - Kürtener Str. 11b
51465 Bergisch Gladbach
Germany

fon +49 2202 2401-1197
dirk.m...@mikatiming.de
www.mikatiming.de

AG Köln HRB 47509 * WEEE-Reg.-Nr. DE 90029884
Managing Directors: Harald Mika, Jörg Mika



## How2Use
## the ChampionChip by mika:timing
## https://youtu.be/qfOFXrpSKLQ
On 04.07.19, 10:50, "rob stone" wrote:

Why don't you define "ratified" as CHAR(1)?

This will change the constraint to

Check constraints:
"ck_tp_ratified" CHECK (ratified IS NULL OR (ratified = ANY 
(ARRAY['Y'::bpchar, 'N'::bpchar])))

Less casting.
Does casting from one "string" type to another "string" type have any
performance impact, btw?

AFAIK, constraint evaluation is based upon the column's underlying data
type.

But aren't char and varchar both stored in the same data structure?

Dirk







Help with a selection

2019-07-04 Thread paul.malm
Hi,
I have a column named col (varchar)

col could look like this
902930
902920
902900
903110
913210
913200
...
I would like to remove an object that doesn't end with '00' if there is an
object that starts with the same 4 characters and ends with '00'.
All objects ending with '00' shall remain.
An object not ending with '00' shall remain if there is no object starting with
the same 4 characters and ending with '00'.

The result of the col table should be:
902900
903110
913200

903110 is not removed because there is no 903100
913210 is removed because there is a 913200
902930 and 902920 are removed because there is 902900

I hope you understand the logic; perhaps there is no logic in my explanation.
Thanks in advance,
Paul
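
Assuming the rows live in a table called t (the poster only gave the column
name col, so the table name is a guess) and the codes are all 6 characters,
one way to sketch the deletion is an EXISTS test on the first 4 characters:

```sql
-- Delete every row not ending in '00' for which a sibling row
-- (same first 4 characters, ending in '00') exists.
DELETE FROM t
WHERE t.col NOT LIKE '%00'
  AND EXISTS (SELECT 1
              FROM t AS t2
              WHERE t2.col = left(t.col, 4) || '00');
```

With the sample data this removes 913210 (because 913200 exists) and
902930/902920 (because 902900 exists), while keeping 903110, since there is
no 903100 row.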


Re: Expression of check constraint

2019-07-04 Thread Laurenz Albe
Dirk Mika wrote:
> if I add the following check constraint to a table:
>  
> ALTER TABLE public.times_places
>ADD CONSTRAINT ck_tp_ratified CHECK
>   (ratified IS NULL OR (ratified IN ('Y', 'N')));
>  
> It becomes the following when describing the table in psql:
>  
> Check constraints:
> "ck_tp_ratified" CHECK (ratified IS NULL OR (ratified::text = ANY 
> (ARRAY['Y'::character varying, 'N'::character varying]::text[])))
>  
> The behavior of the check constraint is logically identical and this seems 
> plausible to me, but I still wonder why:
> 1. Does the expression x IN (a, b) become the expression x = ANY(ARRAY[a, b])?

Because that's what the PostgreSQL query parser makes out of an IN list.

> 2. Why is the array expression cast so wildly? First to character varying and then to text[]?

Because "text" is the preferred string type, and there is no "=" operator for
"character varying".
But don't worry, casting "character varying" to "text" doesn't cost anything,
since the types are binary coercible (the storage is the same).
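
That binary-coercible relationship can be checked in the catalogs; a
castmethod of 'b' in pg_cast means no conversion function runs at all:

```sql
SELECT castsource::regtype, casttarget::regtype, castmethod
FROM pg_cast
WHERE castsource = 'character varying'::regtype
  AND casttarget = 'text'::regtype;
-- castmethod 'b' = binary coercible: the varchar-to-text cast is free
```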

> 3. The column ratified is of type character varying(1). Why is it cast to text?

See 2. above.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Error: rows returned by function are not all of the same row type

2019-07-04 Thread Andrey Sychev
Hi, everyone,

I have written a C-language function that returns
multiple composite rows.

Generally the function works as expected, but sometimes a problem occurs.
At a rough guess, the problem occurs when the number of returned rows is
relatively large (more than 100K - 1M).

I have added some checkpoints; P5 and P6 are present in the snippet.

The function always reaches checkpoint P5, but when the number of
returned rows is relatively large, it sometimes returns this error
before reaching P6:

"rows returned by function are not all of the same row type"

Supposedly it happens at some iteration of SRF_RETURN_NEXT.

Any ideas?

Below is a snippet of code:

#include "postgres.h"
#include "funcapi.h"
#include "executor/spi.h"

Datum my_func(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(my_func);

Datum
my_func(PG_FUNCTION_ARGS)
{
  FuncCallContext *funcctx;
  int  call_cntr;
  int  max_calls;
  TupleDesc   tupleDesc;

  /* Build a tuple descriptor for our result type */
  if(get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
  {
ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("Function 
returning record called in context that cannot accept type record")));
  }

  if(SRF_IS_FIRSTCALL())
  {
MemoryContext oldcontext;
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

My_SPI_call_context ctx;
memset(&ctx, 0, sizeof(My_SPI_call_context));

int ret;

/* Connect to SPI manager */
if((ret = SPI_connect()) < 0)
{
  /* internal error */
  elog(ERROR, "spi_match: SPI_connect returned %d", ret);
  SPI_finish();
  PG_RETURN_VOID();
}

/* some setup code */

const char* stSQLDef_0[1] = {
"CREATE TEMPORARY TABLE results (v1 BIGINT NOT NULL, v2 INTEGER NOT NULL)",
};

for(int k=0; k<1; k++)
{
  ret = SPI_exec(stSQLDef_0[k], 0);
  if(ret != SPI_OK_UTILITY)
  {
elog(ERROR, "SPI_exec (0)-(%d) returned %d", k, ret);
my_spi_free_context(&ctx);
PG_RETURN_VOID();
  }
}

/* many code */

const char* stSQLResultsInsert = "INSERT INTO results (v1, v2) VALUES (%ld, %d)";

uint64 proc;
SPITupleTable* spi_tuptable;

/* [several lines were mangled by the list archive here: a loop formatting
   and executing the INSERT statements, then an SPI_execute of a SELECT on
   the temporary table that sets ret and proc = SPI_processed] */

if(ret != SPI_OK_SELECT || proc <= 0)
{
  funcctx->max_calls = 0;
  funcctx->user_fctx = NULL;
  if(proc <= 0) ereport(NOTICE, (errmsg("SPI_execute (10) returned %d", ret)));
}
else if(proc)
{
  spi_tuptable = SPI_tuptable;

  funcctx->max_calls = proc;
  funcctx->user_fctx = spi_tuptable;
}

my_spi_free_context(&ctx);

tupleDesc = BlessTupleDesc(tupleDesc);
funcctx->tuple_desc = tupleDesc;
MemoryContextSwitchTo(oldcontext);

  }

  funcctx = SRF_PERCALL_SETUP();

  call_cntr = funcctx->call_cntr;
  max_calls = funcctx->max_calls;

  if(call_cntr < max_calls)
  {
SPITupleTable* table = (SPITupleTable*)funcctx->user_fctx;
Datum   results; /* Results tuple */
Datum   column[2];
boolisColumnNull[2];
HeapTuple   tuple;
int m;

if(table)
{
  for(m=0; m<2; m++)
  {
column[m] = SPI_getbinval(table->vals[call_cntr], table->tupdesc, m+1, 
&isColumnNull[m]);
  }

  tuple = heap_form_tuple(funcctx->tuple_desc, column, isColumnNull);
  results = HeapTupleGetDatum(tuple);
  SRF_RETURN_NEXT(funcctx, results);
}
  }
  else
  {
int ret;
SPITupleTable* table = (SPITupleTable*)funcctx->user_fctx;
if(table)
{
  SPI_freetuptable(table);
}

ereport(NOTICE, (errmsg("P6: [%s]-(%d)", (const char*)__FUNCTION__, 
max_calls)));

ret = SPI_exec("DROP TABLE results", 0);
if(ret != SPI_OK_UTILITY)
{
  elog(ERROR, "spi_match: SPI_exec (20) returned %d", ret);
}

SPI_finish();
SRF_RETURN_DONE(funcctx);
  }

  PG_RETURN_VOID();
}

-- 
Best regards,

Andrey Sychev

andrey.syc...@cifrasoft.com





Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread Thomas Kellerer
Why does 

   select jsonb_set('{"foo": 1}'::jsonb, '{bar}', to_jsonb(null::int), true)

return NULL when all it should do is to add a second key?

I would expect {"foo": 1, "bar": null} or no change at all to the original JSON 
value, but not that the whole JSON is set to null. 

In the original case the new value to be set was the result of an expression, 
not a "hardcoded" null value.
 
Thomas







Re: Error: rows returned by function are not all of the same row type

2019-07-04 Thread Tom Lane
Andrey Sychev  writes:
> I have written a C-language function that returns
> multiple composite rows.
> Generally the function works as expected, but sometimes a problem occurs.
> At a rough guess the problem occurs when the number of returned rows is
> relatively large (more than 100K - 1M).

I do not think it's valid to return from your function with the SPI
context still open.  Probably, it seems to accidentally sort of work
as long as you don't return enough rows to cause the outer query to
do anything interesting like spill to disk.

Probably you should re-code this to execute just once and return
a tuplestore.

regards, tom lane
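
The approach Tom suggests — do all the SPI work in a single call and hand
back a tuplestore — looks roughly like this in the materialize protocol.
This is a hedged sketch, not the poster's actual code; the function name
my_func and the SELECT query are placeholders:

```c
#include "postgres.h"
#include "funcapi.h"
#include "executor/spi.h"
#include "miscadmin.h"          /* work_mem */
#include "utils/tuplestore.h"

PG_FUNCTION_INFO_V1(my_func);

Datum
my_func(PG_FUNCTION_ARGS)
{
    ReturnSetInfo   *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
    TupleDesc        tupdesc;
    Tuplestorestate *tupstore;
    MemoryContext    per_query_ctx;
    MemoryContext    oldcontext;

    /* caller must be able to accept a materialized result set */
    if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("set-valued function called in context that cannot accept a set")));

    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        elog(ERROR, "return type must be a row type");

    /* the tuplestore must live in the per-query context, not per-call */
    per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
    oldcontext = MemoryContextSwitchTo(per_query_ctx);
    tupstore = tuplestore_begin_heap(true, false, work_mem);
    rsinfo->returnMode = SFRM_Materialize;
    rsinfo->setResult = tupstore;
    rsinfo->setDesc = CreateTupleDescCopy(tupdesc);
    MemoryContextSwitchTo(oldcontext);

    /* do ALL the SPI work here, before returning */
    SPI_connect();
    SPI_execute("SELECT v1, v2 FROM results", true, 0);
    for (uint64 i = 0; i < SPI_processed; i++)
    {
        Datum values[2];
        bool  nulls[2];

        for (int m = 0; m < 2; m++)
            values[m] = SPI_getbinval(SPI_tuptable->vals[i],
                                      SPI_tuptable->tupdesc, m + 1, &nulls[m]);
        /* copies the row into the tuplestore, so it survives SPI_finish() */
        tuplestore_putvalues(tupstore, tupdesc, values, nulls);
    }
    SPI_finish();               /* SPI context is closed before we return */

    return (Datum) 0;
}
```

The executor then reads rows out of the tuplestore itself, so there is no
per-call state left dangling across an open SPI context.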




Re: Expression of check constraint

2019-07-04 Thread Tom Lane
Dirk Mika  writes:
> 3. The column ratified is of type character varying(1). Why is it cast to text?

Type varchar in Postgres is a poor stepchild without any operators
of its own.  Text is the "native" string type and any comparison etc
will require casting varchar to text first.  It happens that the
expression decompilation code will show you those implicit casts
explicitly, but they don't really mean much.

regards, tom lane




PostgreSQL upgrade from 9.4.2 to 9.6.12

2019-07-04 Thread Ashwini Singh
Hi Everyone,

We have a legacy application running on Ruby on Rails on AWS
infrastructure. We wanted to do a DB upgrade from PostgreSQL version 9.4.2
to 9.6.12.

We tried to find any breaking changes for the 9.4.x to 9.6.x upgrade but
could not find any in the documentation.

Please let us know if PostgreSQL version 9.6.12 is backwards compatible
with version 9.4.2, or, if there are breaking changes, what they are.

Thank you.

Regards,
Ashwini


Re: PostgreSQL upgrade from 9.4.2 to 9.6.12

2019-07-04 Thread Adrian Klaver

On 7/4/19 7:48 AM, Ashwini Singh wrote:

Hi Everyone,

We have a legacy application running on Ruby on Rails on AWS 
infrastructure. We wanted to do a DB upgrade from PostgreSQL version 
9.4.2 to 9.6.12.


We tried to find any breaking changes for the 9.4.x to 9.6.x upgrade
but could not find any in the documentation.


Please let us know if PostgreSQL version 9.6.12 is backwards compatible
with version 9.4.2, or, if there are breaking changes, what they are.


Breaking changes rather depend on what you are doing in your app.
Postgres will have backwards-incompatible changes between major
releases. Prior to Postgres version 10, a major version was any version that
changed the X in X.X.y in the versioning. For 10+ it is the X in X.y. To see
what backwards-incompatible changes occurred, go to the Release Notes
for the first release of each new major version (change in X). So for your
migration path:

https://www.postgresql.org/docs/9.5/release-9-5.html

https://www.postgresql.org/docs/9.6/release-9-6.html

The big (incompatible) changes will be listed under:

Migration to Version

Changes are cumulative.




Thank you.

Regards,
Ashwini



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread David G. Johnston
On Thu, Jul 4, 2019 at 6:18 AM Thomas Kellerer  wrote:

> Why does
>
>select jsonb_set('{"foo": 1}'::jsonb, '{bar}', to_jsonb(null::int),
> true)
>
> return NULL when all it should do is to add a second key?
>

Both functions involved are defined as being STRICT (null on null input).
You need to COALESCE SQL NULL to JSON 'null' to obtain the result you
desire.  This is a general truth when dealing with the JSON type in
PostgreSQL.

select jsonb_set('{"foo": 1}'::jsonb, '{bar}',
coalesce(to_jsonb(null::int), 'null'), true)

David J.


Re: pg_dump (COPY) hanging intermittently

2019-07-04 Thread Ben Snaidero
On Thu, Jun 27, 2019 at 3:19 PM Tom Lane  wrote:

> Ben Snaidero  writes:
> > Do these stack traces shed help at all?
>
> None worth mentioning :-(.  Can you rebuild with debug symbols?
>
> regards, tom lane
>


So I've rebuilt with debug symbols and for some reason I am now unable to
reproduce the issue.  I've also gone back to the original binaries and can
no longer reproduce the issue there either.  I am starting to think this might
be network related (storage is network attached), as I haven't even rebooted
the server I have been using to test.

Thanks for all your help.  At least I learned how to build from the source
(with debug) so I gained something from working through this issue.


Re: PostgreSQL upgrade from 9.4.2 to 9.6.12

2019-07-04 Thread Jonathan Harden
If you're doing this in RDS make sure to check what extensions each of your
databases in your RDS instance has (we use PostGIS and hstore for example)
and upgrade them incrementally. AWS provide the right libs installed to do
those updates between each engine version, but not necessarily between
engine versions more than 1 major version apart.

You can see which versions of the extensions are installed in RDS in each
engine version on this page:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.Extensions


Converting to identity columns with domains on PK columns

2019-07-04 Thread PegoraroF10
Domains in Postgres are really strange to me. I am creating a domain which is
exactly equal to integer, right?

create domain i32 as integer;
create domain T50 as varchar(50);

Create table MyTable(
ID I32 not null primary key,
Description T50);

Then, after inserts and updates done to that table, I want to convert that
primary key to a identity column.

alter table MyTable alter ID add generated always as identity;

ERROR: identity column type must be smallint, integer, or bigint

So, what do I need to do to create this identity column?
Why does Postgres consider I32 and integer different?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: Converting to identity columns with domains on PK columns

2019-07-04 Thread Adrian Klaver

On 7/4/19 12:41 PM, PegoraroF10 wrote:

Domains in Postgres are really strange to me. I am creating a domain which is
exactly equal to integer, right?

create domain i32 as integer;
create domain T50 as varchar(50);

Create table MyTable(
ID I32 not null primary key,
Description T50);

Then, after inserts and updates done to that table, I want to convert that
primary key to a identity column.

alter table MyTable alter ID add generated always as identity;

ERROR: identity column type must be smallint, integer, or bigint

So, what do I need to do to create this identity column?
Why does Postgres consider I32 and integer different?


Because one (integer) is a base type and the other (I32) is a domain over a
base type. Domains can have restrictions on what is accepted, so I can see
why they would not be good candidates for a sequence (identity).


Solutions:

1) Create a new integer column for the identity.

2) alter table MyTable alter ID type integer;

   alter table MyTable alter ID add generated always as identity;

\d mytable

                          Table "public.mytable"
   Column    |  Type   | Collation | Nullable |           Default
-------------+---------+-----------+----------+------------------------------
 id          | integer |           | not null | generated always as identity
 description | t50     |           |          |














--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Converting to identity columns with domains on PK columns

2019-07-04 Thread PegoraroF10
ok, thanks for the explanation, but ... I cannot add a field and move the data,
constraints, triggers, and identity to it, because my pk field would then be
repositioned to the last field of that table, and I have lots of other code
which relies on the pk being the first field of every table.

So, is there a way to convert that field to an identity field?








Re: Converting to identity columns with domains on PK columns

2019-07-04 Thread Adrian Klaver

On 7/4/19 1:03 PM, PegoraroF10 wrote:

ok, thanks for the explanation, but ... I cannot add a field and move the data,
constraints, triggers, and identity to it, because my pk field would then be
repositioned to the last field of that table, and I have lots of other code
which relies on the pk being the first field of every table.


I stay away from using the index position of a field for this reason.



So, is there a way to convert that field to an identity field?


See my second option in previous post.

Or, old school identity column:):

create sequence id_seq owned by mytable.id;

alter table mytable alter column id set default nextval('id_seq');

\d mytable
 Table "public.mytable"
   Column| Type | Collation | Nullable |   Default
-+--+---+--+-
 id  | i32  |   | not null | nextval('id_seq'::regclass)
 description | t50  |   |  |
Indexes:
"mytable_pkey" PRIMARY KEY, btree (id)













--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread Thomas Kellerer

David G. Johnston wrote on 04.07.2019 at 18:20:

On Thu, Jul 4, 2019 at 6:18 AM Thomas Kellerer wrote:

Why does

    select jsonb_set('{"foo": 1}'::jsonb, '{bar}', to_jsonb(null::int), 
true)

return NULL when all it should do is to add a second key?


Both functions involved are defined as being STRICT (null on null
input).  You need to COALSESCE SQL NULL to JSON 'null' to obtain the
result you desire.  This is a general truth when dealing with the
JSON type in PostgreSQL.


But jsonb_set() doesn't (or shouldn't) change the whole value, only one key.

I can understand that the "bar" key would not be set (because of the NULL), but
removing a key that isn't even part of the target path looks like a bug to me.

Thomas

 






Re: Converting to identity columns with domains on PK columns

2019-07-04 Thread Adrian Klaver

On 7/4/19 1:27 PM, Adrian Klaver wrote:

On 7/4/19 1:03 PM, PegoraroF10 wrote:
ok, thanks for the explanation, but ... I cannot add a field and move the data,
constraints, triggers, and identity to it, because my pk field would then be
repositioned to the last field of that table, and I have lots of other code
which relies on the pk being the first field of every table.


I stay away from using the index position of field for this reason.



So, is there a way to convert that field to an identity field?


See my second option in previous post.

Or, old school identity column:):

create sequence id_seq owned by mytable.id;

alter table mytable alter column id set default nextval('id_seq');

\d mytable
  Table "public.mytable"
    Column    | Type | Collation | Nullable |   Default
-+--+---+--+-
  id  | i32  |   | not null | nextval('id_seq'::regclass)
  description | t50  |   |  |
Indexes:
     "mytable_pkey" PRIMARY KEY, btree (id)




I know this worked, but then I got to wondering why.

Found the answer in sequence.c (init_params), in the if (as_type != NULL)
section. When creating a sequence you can specify AS data_type as long
as the type is one of smallint, integer, or bigint. If data_type is not
specified, the default is bigint. If I am following correctly, in
tablecmds.c, when you create an IDENTITY column it uses the type it
gets from the column for the AS data_type. In your case that would be a
domain type, which is != to the base types above.
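
That restriction is easy to see directly from SQL (PostgreSQL 10 or later,
where CREATE SEQUENCE ... AS exists):

```sql
CREATE SEQUENCE ok_seq AS smallint;   -- accepted
CREATE SEQUENCE bad_seq AS numeric;   -- rejected with an error along the
                                      -- lines of: sequence type must be
                                      -- smallint, integer, or bigint
```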




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread Gianni Ceccarelli
Some experimentation:

> \pset null '((null))'

> select jsonb_set('{"foo":1}'::jsonb,'{bar}','null'::jsonb,true);
┌─┐
│jsonb_set│
├─┤
│ {"bar": null, "foo": 1} │
└─┘


> select jsonb_set('{"foo":1}'::jsonb,'{bar}',to_jsonb(null::text),true);
┌───┐
│ jsonb_set │
├───┤
│ ((null))  │
└───┘

That's a bit weird already. Also:

> select null::jsonb;
┌──┐
│  jsonb   │
├──┤
│ ((null)) │
└──┘

> select 'null'::jsonb;
┌───┐
│ jsonb │
├───┤
│ null  │
└───┘

> select to_jsonb(null::int);
┌──┐
│ to_jsonb │
├──┤
│ ((null)) │
└──┘

> select to_jsonb('null'::text);
┌──┐
│ to_jsonb │
├──┤
│ "null"   │
└──┘

I'm sharing Thomas's confusion…

-- 
Dakkar - 
GPG public key fingerprint = A071 E618 DD2C 5901 9574
 6FE2 40EA 9883 7519 3F88
key id = 0x75193F88

Work continues in this area.
-- DEC's SPR-Answering-Automaton




Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread David G. Johnston
On Thursday, July 4, 2019, Gianni Ceccarelli  wrote:

> Some experimentation:
>
> > \pset null '((null))'
>
> > select jsonb_set('{"foo":1}'::jsonb,'{bar}','null'::jsonb,true);
> ┌─┐
> │jsonb_set│
> ├─┤
> │ {"bar": null, "foo": 1} │
> └─┘


No SQL null, ok

>
>
> > select jsonb_set('{"foo":1}'::jsonb,'{bar}',to_jsonb(null::text),true);
> ┌───┐
> │ jsonb_set │
> ├───┤
> │ ((null))  │
> └───┘


Sql null poisons the expression and so sql null is the result


>
> That's a bit weird already. Also:
>
> > select null::jsonb;
> ┌──┐
> │  jsonb   │
> ├──┤
> │ ((null)) │
> └──┘
>
>
Sql null


> > select 'null'::jsonb;
> ┌───┐
> │ jsonb │
> ├───┤
> │ null  │
> └───┘
>
>
Json null


> > select to_jsonb(null::int);
> ┌──┐
> │ to_jsonb │
> ├──┤
> │ ((null)) │
> └──┘
>
>
Sql null poisons the function call which immediately returns sql null


> > select to_jsonb('null'::text);
> ┌──┐
> │ to_jsonb │
> ├──┤
> │ "null"   │
> └──┘
>
>
Json null


> I'm sharing Thomas's confusion…
>
>
SQL null and JSON null are represented differently; strict functions with
SQL null inputs yield SQL null output without even executing the function.

David J.


Re: Converting to identity columns with domains on PK columns

2019-07-04 Thread Adrian Klaver

On 7/4/19 1:27 PM, Adrian Klaver wrote:

On 7/4/19 1:03 PM, PegoraroF10 wrote:
ok, thanks for the explanation, but ... I cannot add a field and move the data,
constraints, triggers, and identity to it, because my pk field would then be
repositioned to the last field of that table, and I have lots of other code
which relies on the pk being the first field of every table.


I stay away from using the index position of field for this reason.



So, is there a way to convert that field to an identity field?


See my second option in previous post.

Or, old school identity column:):

create sequence id_seq owned by mytable.id;

alter table mytable alter column id set default nextval('id_seq');

\d mytable
  Table "public.mytable"
    Column    | Type | Collation | Nullable |   Default
-+--+---+--+-
  id  | i32  |   | not null | nextval('id_seq'::regclass)
  description | t50  |   |  |
Indexes:
     "mytable_pkey" PRIMARY KEY, btree (id)



Further thinking pointed up a peril of the above. Your field is
defined as integer, and per my previous post a sequence without an AS
data_type will be bigint. At some point the sequence is going to start
setting nextval() to a number your field cannot hold. If you go
that route you will need to do something like:


create sequence id_seq AS integer owned by mytable.id;


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-04 Thread David G. Johnston
On Thu, Jul 4, 2019 at 2:09 PM David G. Johnston 
wrote:

> On Thursday, July 4, 2019, Gianni Ceccarelli 
> wrote:
>
>>
>> > select to_jsonb('null'::text);
>> ┌──┐
>> │ to_jsonb │
>> ├──┤
>> │ "null"   │
>> └──┘
>>
>>
> Json null
>
>
Sorry, my bad on this last one.  You cannot use to_jsonb to construct a
JSON null.  The only way, I think, to construct a JSON null scalar is with
an explicit literal.

SELECT 'null'::jsonb;

The to_jsonb function always interprets a textual value passed to it as
literal text, and so the original query results in a JSON string
whose content is "null".

David J.