Loops and Case Statements Involving Dates

2023-08-21 Thread Anthony Apollis
Please review my code and make recommendations where needed. I have this code:
```
NUMBER OF LOOPS FOR POSTGRESQL ETL:
SELECT
CASE
WHEN (((EXTRACT(DAY FROM
((CASE
WHEN
(SELECT DISTINCT "CALDAY" FROM
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT 
MAX(DISTINCT "CALDAY") AS
"Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS
WEEK_NUMBER)) IS NULL THEN '2020-07-01'
ELSE
(SELECT DISTINCT "CALDAY" FROM
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT 
MAX(DISTINCT "CALDAY") AS
"Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS
WEEK_NUMBER)) + interval '1 day'
END))::timestamp - --start date
(CAST(NOW() - INTERVAL '1 day' AS 
DATE))::timestamp)-1) --end date
* -1)) <= 30 THEN 1
ELSE
CEIL(((EXTRACT(DAY FROM
((CASE
WHEN
(SELECT DISTINCT "CALDAY" FROM
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT 
MAX(DISTINCT "CALDAY") AS
"Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS
WEEK_NUMBER)) IS NULL THEN '2020-07-01'
ELSE
(SELECT DISTINCT "CALDAY" FROM
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT 
MAX(DISTINCT "CALDAY") AS
"Calendar day" FROM system."IMETA_ZINV_AP1_Invoice_data_TA_BW"" AS
WEEK_NUMBER)) + interval '1 day'
END))::timestamp - --start date
(CAST(NOW() - INTERVAL '1 day' AS 
DATE))::timestamp)-1) --end date
* -1)/30) --30 DAY INTERVALS UNLESS LESS
END
AS "Number of days"
```

I have rewritten this code to make it less complex, while still doing what
it is supposed to do. I want you to review my rewritten code (the code
must give me a report up until yesterday):
```
WITH MaxDateCTE AS (
SELECT
COALESCE(MAX(DISTINCT "CALDAY"), '2021-07-01') AS MaxDate
FROM
"system"."IMETA_ZTRNSPCST$F_Shipment_Cost_TA_BW"
)

SELECT
CASE
WHEN EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp
- (NOW() - INTERVAL '1 day')::timestamp) <= 30 THEN 1
ELSE CEIL(EXTRACT(DAY FROM (MaxDate + INTERVAL '1
day')::timestamp - (NOW() - INTERVAL '1 day')::timestamp) / 30)
END AS "Number of days"
FROM
MaxDateCTE;
```


Full Code can be found here:
https://drive.google.com/file/d/1NaoaK0z3s3cfYilAdH4stJ1F6mq6Sc4n/view?usp=sharing


Re: Loops and Case Statements Involving Dates

2023-08-21 Thread jian he
On Mon, Aug 21, 2023 at 3:07 PM Anthony Apollis
 wrote:
>
> Please review my code and make recommendations where needed. I have this code:
> ```
> NUMBER OF LOOPS FOR POSTGRESQL ETL:
> SELECT
> CASE
> WHEN (((EXTRACT(DAY FROM
> ((CASE
> WHEN
> (SELECT DISTINCT "CALDAY" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” 
> AS WEEK_NUMBER
> WHERE
> WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT "CALDAY") AS "Calendar day" FROM 
> “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER)) IS NULL THEN 
> '2020-07-01'
> ELSE
> (SELECT DISTINCT "CALDAY" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” 
> AS WEEK_NUMBER
> WHERE
> WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT "CALDAY") AS "Calendar day" FROM 
> “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER)) + interval '1 
> day'
> END))::timestamp - --start date
> (CAST(NOW() - INTERVAL '1 day' AS DATE))::timestamp)-1) --end date
> * -1)) <= 30 THEN 1
> ELSE
> CEIL(((EXTRACT(DAY FROM
> ((CASE
> WHEN
> (SELECT DISTINCT "CALDAY" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” 
> AS WEEK_NUMBER
> WHERE
> WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT "CALDAY") AS "Calendar day" FROM 
> “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER)) IS NULL THEN 
> '2020-07-01'
> ELSE
> (SELECT DISTINCT "CALDAY" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” 
> AS WEEK_NUMBER
> WHERE
> WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT "CALDAY") AS "Calendar day" FROM 
> system."IMETA_ZINV_AP1_Invoice_data_TA_BW"" AS WEEK_NUMBER)) + interval '1 
> day'
> END))::timestamp - --start date
> (CAST(NOW() - INTERVAL '1 day' AS DATE))::timestamp)-1) --end date
> * -1)/30) --30 DAY INTERVALS UNLESS LESS
> END
> AS "Number of days"
>

“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW”
some of those characters look like double quotes, but they may not
actually be plain ASCII double quotes.
wiki: https://en.wikipedia.org/wiki/Quotation_mark.
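
For reference, a sketch of the same lookup with plain ASCII double quotes
around the schema and table names (assuming the identifiers really are spelled
exactly like this in the database):

SELECT MAX("CALDAY") AS "Calendar day"
FROM "system"."IMETA_ZACTPRIC1_Cust_Pricing_TA_BW";

Curly/typographic quotes pasted into psql are not recognized as identifier
quoting, so the statement as mailed will typically fail with a syntax error.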




Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Rihad

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by 
autovacuum got zeroed, and started accumulating from scratch. Some 
tables get acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of 100K of 
n_live_tup after which autovac/analyze kicks in, or it seems so.


To know rather than guess, read:

https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM



Sure, I read it before asking.

Taking the first table in the list as an example:


 relname | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |   left
---------+------------+------------+----------+---------------------+---------
 fooo    |      32781 |     240663 |  -234057 |              513265 | -509937



n_dead_tup (not the actual value, but some time after calling 
pg_stat_reset) is much larger than 20% of n_live_tup 32781, and 
n_mod_since_analyze is much larger than 10% of it.


Yet it is kept unvacuumed and unanalyzed for a long time.

autovacuum_(vacuum|analyze)_threshold is 50.

What am I missing?

There are still many tables waiting for their turn, which is long overdue.

Although there are some tables having only 60-70 (not 60-70K) 
n_live_tup that have had autovacuum run on them. Weird.
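
For checking which tables should already be due, a sketch of a query against
pg_stat_user_tables, assuming the default thresholds and scale factors
(autovacuum_vacuum_threshold/autovacuum_analyze_threshold = 50,
autovacuum_vacuum_scale_factor = 0.2, autovacuum_analyze_scale_factor = 0.1)
and using reltuples, which is what the documented formulas are based on:

SELECT s.relname,
       s.n_dead_tup,
       50 + 0.2 * c.reltuples AS vacuum_due_at,
       s.n_mod_since_analyze,
       50 + 0.1 * c.reltuples AS analyze_due_at
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY s.n_dead_tup DESC;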

Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Rihad

On 8/21/23 00:15, Adrian Klaver wrote:

On 8/20/23 12:10, Rihad wrote:

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by 
autovacuum got zeroed, and started accumulating from scratch. Some 
tables get acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of 100K of 
n_live_tup after which autovac/analyze kicks in, or it seems so.


To know rather than guess, read:

https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM 




Sure, I read it before asking.

Taking the first table in the list as an example:


 relname | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |   left
---------+------------+------------+----------+---------------------+---------
 fooo    |      32781 |     240663 |  -234057 |              513265 | -509937



n_dead_tup (not the actual value, but some time after calling 
pg_stat_reset) is much larger than 20% of n_live_tup 32781, and 
n_mod_since_analyze is much larger than 10% of it.


Yet it is kept unvacuumed and unanalyzed for a long time.

autovacuum_(vacuum|analyze)_threshold is 50.

What am I missing?


Hard to say without seeing the actual settings in postgresql.conf that 
match:


https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR 




Most importantly:

autovacuum

and

track_counts

https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS 





They are both on and set to the default. Autovac/analyze continue
running on some tables after pg_stat_reset, just not on all of them,
even though they should, judging by the live/dead tuple counts.



foo=> show track_counts;
 track_counts
--------------
 on
(1 row)

foo=> show autovacuum;
 autovacuum
------------
 on
(1 row)

There are still many tables waiting for their turn, which is long due.

Although there are some tables having only 60-70 (not 60-70K) 
n_live_tup that have had autovacuum run on them. Weird.

LDAP Authentication

2023-08-21 Thread Roger Tannous
Hello,

In section 21.10 LDAP Authentication of the
documentation, it says that the DB user must already exist in the database
before LDAP can be used for authentication.

I'm looking into the possibility of using LDAP Authentication with Postgres, but
I'm confused about the user creation that must initially be done, in particular
regarding the value of the password.

Suppose I create user_x with pass_x, this user will be able to connect to
the DB using these credentials and, of course, depending on the role
assigned to them.

But once I configure LDAP authentication on the database server, user_x has
to connect to the database according to whatever is set up in LDAP, and the
initially configured password becomes obsolete? Is that correct?


Thank you in advance,
Roger


Re: LDAP Authentication

2023-08-21 Thread Emile Amewoto
Hi Roger,
Here is the high-level process:
1- Create the user x without a password in Postgres.
2- Assign a role or roles to the user x.
3- Update pg_hba.conf with the LDAP connection entry (see the sketch below).

You might need a certificate for LDAP to connect to AD, assuming you are using AD.
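
A minimal sketch of steps 1-3, with placeholder names (user_x, app_role,
ldap.example.com and the base DN are assumptions to replace with your own):

-- 1) create the login role with no local password
CREATE ROLE user_x LOGIN;

-- 2) grant it the role(s) it needs
GRANT app_role TO user_x;

-- 3) pg_hba.conf entry for simple-bind LDAP authentication:
-- host  all  all  0.0.0.0/0  ldap  ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=people,dc=example,dc=com"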

Regards,
Emile

> On 21 Aug 2023, at 10:42, Roger Tannous  wrote:
> 
> Hello, 
> 
> In section 21.10 LDAP Authentication 
>  of the 
> documentation, it says that the DB user must already exist in the database 
> before LDAP can be used for authentication.
> 
> I'm checking the possibility to use LDAP Authentication with Postgres, but 
> I'm confused about the user creation that must be initially done regarding 
> the  value of the password.
> 
> Suppose I create user_x with pass_x, this user will be able to connect to the 
> DB using these credentials and, of course, depending on the role assigned to 
> them.
> 
> But once I configure LDAP authentication on the database server, user_x has 
> to connect to the database according to whatever is set up in LDAP, and the 
> initially configured password becomes obsolete ? Is that correct ?
> 
> 
> Thank you in advance, 
> Roger
> 



Re: Loops and Case Statements Involving Dates

2023-08-21 Thread Peter J. Holzer
On 2023-08-21 09:07:00 +0200, Anthony Apollis wrote:
> Please review my code and make recommendations where needed. I have this code:
> ```
[complicated code snipped]
> 
> 
> I have re-written this code to make it less complex, still doing what
> it is supposed to do. I want you to review my re-written code(code
> must give me report up until yesterday):
> ```
> WITH MaxDateCTE AS (
> SELECT
> COALESCE(MAX(DISTINCT "CALDAY"), '2021-07-01') AS MaxDate
> FROM
> "system"."IMETA_ZTRNSPCST$F_Shipment_Cost_TA_BW"
> )
> 
> SELECT
> CASE
> WHEN EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp - 
> (NOW() - INTERVAL '1 day')::timestamp) <= 30 THEN 1
> ELSE CEIL(EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp - 
> (NOW() - INTERVAL '1 day')::timestamp) / 30)
> END AS "Number of days"
> FROM
> MaxDateCTE;

So the intent is to compute how many "months" the maximum CALDAY is in
the future, with some minor twists:
* A "month" is always 30 days, not a calendar month.
* The difference is between the day after the given date and yesterday -
  so it's shifted by one day (today + 30 days already counts as 2
  months)
* the minimum is 1.
?

Then this can be simplified further:

* MAX(DISTINCT "CALDAY") can be simplified to just MAX("CALDAY").

* The CASE can be eliminated and replaced by
GREATEST(CEIL(EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp - 
(NOW() - INTERVAL '1 day')::timestamp) / 30), 1)
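
Putting both of those together (keeping the original fallback literal and
table name, and assuming "CALDAY" is a date column), a sketch of the whole
statement:

WITH MaxDateCTE AS (
    SELECT COALESCE(MAX("CALDAY"), '2021-07-01') AS MaxDate
    FROM "system"."IMETA_ZTRNSPCST$F_Shipment_Cost_TA_BW"
)
SELECT GREATEST(
           CEIL(EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp
                               - (NOW() - INTERVAL '1 day')::timestamp) / 30),
           1) AS "Number of days"
FROM MaxDateCTE;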

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Ron

On 8/20/23 14:10, Rihad wrote:

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:


Hi, all. After calling pg_stat_reset all statistics used by autovacuum 
got zeroed, and started accumulating from scratch. Some tables get 
acted upon properly, some don't.



Self-replying: yup, it seems there's an arbitrary limit of 100K of 
n_live_tup after which autovac/analyze kicks in, or it seems so.


To know rather than guess, read:

https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM



Sure, I read it before asking.

Taking the first table in the list as an example:


 relname | n_live_tup | n_dead_tup |   left   | n_mod_since_analyze |   left
---------+------------+------------+----------+---------------------+---------
 fooo    |      32781 |     240663 |  -234057 |              513265 | -509937



n_dead_tup (not the actual value, but some time after calling 
pg_stat_reset) is much larger than 20% of n_live_tup 32781, and 
n_mod_since_analyze is much larger than 10% of it.


Yet it is kept unvacuumed and unanalyzed for a long time.

autovacuum_(vacuum|analyze)_threshold is 50.

What am I missing?



What are your autovacuum_(vacuum|analyze)_scale_factor values?

--
Born in Arizona, moved to Babylonia.

Re: Loops and Case Statements Involving Dates

2023-08-21 Thread Ron
Since this code is in a loop, consider using clock_timestamp() instead of 
now(). It might not matter in this situation, though.
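
A quick way to see the difference (a sketch; the point is that now() is frozen
at transaction start while clock_timestamp() keeps advancing):

BEGIN;
SELECT now(), clock_timestamp();
SELECT pg_sleep(2);
SELECT now(), clock_timestamp();  -- now() is unchanged, clock_timestamp() is ~2 s later
COMMIT;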


On 8/21/23 02:07, Anthony Apollis wrote:
Please review my code and make recommendations where needed. I have this code:
```

NUMBER OF LOOPS FOR POSTGRESQL ETL:
SELECT
CASE
WHEN (((EXTRACT(DAY FROM
((CASE
WHEN
(SELECT DISTINCT "CALDAY" FROM 
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT 
"CALDAY") AS "Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS 
WEEK_NUMBER)) IS NULL THEN '2020-07-01'
ELSE
(SELECT DISTINCT "CALDAY" FROM 
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT 
"CALDAY") AS "Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS 
WEEK_NUMBER)) + interval '1 day'
END))::timestamp - --start date
(CAST(NOW() - INTERVAL '1 day' AS 
DATE))::timestamp)-1) --end date
* -1)) <= 30 THEN 1
ELSE
CEIL(((EXTRACT(DAY FROM
((CASE
WHEN
(SELECT DISTINCT "CALDAY" FROM 
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT 
"CALDAY") AS "Calendar day" FROM “system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS 
WEEK_NUMBER)) IS NULL THEN '2020-07-01'
ELSE
(SELECT DISTINCT "CALDAY" FROM 
“system”.”IMETA_ZACTPRIC1_Cust_Pricing_TA_BW” AS WEEK_NUMBER
WHERE
WEEK_NUMBER."CALDAY" = (SELECT MAX(DISTINCT "CALDAY") AS 
"Calendar day" FROM system."IMETA_ZINV_AP1_Invoice_data_TA_BW"" AS WEEK_NUMBER)) + interval '1 day'
END))::timestamp - --start date
(CAST(NOW() - INTERVAL '1 day' AS 
DATE))::timestamp)-1) --end date
* -1)/30) --30 DAY INTERVALS UNLESS LESS
END
AS "Number of days"
I have re-written this code to make it less complex, still doing what it is 
supposed to do. I want you to review my re-written code(code must give me 
report up until yesterday):
```
WITH MaxDateCTE AS (
 SELECT
 COALESCE(MAX(DISTINCT "CALDAY"), '2021-07-01') AS MaxDate
 FROM
 "system"."IMETA_ZTRNSPCST$F_Shipment_Cost_TA_BW"
)

SELECT
 CASE
 WHEN EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp - (NOW() - 
INTERVAL '1 day')::timestamp) <= 30 THEN 1
 ELSE CEIL(EXTRACT(DAY FROM (MaxDate + INTERVAL '1 day')::timestamp - 
(NOW() - INTERVAL '1 day')::timestamp) / 30)
 END AS "Number of days"
FROM
 MaxDateCTE;

Full Code can be found here:
https://drive.google.com/file/d/1NaoaK0z3s3cfYilAdH4stJ1F6mq6Sc4n/view?usp=sharing




--
Born in Arizona, moved to Babylonia.

Re: A Good Beginner's Book

2023-08-21 Thread Wen Yi
I think the PostgreSQL documentation is the best place to learn.


Yours,
Wen Yi

---Original---
From: "Amn Ojee Uw"

The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili
Hello,
I am using
postgresql version: 15.3 (Debian 15.3-0+deb12u1)
org.postgresql.postgresql JDBC driver version: 42.6.0
via Java 17.0.7

I ran into an unexpected (for me) situation: when I execute the same
prepared query 10 times, the result is not always the same.

I wrote a little test case to show this.

this is the db that I am using:
 CREATE TABLE number(
   name character varying(30) NOT NULL,
   dim1 real DEFAULT '-1' NOT NULL
 );
 insert into number (name) VALUES('first');

and the test program:
static final String DB_URL = "jdbc:postgresql://192.168.64.7:5432/testdb";
static final String USER = "user";
static final String PASS = "password";
public static void main(String[] args) throws Exception {
 Connection conn = DriverManager.getConnection(DB_URL, USER, PASS);
 for(int i=0; i<10; i++) {
   try( PreparedStatement istruzioneCelle = conn.prepareStatement(
   "SELECT dim1 FROM number WHERE name='first'") ) {
 ResultSet rs = istruzioneCelle.executeQuery();
 rs.next();
  System.out.print("p: "+rs.getString("dim1")+"\n");
   } catch (SQLException e) {
 e.printStackTrace();
   } 
 }
 conn.close();
}

The expected result was a sequence of ten equal values, but this is the
actual result:
p: -1
p: -1
p: -1
p: -1
p: -1
p: -1.0
p: -1.0
p: -1.0
p: -1.0
p: -1.0

All works fine if I open and close the connection after every single query
but in production I am using pooled connections.
This is what I can read in postgresql logs (it seems that after 4 queries
the statement becomes named and the result changes after the second call to
the named query):

2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute <unnamed>: SET extra_float_digits = 3
2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute <unnamed>: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute <unnamed>: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute <unnamed>: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute <unnamed>: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 FROM number WHERE name='first'

Can I do something to avoid this problem?

thank you
Edoardo





Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Adrian Klaver

On 8/21/23 08:27, Edoardo Panfili wrote:

Hello,
I am using
postgresql version: 15.3 (Debian 15.3-0+deb12u1)
org.postgresql.postgresql JDBC driver version: 42.6.0
via Java 17.0.7

I discovered an unattended (for me) situation: when I execute
10 times the same prepared query the result is not always the same.




The attended result was a sequence of ten equal values but this is the actual 
result:
p: -1
p: -1
p: -1
p: -1
p: -1
p: -1.0
p: -1.0
p: -1.0
p: -1.0
p: -1.0


They are equal values:

 select -1 = -1.0;
 ?column?
----------
 t




All works fine if I open and close the connection after every single query
but in production I am using pooled connections.
This is what I can read in postgresql logs (it seems that after 4 queries
the statement becomes named and the result changes after the second call to
the named query):

2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute : SET 
extra_float_digits = 3
2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute : SET 
application_name = 'PostgreSQL JDBC Driver'
2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute : SELECT 
dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute : SELECT 
dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute : SELECT 
dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute : SELECT 
dim1 FROM number WHERE name='first'
2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name='first'
2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name='first'
2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name='first'
2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name='first'
2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name='first'
2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
FROM number WHERE name=‘first'

Can I do something to avoid this problem?


Read this:

https://www.postgresql.org/docs/current/sql-prepare.html

"
By default (that is, when plan_cache_mode is set to auto), the server 
will automatically choose whether to use a generic or custom plan for a 
prepared statement that has parameters. The current rule for this is 
that the first five executions are done with custom plans and the 
average estimated cost of those plans is calculated. Then a generic plan 
is created and its estimated cost is compared to the average custom-plan 
cost. Subsequent executions use the generic plan if its cost is not so 
much higher than the average custom-plan cost as to make repeated 
replanning seem preferable.


This heuristic can be overridden, forcing the server to use either 
generic or custom plans, by setting plan_cache_mode to 
force_generic_plan or force_custom_plan respectively. This setting is 
primarily useful if the generic plan's cost estimate is badly off for 
some reason, allowing it to be chosen even though its actual cost is 
much more than that of a custom plan.

"



thank you
Edoardo





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread David G. Johnston
On Mon, Aug 21, 2023 at 8:28 AM Edoardo Panfili <
edoardo.panf...@iisgubbio.edu.it> wrote:

>
>dim1 real DEFAULT '-1' NOT NULL
>
>  System.out.print("p: "+rs.getString("dim1")+"\n”);
>
> Can I do something to avoid this problem?
>

Use the type appropriate getter, not getString, to retrieve the value of
the underlying real typed column.

Otherwise, I agree this seems like a bug, probably in the JDBC driver,
though one pertains to style as opposed to semantics since both answers are
technically correct.

David J.


Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Adrian Klaver

On 8/20/23 22:31, Rihad wrote:

On 8/21/23 00:15, Adrian Klaver wrote:

On 8/20/23 12:10, Rihad wrote:

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:






Hard to say without seeing the actual settings in postgresql.conf that 
match:


https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR


Most importantly:

autovacuum

and

track_counts

https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS



They are both on and set to the default. Autovac/analyze continue
running on some tables after pg_stat_reset, just not on all of them,
even though they should, judging by the live/dead tuple counts.



foo=> show track_counts;
 track_counts
--------------
 on
(1 row)

foo=> show autovacuum;
 autovacuum
------------
 on
(1 row)


How about the rest of the settings at?:

https://www.postgresql.org/docs/current/runtime-config-autovacuum.html

Have the storage parameters for the tables been changed per?:

https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS

In psql you can do:

\d+ 

The setting if changed will show up as Options: 

Also are there include directives in use per?:

https://www.postgresql.org/docs/current/config-setting.html#CONFIG-INCLUDES

You can see by looking at the sourcefile field in pg_settings:

https://www.postgresql.org/docs/current/view-pg-settings.html
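
For example, a sketch that lists the autovacuum-related settings together with
where each value comes from (all columns are standard pg_settings columns):

SELECT name, setting, source, sourcefile, sourceline
FROM pg_settings
WHERE name LIKE 'autovacuum%' OR name = 'track_counts'
ORDER BY name;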

There are still many tables waiting for their turn, which is long due.

Although there are some tables having only 60-70 (not 60-70K) 
n_live_tup that have had autovacuum run on them. Weird.

--
Adrian Klaver
adrian.kla...@aklaver.com





Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Mostafa Fathy
Hi there,

It is mentioned here
https://www.postgresql.org/about/press/faq/#:~:text=Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F
that
native transparent data encryption is being worked on and it may be
delivered with PostgreSQL 16.

Does the PostgreSQL 16 beta version include native transparent data encryption
or not? Because I checked the docs at
https://www.postgresql.org/docs/16/index.html and couldn't find
anything related to transparent data encryption.

If it is not supported yet in the beta version, I would like to know whether the
PostgreSQL 16 final version will support native transparent data encryption or not.


Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Rihad

On 8/21/23 20:00, Adrian Klaver wrote:

On 8/20/23 22:31, Rihad wrote:

On 8/21/23 00:15, Adrian Klaver wrote:

On 8/20/23 12:10, Rihad wrote:

On 8/20/23 20:22, Adrian Klaver wrote:

On 8/18/23 22:35, Rihad wrote:

On 8/17/23 13:01, rihad wrote:






Hard to say without seeing the actual settings in postgresql.conf 
that match:


https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR 




Most importantly:

autovacuum

and

track_counts

https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS 





They are both on and set to the default. Autovac/analyze continue
running on some tables after pg_stat_reset, just not on all of them,
even though they should, judging by the live/dead tuple counts.



foo=> show track_counts;
 track_counts
--------------
 on
(1 row)

foo=> show autovacuum;
 autovacuum
------------
 on
(1 row)


How about the rest of the settings at?:

https://www.postgresql.org/docs/current/runtime-config-autovacuum.html

Have the storage parameters for the tables been changed per?:

https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS 



In psql you can do:

\d+ 

The setting if changed will show up as Options: 

Also are there include directives in use per?:

https://www.postgresql.org/docs/current/config-setting.html#CONFIG-INCLUDES 



You can see by looking at the sourcefile field in pg_settings:

https://www.postgresql.org/docs/current/view-pg-settings.html


Thanks for the detailed reply, no tables have custom settings.

I need to make it clear once again that all autovac/analyze work as 
expected when n_live_tup matches reality, i.e. when analyze has been run 
on them since last reset.


A way to fix this is to simply analyze the whole database. Before doing 
that, while n_live_tup starts from basically 0 and grows based on DB 
activity, these usual calculations of 10-20% table size for 
vacuum/analyze don't work. They don't trigger autovac for most tables, 
or do it much much later.
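
A sketch of that one-time fix, either from SQL or with the bundled client tool
(the database name "foo" is just a placeholder):

ANALYZE;                        -- run inside the target database
-- or, from the shell:
-- vacuumdb --analyze-only foo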

There are still many tables waiting for their turn, which is long 
due.


Although there are some tables having only 60-70 (not 60-70K) 
n_live_tup that have had autovacuum run on them. Weird.

Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Adrian Klaver

On 8/21/23 09:09, Rihad wrote:

On 8/21/23 20:00, Adrian Klaver wrote:




Thanks for the detailed reply, no tables have custom settings.

I need to make it clear once again that all autovac/analyze work as 
expected when n_live_tup matches reality, i.e. when analyze has been run 
on them since last reset.


A way to fix this is to simply analyze the whole database. Before doing 
that, while n_live_tup starts from basically 0 and grows based on DB 
activity, these usual calculations of 10-20% table size for 
vacuum/analyze don't work. They don't trigger autovac for most tables, 
or do it much much later.




You still have not said or shown whether the other autovacuum settings 
are the default values or not. Assuming they are, then the only other 
explanation I can come up with is that there is a process or processes 
that are creating long running open transactions that prevent autovacuum 
from running on the affected tables.
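
One way to check for such transactions is a query along these lines (a sketch;
the one-hour cutoff is an arbitrary assumption):

SELECT pid, state, xact_start, now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_start;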


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Adrian Klaver

On 8/21/23 09:02, Mostafa Fathy wrote:

Hi there,

It is mentioned here 
https://www.postgresql.org/about/press/faq/#:~:text=Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F  that native transparent data encryption is being worked on and it may be delivered with PostgreSQL 16.


Is PostgreSQL 16 beta version includes native transparent data 
encryption or not ? because I checked the docs 
https://www.postgresql.org/docs/16/index.html 
  and couldn't find 
anything related to transparent data encryption.


If not supported yet in the beta version I would like to know if 
PostgreSQL 16 final version will support native transparent data 
encryption or not?


I can't find anything that says it is included, and if it is not included at
this point (beta), it will not be in the production release.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Rihad

On 8/21/23 20:17, Adrian Klaver wrote:

On 8/21/23 09:09, Rihad wrote:

On 8/21/23 20:00, Adrian Klaver wrote:




Thanks for the detailed reply, no tables have custom settings.

I need to make it clear once again that all autovac/analyze work as 
expected when n_live_tup matches reality, i.e. when analyze has been 
run on them since last reset.


A way to fix this is to simply analyze the whole database. Before 
doing that, while n_live_tup starts from basically 0 and grows based 
on DB activity, these usual calculations of 10-20% table size for 
vacuum/analyze don't work. They don't trigger autovac for most 
tables, or do it much much later.




You still have not said or shown whether the other autovacuum settings 
are the default values or not. Assuming they are, then the only other 
explanation I can come up with is that there is a process or processes 
that are creating long running open transactions that prevent 
autovacuum from running on the affected tables.




Sorry, they are all as per default, commented out in the config.

There are no long running queries, otherwise they wouldn't be 
vacuumed/analyzed in due time after running first manual analyze, which 
updates n_live_tup to match reltuples.






Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Tom Lane
"David G. Johnston"  writes:
> Otherwise, I agree this seems like a bug, probably in the JDBC driver,
> though one pertains to style as opposed to semantics since both answers are
> technically correct.

I don't see any such behavior change in psql, so I agree that this is
probably something to ask about on the JDBC list.

My first thought was that the output change occurs when the query
transitions from custom to generic plan.  However, since there are
no parameters involved the backend would go for a generic plan
immediately.  Having said that, I seem to recall that the JDBC driver
has its own notion of custom vs generic execution, so maybe the
issue is around there somewhere.

regards, tom lane




Re: Why doesn't autovacuum/analyze run in due time after calling pg_stat_reset?

2023-08-21 Thread Adrian Klaver

On 8/21/23 09:31, Rihad wrote:

On 8/21/23 20:17, Adrian Klaver wrote:

On 8/21/23 09:09, Rihad wrote:

On 8/21/23 20:00, Adrian Klaver wrote:






Sorry, they are all as per default, commented out in the config.

There are no long running queries, otherwise they wouldn't be 
vacuumed/analyzed in due time after running first manual analyze, which 
updates n_live_tup to match reltuples.




My only remaining suggestion is to closely monitor the Postgres log and
see if it provides a clue.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili


> Il giorno 21 ago 2023, alle ore 17:45, Adrian Klaver 
>  ha scritto:
> 
> On 8/21/23 08:27, Edoardo Panfili wrote:
>> Hello,
>> I am using
>> postgresql version: 15.3 (Debian 15.3-0+deb12u1)
>> org.postgresql.postgresql JDBC driver version: 42.6.0
>> via Java 17.0.7
>> I discovered an unattended (for me) situation: when I execute
>> 10 times the same prepared query the result is not always the same.
> 
>> The attended result was a sequence of ten equal values but this is the 
>> actual result:
>> p: -1
>> p: -1
>> p: -1
>> p: -1
>> p: -1
>> p: -1.0
>> p: -1.0
>> p: -1.0
>> p: -1.0
>> p: -1.0
> 
> They are equal values:
> 
> select -1 = -1.0;
> ?column?
> ----------
> t
You are right, I know. But in one specific case I have to use it as a text
value. The current behaviour is certainly correct, but it seems to me not
completely transparent; just my opinion.

> 
> 
>> All works fine if I open and close the connection after every single query
>> but in production I am using pooled connections.
>> This is what I can read in postgresql logs (it seems that after 4 queries
>> the statement becomes named and the result changes after the second call to
>> the named query):
>> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute : SET 
>> extra_float_digits = 3
>> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute : SET 
>> application_name = 'PostgreSQL JDBC Driver'
>> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name=‘first'
>> Can I do something to avoid this problem?
> 
> Read this:
> 
> https://www.postgresql.org/docs/current/sql-prepare.html
> 
> "
> By default (that is, when plan_cache_mode is set to auto), the server will 
> automatically choose whether to use a generic or custom plan for a prepared 
> statement that has parameters. The current rule for this is that the first 
> five executions are done with custom plans and the average estimated cost of 
> those plans is calculated. Then a generic plan is created and its estimated 
> cost is compared to the average custom-plan cost. Subsequent executions use 
> the generic plan if its cost is not so much higher than the average 
> custom-plan cost as to make repeated replanning seem preferable.
> 
> This heuristic can be overridden, forcing the server to use either generic or 
> custom plans, by setting plan_cache_mode to force_generic_plan or 
> force_custom_plan respectively. This setting is primarily useful if the 
> generic plan's cost estimate is badly off for some reason, allowing it to be 
> chosen even though its actual cost is much more than that of a custom plan.
> "

Thank you for the link! I tried setting "plan_cache_mode" but it seems
nothing changes, and my test query (not the production one, obviously) has no
parameters, so in this case "if the prepared statement has no parameters,
then this is moot and a generic plan is always used."
Also with variables in the query nothing changes. I will keep working on it.

Thank you again.
Edoardo

Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili


> Il giorno 21 ago 2023, alle ore 17:50, David G. Johnston 
>  ha scritto:
> 
> 
> On Mon, Aug 21, 2023 at 8:28 AM Edoardo Panfili 
> mailto:edoardo.panf...@iisgubbio.edu.it>> 
> wrote:
>> 
>>dim1 real DEFAULT '-1' NOT NULL
>> 
>>  System.out.print("p: "+rs.getString("dim1")+"\n”);
>> 
>> Can I do something to avoid this problem?
> 
> Use the type appropriate getter, not getString, to retrieve the value of the 
> underlying real typed column.
I know, but in this occasion I need to use text value.

> 
> Otherwise, I agree this seems like a bug, probably in the JDBC driver, though 
> one pertains to style as opposed to semantics since both answers are 
> technically correct.
I will try, thank you

Edoardo



Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili



> Il giorno 21 ago 2023, alle ore 18:45, Tom Lane  ha 
> scritto:
> 
> "David G. Johnston"  writes:
>> Otherwise, I agree this seems like a bug, probably in the JDBC driver,
>> though one pertains to style as opposed to semantics since both answers are
>> technically correct.
> 
> I don't see any such behavior change in psql, so I agree that this is
> probably something to ask about on the JDBC list.

I’m going to ask there, thank you

Edoardo





Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Pavel Stehule
po 21. 8. 2023 v 19:52 odesílatel Edoardo Panfili <
edoardo.panf...@iisgubbio.edu.it> napsal:

>
>
> > Il giorno 21 ago 2023, alle ore 18:45, Tom Lane  ha
> scritto:
> >
> > "David G. Johnston"  writes:
> >> Otherwise, I agree this seems like a bug, probably in the JDBC driver,
> >> though one pertains to style as opposed to semantics since both answers
> are
> >> technically correct.
> >
> > I don't see any such behavior change in psql, so I agree that this is
> > probably something to ask about on the JDBC list.
>
> I’m going to ask there, thank you
>

Maybe it is the switch from client-side prepared statements to server-side
prepared statements.

See prepareThreshold at https://jdbc.postgresql.org/documentation/use/

I don't use Java, so I don't know more

Regards

Pavel


> Edoardo
>
>
>
>


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Rob Sargent

On 8/21/23 11:17, Edoardo Panfili wrote:
Use the type appropriate getter, not getString, to retrieve the value 
of the underlying real typed column.

I know, but in this occasion I need to use text value.



Otherwise, I agree this seems like a bug, probably in the JDBC 
driver, though one pertains to style as opposed to semantics since 
both answers are technically correct.

I will try, thank you

Edoardo

You appear to have two code paths in your application.  The one that 
needs the string version of the answer should do the conversion of the 
numeric value.

rjs


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Peter J. Holzer
On 2023-08-21 17:27:20 +0200, Edoardo Panfili wrote:
> The attended result was a sequence of ten equal values but this is the actual 
> result:
> p: -1
> p: -1
> p: -1
> p: -1
> p: -1
> p: -1.0
> p: -1.0
> p: -1.0
> p: -1.0
> p: -1.0
[...]
> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute : SET 
> extra_float_digits = 3
> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute : SET 
> application_name = 'PostgreSQL JDBC Driver'
> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute : 
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute : 
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute : 
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute : 
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name='first'
> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name='first'
> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name='first'
> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name='first'
> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name='first'
> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT dim1 
> FROM number WHERE name=‘first'

Are these outputs from the same run?

I notice that the output from the program switches after 5 queries from
"-1" to "-1.0", but the logged query name switches after 4 queries from
"<unnamed>" to "S_1".

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili



> Il giorno 21 ago 2023, alle ore 20:13, Peter J. Holzer  ha 
> scritto:
> 
> On 2023-08-21 17:27:20 +0200, Edoardo Panfili wrote:
>> The attended result was a sequence of ten equal values but this is the 
>> actual result:
>> p: -1
>> p: -1
>> p: -1
>> p: -1
>> p: -1
>> p: -1.0
>> p: -1.0
>> p: -1.0
>> p: -1.0
>> p: -1.0
> [...]
>> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute : SET 
>> extra_float_digits = 3
>> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute : SET 
>> application_name = 'PostgreSQL JDBC Driver'
>> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute : 
>> SELECT dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name='first'
>> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> dim1 FROM number WHERE name=‘first'
> 
> Are these outputs from the same run?
Yes

> 
> I notice that the output from the program switches after 5 queries from
> "-1" to "-1.0", but the logged query name switches after 4 queries from
> "<unnamed>" to "S_1".

You’re right. It seems to be a JDBC-side problem. I am doing some tests using
suggestions from the pgsql-jdbc list.

What sounds strange to me is that switching from "mode_X" to "mode_Y" I obtain
different representations of the same value.

I know the value is semantically the same… but in some way I like to see "a
perfect postgresql env" (as it absolutely is).

Edoardo






Re: JDBC + PG-15 but not psql

2023-08-21 Thread Dave Cramer
Hi Amn,

Can you help me understand this issue better?

I don't see anywhere in the code where you are attempting to create a
tablespace ?


Dave Cramer



On Sun, 20 Aug 2023 at 11:17, Amn Ojee Uw  wrote:

> Hello.
>
> I have searched the net in an attempt to find if others have had and
> resolved this challenge, but most of the sites talk about how, when using
> the psql, this error arises. In my case, the error arises only when accessing
> PG-15 using JDBC.
> JDBC connects to the database, but when trying to execute a schema, be it
> to create a database or to create a tablespace, I get this error:
>
> StackTrace : [Ljava.lang.StackTraceElement;@7a69b07
> Message : ERROR: CREATE TABLESPACE cannot run inside a transaction block
>
> I have used the same algorithm, but now modify to accommodate PG-15, when
> using SQLite. So the JDBC code might not be the problem, but the
> requirements needed from PG-15.
>
> Change of perspective, I resolved the concern!
>
> Here is what changed the outcome :
> --- snip ---
>
> public void connectToDatabase() throws ClassNotFoundException,
> SQLException {
> try {
> Class.forName(this.getDatabaseClass().toString());
> this.conn =
> DriverManager.getConnection(this.getDatabaseUrl().toString(),
> this.getUserID().toString(),
> this.getUserPassword().toString());
>
> this.conn.setAutoCommit(true); // game changer!!
>
>
> this.pout("Connected to the PostgreSQL server, success!!!");
>
> this.stmt = this.conn.createStatement();
> } catch (final SQLException | ClassNotFoundException e) {
> throw e;
> }
> }
>
> --
>
> All I had to do was to setAutoCommit to true, PG-15 set this value to
> false by default.
>
>
> I hope my experience can help others.
>
>
>


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Dave Cramer
On Mon, 21 Aug 2023 at 12:46, Tom Lane  wrote:

> "David G. Johnston"  writes:
> > Otherwise, I agree this seems like a bug, probably in the JDBC driver,
> > though one pertains to style as opposed to semantics since both answers
> are
> > technically correct.
>
> I don't see any such behavior change in psql, so I agree that this is
> probably something to ask about on the JDBC list.
>
> My first thought was that the output change occurs when the query
> transitions from custom to generic plan.  However, since there are
> no parameters involved the backend would go for a generic plan
> immediately.  Having said that, I seem to recall that the JDBC driver
> has its own notion of custom vs generic execution, so maybe the
> issue is around there somewhere.
>
> regards, tom lane
>
>
Tom,

It's because we also switch to binary at that point and don't parse text
any more. I agree though it shouldn't happen.

Dave


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Dave Cramer
On Mon, 21 Aug 2023 at 15:12, Edoardo Panfili <
edoardo.panf...@iisgubbio.edu.it> wrote:

>
>
> Il giorno 21 ago 2023, alle ore 17:45, Adrian Klaver <
> adrian.kla...@aklaver.com> ha scritto:
>
> On 8/21/23 08:27, Edoardo Panfili wrote:
>
> Hello,
> I am using
> postgresql version: 15.3 (Debian 15.3-0+deb12u1)
> org.postgresql.postgresql JDBC driver version: 42.6.0
> via Java 17.0.7
> I discovered an unattended (for me) situation: when I execute
> 10 times the same prepared query the result is not always the same.
>
>
> The attended result was a sequence of ten equal values but this is the
> actual result:
> p: -1
> p: -1
> p: -1
> p: -1
> p: -1
> p: -1.0
> p: -1.0
> p: -1.0
> p: -1.0
> p: -1.0
>
>
> They are equal values:
>
> select -1 = -1.0;
> ?column?
> --
> t
>
> You are right, I know. But in one specific case I have to use it as a text
> value. The current behaviour is certainly correct, but it seems to me not
> completely transparent; just my opinion.
>
>
>
> All works fine if I open and close the connection after every single query
> but in production I am using pooled connections.
> This is what I can read in postgresql logs (it seems that after 4 queries
> the statement becomes named and the result changes after the second call to
> the named query):
> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute :
> SET extra_float_digits = 3
> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute :
> SET application_name = 'PostgreSQL JDBC Driver'
> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute :
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute :
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute :
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute :
> SELECT dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name='first'
> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT
> dim1 FROM number WHERE name=‘first'
> Can I do something to avoid this problem?
>
>
> Read this:
>
> https://www.postgresql.org/docs/current/sql-prepare.html
>
> "
> By default (that is, when plan_cache_mode is set to auto), the server will
> automatically choose whether to use a generic or custom plan for a prepared
> statement that has parameters. The current rule for this is that the first
> five executions are done with custom plans and the average estimated cost
> of those plans is calculated. Then a generic plan is created and its
> estimated cost is compared to the average custom-plan cost. Subsequent
> executions use the generic plan if its cost is not so much higher than the
> average custom-plan cost as to make repeated replanning seem preferable.
>
> This heuristic can be overridden, forcing the server to use either generic
> or custom plans, by setting plan_cache_mode to force_generic_plan or
> force_custom_plan respectively. This setting is primarily useful if the
> generic plan's cost estimate is badly off for some reason, allowing it to
> be chosen even though its actual cost is much more than that of a custom
> plan.
> "
>
>
> Thank you for the link! I did a try setting “plan_cache_mode” but it seems
> nothing change, and my test query (not production one obviously) has no
> parameter and in this occasion “if the prepared statement has no
> parameters, then this is moot and a generic plan is always used.”
> Also using variables in query nothing changes. I will work again on it.
>

It has nothing to do with this. Yes the generic plan will be used but that
does not change the output.
It has to do with the way the data is being transferred. When the driver
switches to a named statement it also switches to binary mode which means
data will be transferred in binary.

In text we get -1, in binary we get -1.0

Dave


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Dave Cramer
On Mon, 21 Aug 2023 at 14:42, Edoardo Panfili  wrote:

>
>
> > Il giorno 21 ago 2023, alle ore 20:13, Peter J. Holzer 
> ha scritto:
> >
> > On 2023-08-21 17:27:20 +0200, Edoardo Panfili wrote:
> >> The attended result was a sequence of ten equal values but this is the
> actual result:
> >> p: -1
> >> p: -1
> >> p: -1
> >> p: -1
> >> p: -1
> >> p: -1.0
> >> p: -1.0
> >> p: -1.0
> >> p: -1.0
> >> p: -1.0
> > [...]
> >> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute
> : SET extra_float_digits = 3
> >> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute
> : SET application_name = 'PostgreSQL JDBC Driver'
> >> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute
> : SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute
> : SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute
> : SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute
> : SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name='first'
> >> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1:
> SELECT dim1 FROM number WHERE name=‘first'
> >
> > Are these outputs from the same run?
> Yes
>
> >
> > I notice that the output from the program switches after 5 queries from
> > "-1" to "-1-0", but the logged query name switches after 4 queries from
> > "" to "S_1”.
>
> You’re right.  It seem a JDBC side problem. I am doing some tests using
> suggestions from pgsql-jdbc list.
>
> What sounds strange to me is that switching from “mode_X” to  “mode_Y” I
> obtain different representation of the same value,
>
> I know the value is semantically the same.. but… in some way I like to
> see.. "a perfect postgresql env” (as it absolutely is)
>
> Edoardo
>
>
I have confirmed that this behaviour is by design either by postgres or the
driver. When postgres provides us the data using text mode we see -1

(psql)
select * from number ;
 name  | dim1
-------+------
 first |   -1

This is the way the data is presented by the text output function.

In binary mode we see -1.0. This is an artifact of the binary output
function.

If ResultSet.getDouble is used the data is the same ... -1 for both text
and binary

Using getString we see the problem.

So you have a few options to stop this: set prepareThreshold to 0 and the
driver will not switch to named statements or binary, or remove double from
binary transfer.
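
For the first option, a sketch of what that looks like as a connection-URL
parameter (prepareThreshold is a documented pgJDBC setting; the host and
database here are simply the values from the earlier test program):

jdbc:postgresql://192.168.64.7:5432/testdb?prepareThreshold=0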

Regards,

Dave


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Ron

On 8/21/23 14:32, Dave Cramer wrote:
[snip]
It has to do with the way the data is being transferred. When the driver 
switches to a named statement it also switches to binary mode which means 
data will be transferred in binary.


In text we get -1, in binary we get -1.0


That seems odd.  Why does it do that?

--
Born in Arizona, moved to Babylonia.

Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Dave Cramer
On Mon, 21 Aug 2023 at 17:17, Ron  wrote:

> On 8/21/23 14:32, Dave Cramer wrote:
>
> [snip]
>
> It has to do with the way the data is being transferred. When the driver
> switches to a named statement it also switches to binary mode which means
> data will be transferred in binary.
>
> In text we get -1, in binary we get -1.0
>
>
> That seems odd.  Why does it do that?
>

If we look at float8out_internal (text output)
https://github.com/postgres/postgres/blob/6fde2d9a005a5bc04aa059d3faeb865c8dd322ce/src/backend/utils/adt/float.c#L536
and
the comment about extra float digits which states

If >0, use shortest-decimal format for output; this is both the default and
allows for compatibility with clients that explicitly set a value here to
get round-trip-accurate results. If 0 or less, then use the old, slow,
decimal rounding method.

-1 would be the shortest-decimal format.

whereas float8send sends the actual binary data on disk, hence -1.0

Dave

> --
> Born in Arizona, moved to Babylonia.
>


Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Peter J. Holzer
On 2023-08-21 17:31:06 -0400, Dave Cramer wrote:
> On Mon, 21 Aug 2023 at 17:17, Ron  wrote:
> 
> On 8/21/23 14:32, Dave Cramer wrote:
> 
> [snip]
> 
> It has to do with the way the data is being transferred. When the
> driver switches to a named statement it also switches to binary mode
> which means data will be transferred in binary. 
> 
> In text we get -1, in binary we get -1.0
> 
> 
> That seems odd.  Why does it do that?
[...]
> whereas float8send sends the actual binary data on disk, hence -1.0

It should be noted that the binary representation is not "-1.0", but
something like 
1 01111111 00000000000000000000000   (in binary) or
BF 80 00 00                          (in hex)
(big endian for better readability)

The decimal representation "-1.0" is created by the binary-to-string
conversion (Java's Float.toString()?). That could also produce "-1" or
"-1E0" or any other equivalent representation. The author of that
routine decided to include ".0" in the output, possibly to signify that
it's a floating point value, not an integer.
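
A tiny client-side sketch of that conversion (plain Java, no driver involved;
the bit pattern shown is the 32-bit float form used in the example above):

```java
public class BinaryToStringDemo {
    public static void main(String[] args) {
        // -1.0 as a 32-bit IEEE 754 float: sign 1, exponent 01111111, mantissa 0.
        System.out.println(Integer.toHexString(Float.floatToIntBits(-1.0f))); // bf800000

        // Java's default binary-to-string conversion appends ".0" (or an
        // exponent) to mark the value as floating point, even though "-1"
        // would round-trip just as well.
        System.out.println(Float.toString(-1.0f));  // -1.0
        System.out.println(Double.toString(-1.0));  // -1.0
    }
}
```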

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: JDBC + PG-15 but not psql

2023-08-21 Thread Amn Ojee Uw

Thanks Dave for your interest.

In the OP I started by explaining the difficulties I had when trying to 
create a tablespace using JDBC; please note that when using the PG-15 
prompt I did not have the problem. Well, it turns out that while 
developing the request for help (writing the email), I realized that my 
code had the following line:

this.conn.setAutoCommit(false);

However, in my research I had read a post mentioning that setting 
auto-commit to true solved the issue. So I changed that line of code in 
my algorithm to:

this.conn.setAutoCommit(true);

which also resulted in my app working without any errors.

Please read the code snippet I submitted; there you can see the line of 
code I edited, which I commented with "game changer!!".


Thanks again Dave, have a great week.

On 8/21/23 2:51 p.m., Dave Cramer wrote:

Hi Amn,

Can you help me understand this issue better?

I don't see anywhere in the code where you are attempting to create a 
tablespace ?



Dave Cramer



On Sun, 20 Aug 2023 at 11:17, Amn Ojee Uw  wrote:

Hello.

I have searched the net in an attempt to find whether others have had
and resolved this challenge, but most of the sites talk about how
this error arises when using psql. In my case, the error
arises only when accessing PG-15 using JDBC.
JDBC connects to the database, but when trying to execute a
statement, be it to create a database or to create a tablespace, I get
this error:

StackTrace : [Ljava.lang.StackTraceElement;@7a69b07
Message : ERROR: CREATE TABLESPACE cannot run inside a
transaction block

I have used the same algorithm, only modified to accommodate
PG-15, when using SQLite, so the JDBC code might not be the
problem, but rather the requirements of PG-15.

Change of perspective, I resolved the concern!

Here is what changed the outcome:
--- snip ---

public void connectToDatabase() throws ClassNotFoundException, SQLException {
    try {
        Class.forName(this.getDatabaseClass().toString());
        this.conn = DriverManager.getConnection(this.getDatabaseUrl().toString(),
                this.getUserID().toString(),
                this.getUserPassword().toString());

        this.conn.setAutoCommit(true); // game changer!!

        this.pout("Connected to the PostgreSQL server, success!!!");

        this.stmt = this.conn.createStatement();
    } catch (final SQLException | ClassNotFoundException e) {
        throw e;
    }
}

--

All I had to do was set auto-commit to true; with auto-commit disabled
the driver keeps statements inside a transaction block, and PostgreSQL
does not allow CREATE TABLESPACE there.
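
If the rest of an application is happier with auto-commit off, another option
is to enable it only around the statements PostgreSQL refuses to run inside a
transaction block. A sketch under that assumption; the helper class,
tablespace name, and path are made up for illustration:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class NonTransactionalDdl {
    // Run a single statement that PostgreSQL rejects inside a transaction
    // block (CREATE TABLESPACE, CREATE DATABASE, ...), then restore the
    // connection's previous auto-commit mode.
    static void runOutsideTransaction(Connection conn, String ddl) throws SQLException {
        boolean previous = conn.getAutoCommit();
        conn.setAutoCommit(true);
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(ddl);
        } finally {
            conn.setAutoCommit(previous);
        }
    }
}

// Usage (name and path are illustrative):
// NonTransactionalDdl.runOutsideTransaction(conn,
//     "CREATE TABLESPACE my_space LOCATION '/var/lib/pgsql/tablespaces/my_space'");
```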


I hope my experience can help others.



Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Bruce Momjian
On Mon, Aug 21, 2023 at 07:02:46PM +0300, Mostafa Fathy wrote:
> Hi there,
> 
> It is mentioned here https://www.postgresql.org/about/press/faq/#:~:text=
> Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F that native
> transparent data encryption is being worked on and it may be delivered with
> PostgreSQL 16.
> 
> Does the PostgreSQL 16 beta version include native transparent data encryption
> or not? I checked the docs
> https://www.postgresql.org/docs/16/index.html 
> and couldn't find anything related to transparent data encryption.
> 
> If not supported yet in the beta version I would like to know if PostgreSQL 16
> final version will support native transparent data encryption or not?

No, PG 16 will not support it, and I am unclear whether later major versions
will either.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Ron

On 8/21/23 18:49, Bruce Momjian wrote:

On Mon, Aug 21, 2023 at 07:02:46PM +0300, Mostafa Fathy wrote:

Hi there,

It is mentioned here https://www.postgresql.org/about/press/faq/#:~:text=
Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F that native
transparent data encryption is being worked on and it may be delivered with
PostgreSQL 16.

Does the PostgreSQL 16 beta version include native transparent data encryption or
not? I checked the docs https://www.postgresql.org/docs/16/index.html
and couldn't find anything related to transparent data encryption.

If not supported yet in the beta version I would like to know if PostgreSQL 16
final version will support native transparent data encryption or not?

No, PG 16 will not support it, and I am unclear whether later major versions
will either.


That's disappointing, since TDE makes PCI audits that much simpler.

--
Born in Arizona, moved to Babylonia.




EXPLAIN statement can also apply to SELECT INTO.

2023-08-21 Thread jian he
hi.

https://www.postgresql.org/docs/current/sql-explain.html
>>>
Any SELECT, INSERT, UPDATE, DELETE, MERGE, VALUES, EXECUTE, DECLARE,
CREATE TABLE AS, or CREATE MATERIALIZED VIEW AS statement, whose
execution plan you wish to see.
>>>

seems you can use it with SELECT INTO.

explain (ANALYZE, BUFFERS, TIMING, VERBOSE ON, WAL ON, SETTINGS ON)
select count(*) as cnt INTO s from tenk1;




Re: EXPLAIN statement can also apply to SELECT INTO.

2023-08-21 Thread David G. Johnston
On Monday, August 21, 2023, jian he  wrote:

> hi.
>
> https://www.postgresql.org/docs/current/sql-explain.html
> >>>
> Any SELECT, INSERT, UPDATE, DELETE, MERGE, VALUES, EXECUTE, DECLARE,
> CREATE TABLE AS, or CREATE MATERIALIZED VIEW AS statement, whose
> execution plan you wish to see.
> >>>
>
> seems you can use it with SELECT INTO.
>
> explain (ANALYZE, BUFFERS, TIMING, VERBOSE ON, WAL ON, SETTINGS ON)
> select count(*) as cnt INTO s from tenk1;
>
>
We try not to encourage such things.  And CTAS is listed.
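
For reference, the spelling the documentation does list is CREATE TABLE ... AS;
a sketch reusing the names from the example above (s and tenk1 come from the
regression test schema):

```sql
-- Note: ANALYZE actually executes the statement, so the table is created.
EXPLAIN (ANALYZE, BUFFERS, WAL, SETTINGS)
CREATE TABLE s AS
SELECT count(*) AS cnt FROM tenk1;
```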

David J.


Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Andreas Kretschmer



On 22 August 2023 06:52:10 CEST, Ron  wrote:
>On 8/21/23 18:49, Bruce Momjian wrote:
>> On Mon, Aug 21, 2023 at 07:02:46PM +0300, Mostafa Fathy wrote:
>>> Hi there,
>>> 
>>> It is mentioned here https://www.postgresql.org/about/press/faq/#:~:text=
>>> Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F that native
>>> transparent data encryption is being worked on and it may be delivered with
>>> PostgreSQL 16.
>>> 
>>> Does the PostgreSQL 16 beta version include native transparent data encryption or
>>> not? I checked the docs
>>> https://www.postgresql.org/docs/16/index.html
>>> and couldn't find anything related to transparent data encryption.
>>> 
>>> If not supported yet in the beta version I would like to know if PostgreSQL 
>>> 16
>>> final version will support native transparent data encryption or not?
>> No, PG 16 will not support it, and I am unclear whether later major versions
>> will either.
>
>That's disappointing, since TDE makes PCI audits that much simpler.
>

Sure. You can use EDB products (EPAS or Postgres Extended) with TDE.

Andreas




Re: The same prepared query yield "-1" the first six times and then "-1.0"

2023-08-21 Thread Edoardo Panfili


> On 21 Aug 2023, at 21:37, Dave Cramer wrote:
> 
> 
> 
> On Mon, 21 Aug 2023 at 14:42, Edoardo Panfili wrote:
>> 
>> 
>> > On 21 Aug 2023, at 20:13, Peter J. Holzer wrote:
>> > 
>> > On 2023-08-21 17:27:20 +0200, Edoardo Panfili wrote:
>> >> The expected result was a sequence of ten equal values, but this is the
>> >> actual result:
>> >> p: -1
>> >> p: -1
>> >> p: -1
>> >> p: -1
>> >> p: -1
>> >> p: -1.0
>> >> p: -1.0
>> >> p: -1.0
>> >> p: -1.0
>> >> p: -1.0
>> > [...]
>> >> 2023-08-21 11:51:50.633 CEST [1511] user@testdb LOG: execute : 
>> >> SET extra_float_digits = 3
>> >> 2023-08-21 11:51:50.634 CEST [1511] user@testdb LOG: execute : 
>> >> SET application_name = 'PostgreSQL JDBC Driver'
>> >> 2023-08-21 11:51:50.644 CEST [1511] user@testdb LOG: execute : 
>> >> SELECT dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.648 CEST [1511] user@testdb LOG: execute : 
>> >> SELECT dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.649 CEST [1511] user@testdb LOG: execute : 
>> >> SELECT dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.650 CEST [1511] user@testdb LOG: execute : 
>> >> SELECT dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.651 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.653 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.654 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> >> 2023-08-21 11:51:50.656 CEST [1511] user@testdb LOG: execute S_1: SELECT 
>> >> dim1 FROM number WHERE name='first'
>> > 
>> > Are these outputs from the same run?
>> Yes
>> 
>> > 
>> > I notice that the output from the program switches after 5 queries from
>> > "-1" to "-1-0", but the logged query name switches after 4 queries from
>> > "" to "S_1”.
>> 
>> You’re right.  It seems to be a JDBC-side problem. I am doing some tests using
>> suggestions from the pgsql-jdbc list.
>> 
>> What sounds strange to me is that switching from “mode_X” to “mode_Y” I
>> obtain different representations of the same value.
>> 
>> I know the value is semantically the same, but in some way I like to see
>> "a perfect postgresql env" (as it absolutely is).
>> 
>> Edoardo
>> 
> 
> I have confirmed that this behaviour is by design either by postgres or the 
> driver. When postgres provides us the data using text mode we see -1 
> 
> (psql)
> select * from number ;
>  name  | dim1
> ---+--
>  first |   -1 
> 
> This is the way the data is presented by the text output function.
And I expected to obtain the same from the JDBC connection 


> In binary mode we see -1.0. This is an artifact of the binary output function.
> 
> If ResultSet.getDouble is used the data is the same ... -1 for both text and 
> binary
> 
> Using getString we see the problem.
> 
> So you have a few options to stop this: Set prepareThreshold to 0 and the 
> driver will not switch to named statements or binary,
Properties props = new Properties();
props.setProperty("prepareThreshold", "0");
Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://192.168.64.7:5432/testdb?user=user&password=password",
        props);

This returns "-1" every time.

> Or remove double from binary transfer.

Thank you
Edoardo