Re: [SPAM] Re: How to hash a large amount of data within Postgres?
Hello Tomas Vondra,

On Thursday, June 24, 2021 at 00:56 you wrote:

> Not sure where you searched, but there definitely are interfaces to read
> chunks of data from large objects - see this:

That wasn't my point. What I'm missing are hash functions that work on such
chunks of a larger object while maintaining state internally. I only see
functions that output a calculated hash for one given block of data, and no
start/progress/end interfaces like in many other libraries.

So, am I missing something, or do I need to build something based on
multiple individually output hash results instead, as suggested?

Kind regards

Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
E-Mail: thorsten.schoen...@am-soft.de
Web: http://www.AM-SoFT.de/
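For reference, a minimal sketch of the chunk-based workaround discussed above.
It assumes the pgcrypto extension is installed, and the function name
lo_chunked_digest is made up for illustration. Note that the result is a
digest of the per-chunk digests, not a true streaming digest of the whole
object:

    -- Hedged sketch: hash each chunk read via loread(), then hash the
    -- concatenation of the per-chunk digests (requires pgcrypto).
    CREATE OR REPLACE FUNCTION lo_chunked_digest(p_oid oid, p_chunk int DEFAULT 65536)
    RETURNS bytea LANGUAGE plpgsql AS $$
    DECLARE
        fd       int;
        chunk    bytea;
        combined bytea := '';
    BEGIN
        fd := lo_open(p_oid, 262144);            -- 262144 = INV_READ
        LOOP
            chunk := loread(fd, p_chunk);
            EXIT WHEN length(chunk) = 0;         -- size was an exact multiple of p_chunk
            combined := combined || digest(chunk, 'sha256');
            EXIT WHEN length(chunk) < p_chunk;   -- end of object reached
        END LOOP;
        PERFORM lo_close(fd);
        RETURN digest(combined, 'sha256');       -- digest of the per-chunk digests
    END;
    $$;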
Re: Postgres PANIC when it could not open file in pg_logical/snapshots directory
On Thu, Jun 24, 2021, 9:28 AM Mike Yeap wrote:

> Hi Vijay, thanks for the tests, it's very helpful.
>
> Just that, isn't it too extreme that when one of the processes has a
> problem accessing a snap file, it causes all other processes to be
> terminated? After all, most of the other processes do not need to access
> the snap files. Is there a way to fine-tune this behaviour?
>
> Thank you.
>
> Regards,
> Mike Yeap

I think the core team are the best people to ask. I have only been working
with Postgres for a little over a year, and I'm not in a position to call
this an issue unless I understand the underlying design and whether data
corruption could result if the server did not abort. I'll leave it to the
core team rather than give you an incorrect workaround.
removing "serial" from table definitions.
Hi,

Is there a way to change a data type from serial to int?

I tried with:

   ALTER TABLE "admin".db_jobs ALTER COLUMN id TYPE int USING id::int;

But this does not seem to change anything, as if Postgres considered the
statement a no-op.

My problem is that "serial" is not exported with pg_dump. Creating a
database from the dump will hence result in a different table definition
(which is equivalent, though).

We are trying a transfer/migration tool on Azure that checks the table
definitions between the source and target before starting the data
transfer, and it blocks on that difference.

best regards,

Marc Mamin
Re: removing "serial" from table definitions.
From what I know, serial is only used to set up a table; it is then stored
as int in the table. Therefore, you probably cannot remove it, since it is
not there any more anyway.

To set up the table with int instead of serial, you could use this:

   CREATE SEQUENCE table_name_id_seq;

   CREATE TABLE table_name (
       id integer NOT NULL DEFAULT nextval('table_name_id_seq')
   );

   ALTER SEQUENCE table_name_id_seq OWNED BY table_name.id;

instead of

   CREATE TABLE table_name (
       id SERIAL
   );

as explained on https://www.postgresqltutorial.com/postgresql-serial/

On Thursday, June 24, 2021 at 14:33, Marc Mamin wrote:

> Hi,
>
> Is there a way to change a data type from serial to int?
>
> I tried with:
>
>    ALTER TABLE "admin".db_jobs ALTER COLUMN id TYPE int USING id::int;
>
> But this does not seem to change anything, as if Postgres considered the
> statement a no-op.
>
> My problem is that "serial" is not exported with pg_dump.
>
> Creating a database from the dump will hence result in a different table
> definition (which is equivalent, though).
>
> We are trying a transfer/migration tool on Azure that checks the table
> definitions between the source and target before starting the data
> transfer, and it blocks on that difference.
>
> best regards,
>
> Marc Mamin
Re: removing "serial" from table definitions.
On 6/24/21 8:33 AM, Marc Mamin wrote:

> Hi,
>
> Is there a way to change a data type from serial to int?
>
> I tried with:
>
>    ALTER TABLE "admin".db_jobs ALTER COLUMN id TYPE int USING id::int;
>
> But this does not seem to change anything, as if Postgres considered the
> statement a no-op.

serial is not an actual data type -- it is essentially an integer with a
default and an automatically created sequence. See:

https://www.postgresql.org/docs/13/datatype-numeric.html#DATATYPE-SERIAL

> My problem is that "serial" is not exported with pg_dump.
>
> Creating a database from the dump will hence result in a different table
> definition (which is equivalent, though).
>
> We are trying a transfer/migration tool on Azure that checks the table
> definitions between the source and target before starting the data
> transfer, and it blocks on that difference.

From the linked doc above:

--
The data types smallserial, serial and bigserial are not true types, but
merely a notational convenience for creating unique identifier columns
(similar to the AUTO_INCREMENT property supported by some other databases).
In the current implementation, specifying:

    CREATE TABLE tablename (
        colname SERIAL
    );

is equivalent to specifying:

    CREATE SEQUENCE tablename_colname_seq AS integer;
    CREATE TABLE tablename (
        colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
    );
    ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
--

I haven't checked, but I am guessing that creating the table using the
former method results in a dump that looks like the latter? In that case,
just define the table in the second way to begin with and they will match
from the migration tool's standpoint, I should think.

Joe

--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
Re: second CTE kills perf
Hi,

thanks for all. I replaced row_number() with a computed int, which speeds
up the query a lot.

‐‐‐ Original Message ‐‐‐
On Tuesday, June 22nd, 2021 at 15:53, Tom Lane wrote:

> Nicolas Seinlet nico...@seinlet.com writes:
>
> > I'm trying to understand this behaviour and the limits of CTEs: when
> > they reach their limits, when they cannot receive parameters from a
> > caller, ... I'm running a query on PostgreSQL 10 with a CTE. The query
> > runs in ~10 seconds. If I add a second CTE with the same query as the
> > previous one and select * from second_cte as the query, it now runs in
> > ~10 minutes.
> >
> > oversimplified example:
> >
> > 10 seconds version:
> >
> >    WITH cte1 AS (SELECT x,y,z FROM table)
> >    SELECT row_number() over(),x,y,z FROM cte1 WHERE x=32;
> >
> > 10 minutes version:
> >
> >    WITH cte1 AS (SELECT x,y,z FROM table),
> >         cte2 AS (SELECT row_number() over(),x,y,z FROM cte1)
> >    SELECT * FROM cte2 WHERE x=32;
>
> [ shrug... ] You are asking for two different computations, and the
> second one is far more expensive.
>
> In the first case, the WHERE x=32 clause is applied before the window
> function, so we can (indeed must) filter out all rows not having x=32
> before doing the window function.
>
> In the second case, WHERE x=32 is applied above/after the window
> function. We cannot push down the WHERE to before the window function.
> (In this case, filtering beforehand would obviously change the results
> of row_number, but in general we don't know enough about window function
> behavior to risk such changes.) So row_number has to be computed over
> the entire contents of the "table", and that's not cheap.
>
> It does surprise me a bit that row_number is quite that expensive,
> but if you are expecting equivalent results from these two queries,
> you're simply wrong.
>
> regards, tom lane
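For reference, a sketch of the kind of rewrite that regains the fast plan.
It assumes, as the thread implies, that the row numbers only need to be
relative to the filtered rows, and uses a placeholder table name:

    WITH cte1 AS (SELECT x, y, z FROM some_table WHERE x = 32),   -- filter before the window function
         cte2 AS (SELECT row_number() OVER () AS rn, x, y, z FROM cte1)
    SELECT * FROM cte2;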
insert ..... returning problem
Greetings!

I'm trying to insert a record and return a column value using the following
query:

   INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx

But when I try to prepare the query I get the following error:

   PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
   ERROR:  syntax error at or near "into"
   LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
                                                                    ^

Why is 'INTO var' not accepted here?

Best regards,

--
Mikhail Iwanow
Voice: +7 (911) 223-1300
E-mail: iv...@logit-ag.de
Re: insert ..... returning problem
On Thu, 2021-06-24 at 12:31 +0300, Michael Ivanov wrote:
> I'm trying to insert a record and return a column value using the following query:
>
>    INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx
>
> But when I try to prepare the query I get the following error:
>
>    PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
>    ERROR:  syntax error at or near "into"
>    LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
>                                                                     ^
>
> Why is 'INTO var' not accepted here?

Are you talking about ECPG?

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
How much data does server side "loread" return?
Hi all,

I need to read some large object and am wondering how much data "loread"
returns when it succeeds at all AND the object contains at least as much
data as requested.

In that case, does "loread" always return the requested amount of data, or
does one need to loop when calling that function? I didn't find any explicit
docs regarding that point, and almost all examples I saw issue one call
requesting a large amount of data, regardless of the size of the large
object. That most likely means that if available, all requested data is
always returned.

Besides that, if "loread" is used in a loop for various reasons anyway, one
would otherwise need to count the overall number of bytes read to know when
the last chunk has been read. Instead, it would be enough to check whether
the last returned chunk is shorter than requested.

> LOOP
>     chunk     := loread(fd, CHUNK_SIZE);
>     chunkSize := length(chunk);
>     fdRead    := fdRead + chunkSize;
>     [...]
>     --EXIT WHEN (chunkSize < CHUNK_SIZE);
>     EXIT WHEN (fdRead = fdSize);
> END LOOP;

Is the first EXIT sufficient on its own, making "fdRead" unnecessary?

Thanks!

Kind regards

Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
E-Mail: thorsten.schoen...@am-soft.de
Web: http://www.AM-SoFT.de/
Re: insert ..... returning problem
Hello, no.

I am using the Postgres C library; namely, I'm calling PQexec().

Best regards,

On 24.06.2021 16:14, Laurenz Albe wrote:
> On Thu, 2021-06-24 at 12:31 +0300, Michael Ivanov wrote:
>> I'm trying to insert a record and return a column value using the following query:
>>
>>    INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx
>>
>> But when I try to prepare the query I get the following error:
>>
>>    PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
>>    ERROR:  syntax error at or near "into"
>>    LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
>>                                                                     ^
>>
>> Why is 'INTO var' not accepted here?
>
> Are you talking about ECPG?
>
> Yours,
> Laurenz Albe

--
Михаил Иванов
E-mail: iv...@isle.spb.ru
RE: removing "serial" from table definitions.
> From: Joe Conway [mailto:m...@joeconway.com]
> Sent: Thursday, June 24, 2021 14:47
> To: Marc Mamin; pgsql-general
> Subject: Re: removing "serial" from table definitions.
>
> On 6/24/21 8:33 AM, Marc Mamin wrote:
>> Hi,
>>
>> Is there a way to change a data type from serial to int?
>>
>> I tried with:
>>
>>    ALTER TABLE "admin".db_jobs ALTER COLUMN id TYPE int USING id::int;
>>
>> But this does not seem to change anything, as if Postgres considered the
>> statement a no-op.
>
> serial is not an actual data type -- it is essentially an integer with a
> default and an automatically created sequence. See:
>
> https://www.postgresql.org/docs/13/datatype-numeric.html#DATATYPE-SERIAL

Yes, I understand that serial is just a hint at table creation time, but is
there a place in the catalog where we can see whether the table was created
using 'serial'? If yes, I'm looking for a way to remove that.

Another cause of my problem may be the way the default value information is
stored in pg_attrdef. The difference we see between the source and target
database is that a schema prefix is displayed with the sequence on one side
and not on the other. I'm not sure yet whether this really comes directly
from the catalog or from the way the client reads the table definition,
maybe together with some search_path differences. Were there any changes in
this area between PG 9.6 and PG 11?

example:

The default value of column 'id' in table 'db_jobs_history' in database
'oms_db' is different on the source and target servers. It is
'nextval('admin.db_jobs_history_id_seq'::regclass)' on the source and
'nextval('db_jobs_history_id_seq'::regclass)' on the target.

Thanks

Marc
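For reference, one way to look directly at what is stored in pg_attrdef for
the column in question -- a sketch assuming the table from the example
above. pg_get_expr() deparses the stored default expression, and how it
prints the sequence name is where search_path comes into play:

    SELECT a.attrelid::regclass AS table_name,
           a.attname,
           pg_get_expr(d.adbin, d.adrelid) AS column_default
    FROM pg_attrdef d
    JOIN pg_attribute a ON a.attrelid = d.adrelid AND a.attnum = d.adnum
    WHERE a.attrelid = 'admin.db_jobs_history'::regclass;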
OSX: migrating Postgres db from one Mac to another
Hello,
my Mac unfortunately crashed. I have a Time Machine backup. Using the
Migration Assistant on a new Mac allowed me to get all my documents back,
but unfortunately Postgres is not able to start.

So I was trying to manually set up my Postgres db.

Here are the steps:

- install a fresh copy of Postgres (v11)
- replace the /Library/PostgreSQL/11/data folder with the one I dug out
  from my dead Mac, where I assume my data is stored
- check/fix the owner of the folders (must be 'postgres', I assume)
- start pgAdmin
- enter the PW for the postgres user
- click on "PostgreSQL 11" server to show databases

This is where I have problems. I am prompted with the same pop-up asking me
to enter the password for the user 'postgres'. I type it and I get the
following:

[screenshot of the pgAdmin connection error dialog]

Carlo
Re: insert ..... returning problem
On Thu, 24 Jun 2021 at 15:25, Michael Ivanov wrote:

> Hello, no.
>
> I am using the Postgres C library; namely, I'm calling PQexec().
>
> Best regards,
>
> On 24.06.2021 16:14, Laurenz Albe wrote:
>> On Thu, 2021-06-24 at 12:31 +0300, Michael Ivanov wrote:
>>> I'm trying to insert a record and return a column value using the following query:
>>>
>>>    INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx
>>>
>>> But when I try to prepare the query I get the following error:
>>>
>>>    PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
>>>    ERROR:  syntax error at or near "into"
>>>    LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
>>>                                                                     ^
>>>
>>> Why is 'INTO var' not accepted here?
>>
>> Are you talking about ECPG?
>>
>> Yours,
>> Laurenz Albe

Hi,

Client libraries substitute the values (!) of the variables into the SQL
query (or, in this case, do a PREPARE and supply the values to that), but
the final query must comply with Postgres syntax. You can get the values
listed in the RETURNING clause from the result set, just as you would for a
SELECT.

Regards,
Sándor
OSX: migrating Postgres db from one Mac to another
Hello,
my Mac unfortunately crashed. I have a Time Machine backup. Using the
Migration Assistant on a new Mac allowed me to get all my documents back,
but unfortunately Postgres is not able to start.

So I was trying to manually set up my Postgres db.

Here are the steps:

- install a fresh copy of Postgres (v11)
- replace the /Library/PostgreSQL/11/data folder with the one I dug out
  from my dead Mac, where I assume my data is stored
- check/fix the owner of the folders (must be 'postgres', I assume)
- start pgAdmin
- enter the PW for the postgres user
- click on "PostgreSQL 11" server to show databases

This is where I have problems. I am prompted with the same pop-up asking me
to enter the password for the user 'postgres'. I type it and I get the
message:

   could not connect to server: Connection refused
       Is the server running on host "localhost" (::1) and accepting
       TCP/IP connections on port 5432?
   could not connect to server: Connection refused
       Is the server running on host "localhost" (127.0.0.1) and accepting
       TCP/IP connections on port 5432?
Re: OSX: migrating Postgres db from one Mac to another
In pgAdmin's config.py:

   # -*- coding: utf-8 -*-
   ##########################################################################
   #
   # pgAdmin 4 - PostgreSQL Tools
   #
   # Copyright (C) 2013 - 2021, The pgAdmin Development Team
   # This software is released under the PostgreSQL Licence
   #
   # config.py - Core application configuration settings
   #
   ##########################################################################
   ...
   ##########################################################################
   # Master password is used to encrypt/decrypt saved server passwords
   # Applicable for desktop mode only
   ##########################################################################
   MASTER_PASSWORD_REQUIRED = True  ->  False

On Thu, Jun 24, 2021 at 18:44, Carlo Poso wrote:

> Hello,
> my Mac unfortunately crashed. I have a Time Machine backup. Using the
> Migration Assistant on a new Mac allowed me to get all my documents back,
> but unfortunately Postgres is not able to start.
>
> So I was trying to manually set up my Postgres db.
>
> Here are the steps:
>
> - install a fresh copy of Postgres (v11)
> - replace the /Library/PostgreSQL/11/data folder with the one I dug out
>   from my dead Mac, where I assume my data is stored
> - check/fix the owner of the folders (must be 'postgres', I assume)
> - start pgAdmin
> - enter the PW for the postgres user
> - click on "PostgreSQL 11" server to show databases
>
> This is where I have problems. I am prompted with the same pop-up asking
> me to enter the password for the user 'postgres'. I type it and I get the
> following:
>
> [screenshot of the pgAdmin connection error dialog]
>
> Carlo
RE: removing "serial" from table definitions.
>> serial is not an actual data type -- it is essentially an integer with a
>> default and an automatically created sequence. See:
>>
>> https://www.postgresql.org/docs/13/datatype-numeric.html#DATATYPE-SERIAL
>
> Yes, I understand that serial is just a hint at table creation time, but
> is there a place in the catalog where we can see whether the table was
> created using 'serial'? If yes, I'm looking for a way to remove that.
>
> Another cause of my problem may be the way the default value information
> is stored in pg_attrdef. The difference we see between the source and
> target database is that a schema prefix is displayed with the sequence on
> one side and not on the other. I'm not sure yet whether this really comes
> directly from the catalog or from the way the client reads the table
> definition, maybe together with some search_path differences. Were there
> any changes in this area between PG 9.6 and PG 11?
>
> example:
>
> The default value of column 'id' in table 'db_jobs_history' in database
> 'oms_db' is different on the source and target servers. It is
> 'nextval('admin.db_jobs_history_id_seq'::regclass)' on the source and
> 'nextval('db_jobs_history_id_seq'::regclass)' on the target.

I've probably found the origin of our problem:

https://www.postgresql.org/docs/9.3/release-8-1.html
=> Add proper dependencies for arguments of sequence functions (Tom)

But I won't be able to check that in the next few days.

best regards,

Marc Mamin
Re: insert ..... returning problem
On Thu, 2021-06-24 at 16:25 +0300, Michael Ivanov wrote:
>>> I'm trying to insert a record and return a column value using the following query:
>>>
>>>    INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx
>>>
>>> But when I try to prepare the query I'm getting the following error:
>>>
>>>    PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
>>>    ERROR:  syntax error at or near "into"
>>>    LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
>>>                                                                     ^
>>>
>>> Why is 'INTO var' not accepted here?
>>
>> Are you talking about ECPG?
>
> I am using the Postgres C library; namely, I'm calling PQexec()

Then you don't need it. You may be mixing up SQL syntax and PL/pgSQL syntax.

Just run the statement without the INTO, and it will return a result set,
just like a SELECT would.

By the way, you cannot have parameters with PQexec().
Use PQexecParams() for that.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
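At the SQL level, the point is that RETURNING produces an ordinary result
set, so there is no INTO clause; the client fetches the rows as it would for
a SELECT (with libpq, via PQexecParams() and PQgetvalue()). A sketch, with
the statement name and parameter values made up for illustration:

    PREPARE ins_load AS
        INSERT INTO t_load (load_id, load_type)
        VALUES ($1, $2)
        RETURNING load_idx;

    EXECUTE ins_load(42, 1);   -- hypothetical values; returns one row containing load_idx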
Re: OSX: migrating Postgres db from one Mac to another
Thanks. The message I get after entering the PW is:

   could not connect to server: Connection refused
       Is the server running on host "localhost" (::1) and accepting
       TCP/IP connections on port 5432?
   could not connect to server: Connection refused
       Is the server running on host "localhost" (127.0.0.1) and accepting
       TCP/IP connections on port 5432?

Carlo

On Thu, Jun 24, 2021 at 15:55, Дмитрий Иванов <firstdis...@gmail.com> wrote:

> In pgAdmin's config.py:
>
>    # -*- coding: utf-8 -*-
>    # pgAdmin 4 - PostgreSQL Tools
>    # Copyright (C) 2013 - 2021, The pgAdmin Development Team
>    # This software is released under the PostgreSQL Licence
>    # config.py - Core application configuration settings
>    ...
>    # Master password is used to encrypt/decrypt saved server passwords
>    # Applicable for desktop mode only
>    MASTER_PASSWORD_REQUIRED = True  ->  False
>
> On Thu, Jun 24, 2021 at 18:44, Carlo Poso wrote:
>
>> Hello,
>> my Mac unfortunately crashed. I have a Time Machine backup. Using the
>> Migration Assistant on a new Mac allowed me to get all my documents
>> back, but unfortunately Postgres is not able to start.
>>
>> So I was trying to manually set up my Postgres db.
>>
>> Here are the steps:
>>
>> - install a fresh copy of Postgres (v11)
>> - replace the /Library/PostgreSQL/11/data folder with the one I dug out
>>   from my dead Mac, where I assume my data is stored
>> - check/fix the owner of the folders (must be 'postgres', I assume)
>> - start pgAdmin
>> - enter the PW for the postgres user
>> - click on "PostgreSQL 11" server to show databases
>>
>> This is where I have problems. I am prompted with the same pop-up asking
>> me to enter the password for the user 'postgres'. I type it and I get
>> the following:
>>
>> [screenshot of the pgAdmin connection error dialog]
>>
>> Carlo
Re: removing "serial" from table definitions.
Marc Mamin writes:
> Yes, I understand that serial is just a hint at table creation time, but
> is there a place in the catalog where we can see whether the table was
> created using 'serial'?

No. Where the docs say "these are equivalent", they mean that very
literally.

> The difference we see between the source and target database is that a
> schema prefix is displayed with the sequence on one side and not on the
> other.

This likely has to do with the search_path settings being different in the
sessions inspecting the two DBs. I do not think it is related to
serial-ness at all; it's just the normal behavior of regclass_out for the
OID constant that's the argument of nextval().

regards, tom lane
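To illustrate the point (a sketch, assuming the sequence from the example
exists in schema "admin"): regclass output qualifies the name only when the
schema is not on the current search_path, so two sessions with different
settings display the same stored default differently.

    SET search_path = public;
    SELECT 'admin.db_jobs_history_id_seq'::regclass;
    -- shows: admin.db_jobs_history_id_seq

    SET search_path = admin, public;
    SELECT 'admin.db_jobs_history_id_seq'::regclass;
    -- shows: db_jobs_history_id_seq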
Re: How much data does server side "loread" return?
On Thu, 2021-06-24 at 15:24 +0200, Thorsten Schöning wrote:
> I need to read some large object and am wondering how much data "loread"
> returns when it succeeds at all AND the object contains at least as much
> data as requested.
>
> In that case, does "loread" always return the requested amount of data,
> or does one need to loop when calling that function?

That SQL function just calls the C function lo_read, see
https://www.postgresql.org/docs/current/lo-interfaces.html#LO-READ

The documentation there says:

"The number of bytes actually read is returned; this will be less than
'len' if the end of the large object is reached first. In the event of an
error, the return value is -1."

So it will always read as many bytes as possible.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
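Given that behaviour, a short (or empty) chunk only occurs at the end of the
object, so the loop from the question should be able to rely on the first
EXIT alone, without the byte counter -- a sketch:

    LOOP
        chunk := loread(fd, CHUNK_SIZE);
        EXIT WHEN length(chunk) = 0;           -- size was an exact multiple of CHUNK_SIZE
        -- ... process chunk ...
        EXIT WHEN length(chunk) < CHUNK_SIZE;  -- end of large object reached
    END LOOP;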
Re: insert ..... returning problem
I was just trying to use it like Oracle's "RETURNING column INTO :var"
handling. So in Postgres I should not explicitly specify a destination
variable for the RETURNING value in the query itself, but rather retrieve
the result from the PGresult returned by PQexec().

OK, thanks for the clarification!

On 24.06.2021 17:13, Laurenz Albe wrote:
> On Thu, 2021-06-24 at 16:25 +0300, Michael Ivanov wrote:
>>>> I'm trying to insert a record and return a column value using the following query:
>>>>
>>>>    INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) RETURNING load_idx INTO :idx
>>>>
>>>> But when I try to prepare the query I'm getting the following error:
>>>>
>>>>    PREPARE p1515544c0001 AS insert into t_load (load_id, load_type) values ($1, $2) returning load_idx into $3]:
>>>>    ERROR:  syntax error at or near "into"
>>>>    LINE 1: ...ad_id, load_type) values ($1, $2) returning load_idx into $3
>>>>                                                                     ^
>>>>
>>>> Why is 'INTO var' not accepted here?
>>>
>>> Are you talking about ECPG?
>>
>> I am using the Postgres C library; namely, I'm calling PQexec()
>
> Then you don't need it. You may be mixing up SQL syntax and PL/pgSQL syntax.
>
> Just run the statement without the INTO, and it will return a result set,
> just like a SELECT would.
>
> By the way, you cannot have parameters with PQexec().
> Use PQexecParams() for that.
>
> Yours,
> Laurenz Albe

--
Михаил Иванов
E-mail: iv...@isle.spb.ru
Re: OSX: migrating Postgres db from one Mac to another
Carlo Poso writes:
> The message I get after entering the PW is:
>
>    could not connect to server: Connection refused
>        Is the server running on host "localhost" (::1) and accepting
>        TCP/IP connections on port 5432?
>    could not connect to server: Connection refused
>        Is the server running on host "localhost" (127.0.0.1) and accepting
>        TCP/IP connections on port 5432?

So the step you missed is "start the Postgres server". On OSX that's
usually done by setting up a LaunchDaemon script, IIRC (it's been a while
since I messed with that).

regards, tom lane
Re: OSX: migrating Postgres db from one Mac to another
On 6/24/21 6:53 AM, Carlo Poso wrote:
> Hello,
> my Mac unfortunately crashed. I have a Time Machine backup. Using the
> Migration Assistant on a new Mac allowed me to get all my documents back,
> but unfortunately Postgres is not able to start.
>
> So I was trying to manually set up my Postgres db.
>
> Here are the steps:
>
> - install a fresh copy of Postgres (v11)
> - replace the /Library/PostgreSQL/11/data folder with the one I dug out
>   from my dead Mac, where I assume my data is stored
> - check/fix the owner of the folders (must be 'postgres', I assume)

You have not started the server.

> - start pgAdmin

pgAdmin != Postgres

pgAdmin is just a client; the server needs to be started in order for it to
connect.

> - enter the PW for the postgres user
> - click on "PostgreSQL 11" server to show databases
>
> This is where I have problems. I am prompted with the same pop-up asking
> me to enter the password for the user 'postgres'. I type it and I get the

The message is telling you what the problem is: the server is not running,
or, if it were running, it is not listening on port 5432.

> message "could not connect to server: Connection refused
>     Is the server running on host "localhost" (::1) and accepting
>     TCP/IP connections on port 5432?
> could not connect to server: Connection refused
>     Is the server running on host "localhost" (127.0.0.1) and accepting
>     TCP/IP connections on port 5432?

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: Psql wants to use IP6 when connecting to self using tcp...
On 2021-06-23 20:33:33 -0400, Tom Lane wrote:
> Jerry LeVan writes:
>> bigbox has a freshly installed Fedora 34 system and it runs postgresql 13.3
>>
>> the following are being run ON bigbox
>>
>> nslookup bigbox returns the correct ipv4 address
>> nslookup bigbox.skynet returns the correct ipv4 address
>>
>> psql -h bigbox.skynet allows me to connect to the database
>> psql -h bigbox fails because the ipv6 address is returned and there is no
>> hba entry..
>
> It's quite odd that nslookup isn't agreeing with glibc's name
> resolution code.

AFAIK nslookup uses only DNS. Glibc uses all methods in nsswitch.conf.
Typically that also includes "files", and at least on Debian/Ubuntu that
has higher priority than "dns", which is why I advised the OP to check
/etc/hosts. As Alan noted, mdns4_minimal is also often there and might
return spurious results.

> I suspect that the problem lies with systemd,

This is the next one, yes. On my Ubuntu laptop, the hosts line reads:

    hosts: files mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return]
           dns myhostname mymachines

(wrapped for readability)

So glibc first looks in /etc/hosts, then does an MDNS (Avahi) lookup, then
asks systemd-resolved, and only then gets around to doing a traditional DNS
lookup. (And systemd-resolved does do some magic for the local hostname, so
that might indeed be the problem.)

        hp

--
Peter J. Holzer | h...@hjp.at | http://www.hjp.at/
"Story must make more sense than reality."
   -- Charles Stross, "Creative writing challenge!"
Re: Psql wants to use IP6 when connecting to self using tcp...
"Peter J. Holzer" writes: > On 2021-06-23 20:33:33 -0400, Tom Lane wrote: >> It's quite odd that nslookup isn't agreeing with glibc's name >> resolution code. > AFAIK nslookup uses only DNS. Glibc uses all methods in nsswitch.conf. Ah, right. So looking at a Fedora 34 installation, I see in /etc/nsswitch.conf: ... hosts: files myhostname resolve [!UNAVAIL=return] dns ... /etc/hosts is pretty innocuous: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 On the other hand, "man nss-myhostname" says o The local, configured hostname is resolved to all locally configured IP addresses ordered by their scope, or -- if none are configured -- the IPv4 address 127.0.0.2 (which is on the local loopback) and the IPv6 address ::1 (which is the local host). so it might be that that's what is causing the weird behavior. On this machine, with a recent PG build installed but no running server, I see: $ psql -h rpi3 psql: error: connection to server at "rpi3" (fe80::ba27:ebff:fe51:3b34), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? connection to server at "rpi3" (192.168.1.61), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? so indeed *something* is injecting an IPv6 address that I didn't ask for. However, I see the same two addresses probed with the fully-qualified machine name, so it's not quite like Jerry's result. regards, tom lane