Re: Upgrade from PostgreSQL 9.6 to 11
On 10/6/19 9:52 a.m., Pawan Sharma wrote:
> Thanks all. Is pg_upgrade the best method whether I am doing the
> upgrade on the same server or a different server?
> Same server means: source and target on the same server.
> Different: source and target are on different servers.

A traditional upgrade is by definition on the same server (if we are not talking about logical replication or any other equivalent technology). So yes, pg_upgrade is what would be best, along with -k (--link) for maximum speed.

On Mon, Jun 10, 2019, 12:07 PM Michael Paquier <mich...@paquier.xyz> wrote:
> On Mon, Jun 10, 2019 at 09:00:38AM +0300, Achilleas Mantzios wrote:
> > On 10/6/19 7:36 a.m., Pawan Sharma wrote:
> >> What is the best way to upgrade from PostgreSQL 9.6 to PostgreSQL
> >> 11 instead of pg_upgrade?
> >
> > Why not pg_upgrade? If the size is near the 3TB mark as you say,
> > pg_upgrade is the fastest IMHO.
>
> When it comes to upgrades, you could also look at logical replication.
> Unfortunately your origin version cannot do that. Have you looked at
> things like BDR or Slony? They are logical-based, meaning lower
> downtime than pg_upgrade, but they take longer. For 3TB pg_upgrade
> can also be very fast if you use --link. Be wary of having backups
> though, all the time.
> --
> Michael

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
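For reference, a pg_upgrade run with hard links typically looks something like the sketch below. The paths and binary locations are examples, not taken from this thread; adjust them to your layout. Note that once a --link upgrade starts modifying the new cluster, the old cluster should not be started again.

```shell
# Stop both clusters first, then run as the postgres OS user.
# --check performs a dry run without modifying anything; drop it
# for the real upgrade.
/usr/lib/postgresql/11/bin/pg_upgrade \
    --old-datadir=/var/lib/postgresql/9.6/main \
    --new-datadir=/var/lib/postgresql/11/main \
    --old-bindir=/usr/lib/postgresql/9.6/bin \
    --new-bindir=/usr/lib/postgresql/11/bin \
    --link --check
```

With --link, data files are hard-linked instead of copied, which is why it stays fast even at the 3TB scale discussed here.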
Re: Connection refused (0x0000274D/10061)
Hello,

Thanks for your quick response. I am new to pgsql and have not configured it; can you please give me some guidance? I would be highly obliged if you could assist me.

I am using Windows 8.1, 64-bit.
ruby 2.3.3p222 (2016-11-21 revision 56859) [i386-mingw32]
Rails 5.2.3
Trying to set up Ruby on Rails on my local host.

On Mon, Jun 10, 2019 at 1:13 AM Ray O'Donnell wrote:
> On 09/06/2019 19:51, Sourav Majumdar wrote:
> >
> > could not connect to server: Connection refused (0x274D/10061)
> > Is the server running on host "localhost" (::1) and accepting TCP/IP
> > connections on port 5432? could not connect to server: Connection
> > refused (0x274D/10061) Is the server running on host "localhost"
> > (127.0.0.1) and accepting TCP/IP connections on port 5432?
>
> Hi there,
>
> From the above, you have the server listening on port 3000 (you would
> have set this in postgresql.conf - did you?), but the client is trying
> to connect on port 5432 (the default).
>
> You therefore need to tell the client to connect to port 3000 - if you
> are using psql, you need the -p option:
>
> psql -p 3000 (... etc ...)
>
> HTH,
>
> Ray.
>
> --
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie

--
*Regards-*
*Sourav Majumdar*
*Mob.- 9732354141*
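Since the poster is setting up Rails, the client-side port lives in config/database.yml. A minimal sketch of that file follows; every value here (database name, username, password) is illustrative, not from this thread, and the port must match whatever is actually set in postgresql.conf (5432 is the PostgreSQL default):

```yaml
# config/database.yml -- illustrative values only
development:
  adapter: postgresql
  host: localhost
  port: 5432                    # must match "port" in postgresql.conf
  database: myapp_development   # hypothetical name
  username: postgres
  password: secret
```

If the server really listens on a non-default port, changing `port:` here is the Rails equivalent of psql's -p option.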
Re: Upgrade from PostgreSQL 9.6 to 11
On 6/9/19 11:36 PM, Pawan Sharma wrote:
> Hello All.
>
> What is the best way to upgrade from PostgreSQL 9.6 to PostgreSQL 11
> instead of pg_upgrade?
> - Less downtime.
> - Approx. database size is 1-3TB.

If you really don't want to do pg_upgrade, then a possibility is a multi-threaded pg_dump using the directory format, followed by a parallel rsync if required, and then a multi-threaded pg_restore.

--
Angular momentum makes the world go 'round.
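The dump/rsync/restore route described above might look roughly like this; paths, job counts, hostname, and the database name are examples, not from the thread:

```shell
# Parallel dump in directory format (-Fd) with 8 worker jobs (-j 8),
# run against the old 9.6 cluster.
pg_dump -Fd -j 8 -f /backup/mydb.dir mydb

# Optionally copy the dump directory to the new host.
rsync -a /backup/mydb.dir/ newhost:/backup/mydb.dir/

# Parallel restore into an empty database on the new 11 cluster.
pg_restore -j 8 -d mydb /backup/mydb.dir
```

The directory format is the only pg_dump format that supports parallel jobs, which is what makes this competitive on multi-terabyte databases, although total downtime is still normally longer than pg_upgrade --link.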
Re: Inserting into the blob
On 6/9/19 10:06 AM, Igor Korot wrote:
> Hi, Adrian,
>
> On Sun, Jun 9, 2019 at 11:14 PM Adrian Klaver wrote:
> > On 6/9/19 8:28 AM, Igor Korot wrote:
> > > Hi, ALL,
> > > How do I insert a png file (the content, not the name) into the
> > > BLOB field in a table in a PostgreSQL DB?
> >
> > Are you talking about this data type?:
> > https://www.postgresql.org/docs/11/datatype-binary.html
> > Or this?:
> > https://www.postgresql.org/docs/11/largeobjects.html
>
> Which one is best to hold images of unknown size?

Probably bytea, as it is easier to work with overall.

> What client/language are you using?
>
> psql if possible.

Take a look at (NOTE: need to be superuser):

https://www.postgresql.org/docs/current/functions-admin.html

pg_read_binary_file is similar to pg_read_file, except that the result is a bytea value; accordingly, no encoding checks are performed. In combination with the convert_from function, this function can be used to read a file in a specified encoding.

> Thank you.

--
Adrian Klaver
adrian.kla...@aklaver.com
found xmin * from before relfrozenxid *
Hello PostgreSQL users and admins,

I need some help with my problem. I looked at the postgresql logs and they are littered with error messages like:

ERROR: found xmin 3875696185 from before relfrozenxid 1599104090
CONTEXT: automatic vacuum of table "FMS.pg_catalog.pg_database"
ERROR: found xmin 3875696185 from before relfrozenxid 1599104090
CONTEXT: automatic vacuum of table "FMS.pg_catalog.pg_authid"
ERROR: found xmin 158590964 from before relfrozenxid 1599104090
CONTEXT: automatic vacuum of table "FMS.pg_catalog.pg_auth_members"

The first error occurred 2 months ago. The database has never crashed. A year ago it was upgraded with pg_upgrade from version 9.2.

DB version: PostgreSQL 10.4 (Debian 10.4-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit

How can I solve these errors?

Evaldas
Re: Inserting into the blob
Hi, Adrian,

On Mon, Jun 10, 2019 at 8:38 AM Adrian Klaver wrote:
> On 6/9/19 10:06 AM, Igor Korot wrote:
> > Which one is best to hold images of unknown size?
>
> Probably bytea as it is easier to work with overall.

OK.

> > What client/language are you using?
> >
> > psql if possible.
>
> Take a look at (NOTE: need to be superuser):
>
> https://www.postgresql.org/docs/current/functions-admin.html
>
> pg_read_binary_file is similar to pg_read_file, except that the result
> is a bytea value; accordingly, no encoding checks are performed. In
> combination with the convert_from function, this function can be used
> to read a file in a specified encoding.

According to
https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
the file needs to be on the server inside the PGDATA directory. It is not a problem in general, but just curious - is there a more generic solution (to get the file from the client)?

Thank you.

> --
> Adrian Klaver
> adrian.kla...@aklaver.com
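With bytea and psql, the server-side read discussed above amounts to something like the following; the table and file names are made up for illustration, the path is interpreted relative to the server's data directory, and the call requires superuser rights:

```sql
-- Hypothetical table holding images as bytea.
CREATE TABLE images (id serial PRIMARY KEY, data bytea);

-- Read the file on the *server* and insert its bytes.
INSERT INTO images (data)
SELECT pg_read_binary_file('staging/picture.png');
```

This is exactly why the follow-up question arises: the file must already be on the server, so it does not help a client that wants to push local files.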
Drive Architecture for new PostgreSQL Environment
Hello,

We're in the process of building a new PostgreSQL environment on Scientific Linux release 7.6. The new environment will have a Primary & 2 Standby servers & have asynchronous replication. It will use repmgr to manage failover/switchover events.

In the past, we've always had separate physical drives for data, pg_xlog & backups. We did this as a precaution against disk failure. If we lose one, we would still have the other two to recover from. Is that really necessary anymore, with having a repmgr cluster?

My Linux Admin wants to do the following instead:

What I propose is to set this up as a single drive and isolate the three directories using the Linux logical volume manager. As a result, each directory would be on a separate filesystem. This would provide the isolation that you require but would give me the ability to modify the sizes of the volumes should you run out of space. Also, since this is a VM and all drives are essentially "virtual", the performance of this different drive structure would be essentially identical to one with three separate drives.

Your thoughts would be appreciated.

Regards,
Karin Hilbert
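The admin's LVM proposal would look something like this sketch; the device name, volume-group name, sizes, and filesystem choice are all invented for illustration (run as root):

```shell
# One physical disk carved into three logical volumes, so each
# directory gets its own independently resizable filesystem.
pvcreate /dev/sdb
vgcreate pgvg /dev/sdb
lvcreate -L 200G -n data    pgvg
lvcreate -L  50G -n pg_xlog pgvg
lvcreate -L 300G -n backups pgvg
mkfs.xfs /dev/pgvg/data
mkfs.xfs /dev/pgvg/pg_xlog
mkfs.xfs /dev/pgvg/backups
# Growing a volume later would be, e.g.:
#   lvextend -L +50G /dev/pgvg/data && xfs_growfs /path/to/data/mount
```

Note this isolates filesystems, not failure domains: all three volumes still sit on the same underlying disk, which is the point Andreas raises in his reply.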
Re: Drive Architecture for new PostgreSQL Environment
On 10.06.19 at 18:35, Hilbert, Karin wrote:
> We did this as a precaution against disk failure. If we lose one, we
> would still have the other two to recover from. Is that really
> necessary anymore, with having a repmgr cluster?

Repmgr is for HA, not for backup/recovery.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company. www.2ndQuadrant.com
Re: Inserting into the blob
On Monday, June 10, 2019 at 11:30:42 AM -0500, Igor Korot wrote:
> According to
> https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
> the file needs to be on the server inside PGDATA directory.
> It is not a problem in general, but just curious - is there a more
> generic solution (to get the file from the client)?

When we migrated a Sybase database to PG, we used the TEXT format for the COPY command and loaded all the image data too. The column of image data must be coded as 2-byte hex values and must have \\x in front of it. The file was just in "user land", i.e.

COPY tablename FROM 'myfile'

matthias
--
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
May, 9: Спаси́бо освободители! Thank you very much, Russian liberators!
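The hex encoding described above can be produced with standard tools. A small sketch (the sample file stands in for real image data; the surrounding table layout is up to you) that turns a file into the \\x-prefixed hex form COPY's TEXT format expects for a bytea column:

```shell
# Create a tiny sample file (stands in for real binary data).
printf 'hello' > /tmp/sample.bin

# Hex-encode the bytes and prefix with \\x, as needed in a COPY TEXT file.
hex=$(od -An -v -tx1 /tmp/sample.bin | tr -d ' \n')
line="\\\\x$hex"
printf '%s\n' "$line"   # -> \\x68656c6c6f
```

Each such line then becomes one bytea field in the file handed to COPY tablename FROM 'myfile'.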
Re: Inserting into the blob
Igor Korot wrote:
> It is not a problem in general, but just curious - is there a more
> generic solution (to get the file from the client)?

With psql:

\lo_import /path/to/file

It creates a large object with the contents of the file from the client file system and returns its unique ID, in the output and into the :LASTOID variable in psql.

There's no equivalent for bytea contents. Bytea contents need to be either injected into the query as text, or passed separately as parameters, but psql does not provide helper methods for this, and it also lacks binary support for variables. So it's easier to implement "upload bytea to server" in a scripting language than in psql.

Alternatively, if you don't care about the contents being written twice, a file can be imported as a large object, copied as bytea into a row of the target table, and the large object purged immediately. In psql, a sequence like this should work:

\lo_import /path/to/file
\set tmp_oid :LASTOID
insert into tablename ( bytea_col, [other columns] )
  values ( lo_get(:tmp_oid), [other values] );
\lo_unlink :tmp_oid

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
Re: found xmin * from before relfrozenxid *
Evaldas Užpalis writes:
> Hello PostgreSQL users and admins,
>
> I need some help with my problem. I looked at the postgresql logs and
> they are littered with error messages like:
>
> ERROR: found xmin 3875696185 from before relfrozenxid 1599104090
> CONTEXT: automatic vacuum of table "FMS.pg_catalog.pg_database"
> [...]
>
> First error occurred 2 months ago. The database has never crashed. A
> year ago it was upgraded with pg_upgrade from version 9.2.
>
> DB version: PostgreSQL 10.4 (Debian 10.4-2.pgdg90+1) on
> x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0
> 20170516, 64-bit

Your bug was fixed in minor version 10.5 :-) Please go look at the release notes and then upgrade to the latest minor release.

HTH

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
Re: Inserting into the blob
On 6/10/19 9:30 AM, Igor Korot wrote:
> According to
> https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
> the file needs to be on the server inside PGDATA directory.
> It is not a problem in general, but just curious - is there a more
> generic solution (to get the file from the client)?

This would depend on what it is you are trying to accomplish:

1) Are you really wanting to insert a file at a time at the psql command line?

2) If not, then is there a program you are using/writing that will insert the data?

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: Inserting into the blob
Hi, Adrian,

On Mon, Jun 10, 2019 at 7:03 PM Adrian Klaver wrote:
> On 6/10/19 9:30 AM, Igor Korot wrote:
> > It is not a problem in general, but just curious - is there a more
> > generic solution (to get the file from the client)?
>
> This would depend on what it is you are trying to accomplish:
>
> 1) Are you really wanting to insert a file at a time at the psql
> command line?

Yes.

> 2) If not then is there a program you are using/writing that will
> insert the data?

More like the program will query for the data...

Thank you.

> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Re: Inserting into the blob
> On Jun 10, 2019, at 6:40 AM, Igor Korot wrote:
> >> 1) Are you really wanting to insert a file at a time at the psql
> >> command line?
> >
> > Yes.

Gnarly. I suppose you could open the PDF in emacs and tell emacs NOT to render it, cut the entire buffer and paste it, properly quoted, into your psql command line. But the \lo stuff seems much more likely to work.

> >> 2) If not then is there a program you are using/writing that will
> >> insert the data?
> >
> > More like the program will query for the data...
Re: Inserting into the blob
On Mon, Jun 10, 2019 at 8:32 PM Rob Sargent wrote:
> > >> 1) Are you really wanting to insert a file at a time at the psql
> > >> command line?
> > >
> > > Yes.
>
> Gnarly. I suppose you could open the pdf in emacs and tell emacs to NOT
> render it. Cut the entire buffer and paste it, properly quoted, into
> your psql command line. But \lo stuff seems much more likely to work.

This is what I've used for text:

\set file_content `cat './file.txt'`
SELECT :'file_content';

For smallish files I'd probably just stick with the same theme but encode the binary data as Base64 and then decode it into the bytea field.

For not-so-small files you're probably better off storing the content elsewhere and inserting location data into the database.

I have not yet had the desire to incorporate the large object API into my designs.

David J.
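The Base64 suggestion above can be sketched as follows; the sample file stands in for real binary data, the table and column names are invented, and `base64 -w0` (disable line wrapping) is the GNU coreutils spelling:

```shell
# Sample file standing in for binary data.
printf 'hello' > /tmp/file.bin

# Base64-encode it and build an INSERT that decodes it server-side
# into a (hypothetical) bytea column.
b64=$(base64 -w0 /tmp/file.bin)
sql="INSERT INTO images (data) VALUES (decode('$b64', 'base64'));"
printf '%s\n' "$sql"
```

PostgreSQL's decode(text, 'base64') turns the text back into bytea on the server, so only printable characters ever travel through psql.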
Re: Inserting into the blob
> This is what I've used for text:
>
> \set file_content `cat './file.txt'`
> SELECT :'file_content';
>
> For smallish files I'd probably just stick with the same theme but
> encode the binary data as Base64 and then decode it into the bytea
> field.
>
> For not-so-small files probably better off storing the content
> elsewhere and inserting location data into the database.
>
> I have not yet had the desire to incorporate the large object API into
> my designs.
>
> David J.

Yes, if you're comfortable managing the great leap of faith that the file remains where it once claimed to be. And of course the other camp must contend with what could be excessively large database data directories.
Re: Inserting into the blob
Hi, David,

On Mon, Jun 10, 2019 at 10:45 PM David G. Johnston wrote:
> This is what I've used for text:
>
> \set file_content `cat './file.txt'`
> SELECT :'file_content';
>
> For smallish files I'd probably just stick with the same theme but
> encode the binary data as Base64 and then decode it into the bytea
> field.
>
> For not-so-small files probably better off storing the content
> elsewhere and inserting location data into the database.
>
> I have not yet had the desire to incorporate the large object API into
> my designs.
>
> David J.

You are lucky you didn't work with the face databases... Or photographs...

Thank you.