Re: Errors when restoring backup created by pg_dumpall
CREATE EXTENSION cube;

I do not know if you might need this one as well. I am assuming that you are working on a GiST server.

CREATE EXTENSION earthdistance;

This ought to be useful: https://gist.cs.berkeley.edu/pggist/

You might want to read this: https://docs.gitlab.com/ee/install/postgresql_extensions.html

My advice is to go to Google, then ChatGPT, if you do not get any good feedback here. Hopefully, this will give you good leads.

On Sat, Nov 30, 2024, 8:27 PM PopeRigby wrote:

> On 11/30/24 18:41, David G. Johnston wrote:
>> On Saturday, November 30, 2024, PopeRigby wrote:
>>> On 11/30/24 17:27, David G. Johnston wrote:
>>>> On Saturday, November 30, 2024, PopeRigby wrote:
>>>>> On 11/29/24 17:47, Adrian Klaver wrote:
>>>>>> On 11/29/24 17:34, PopeRigby wrote:
>>>>>
>>>>> psql:all.sql:4104: ERROR:  type "earth" does not exist
>>>>> LINE 1: ...ians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth
>>>>> QUERY:  SELECT cube(cube(cube(earth()*cos(radians($1))*cos(radians($2))),earth()*cos(radians($1))*sin(radians($2))),earth()*sin(radians($1)))::earth
>>>>> CONTEXT:  SQL function "ll_to_earth" during inlining
>>>>>
>>>>> The earthdistance module is even getting added before the table with the earth type is added, so shouldn't there be no problem?
>>>>
>>>> The fact that "earth" is not schema qualified leads me to suspect you are getting bit by safe search_path environment rules.
>>>>
>>>> David J.
>>>
>>> Ah. How can I fix that?
>>
>> Since you are past the point of fixing the source to produce valid dumps…that leaves finding the places in the text that lack the schema qualification and manually adding them in.
>>
>> David J.
>
> Oh boy. How can I prevent this from happening again?
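
A hedged sketch of one way to act on David's diagnosis. It assumes cube and earthdistance live in the default public schema, and it relies on the documented behavior that a SQL function carrying a SET clause is not inlined, so the unqualified "earth" references inside ll_to_earth() no longer have to resolve under the empty search_path that pg_dumpall restores run with. Run it against the target database once the extensions exist (or splice it into all.sql right after its CREATE EXTENSION lines); verify the signatures with \df before relying on it.

-- Hedged workaround sketch, not the thread's definitive fix:
CREATE EXTENSION IF NOT EXISTS cube;           -- earthdistance depends on cube
CREATE EXTENSION IF NOT EXISTS earthdistance;  -- assumed to be installed in public

-- Pin a search_path on the earthdistance helpers so their bodies, which
-- reference the "earth" type without a schema, still resolve during restore.
ALTER FUNCTION public.ll_to_earth(float8, float8) SET search_path = public;
ALTER FUNCTION public.earth() SET search_path = public;
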
Re: Alter table fast
This is the right approach, Peter J. Holzer, from a well-seasoned DBA perspective:

ALTER TABLE working_table ADD COLUMN B INTEGER;
UPDATE working_table SET B = A;

Bear in mind the indexes or existing references to and from other tables and act accordingly -- define the new and drop the old. Good luck.

On Sun, Jan 12, 2025, 2:20 PM Peter J. Holzer wrote:

> On 2025-01-09 20:52:27 +0100, sham...@gmx.net wrote:
>> Am 09.01.25 um 20:17 schrieb veem v:
>>>> Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
>>>
>>> It's for aligning the database column types to the data model and
>>> it's happening across all the upstream downstream systems. I was
>>> thinking if this can be made faster with the single line alter
>>> statement "Alter table <table_name> alter column <column_name> type
>>> numeric(15,0) USING <column_name>::NUMERIC(15,0);"
>>
>> Hmm, I would rather change numeric(15,0) to bigint if I had to "align"
>> types across systems.
>
> I'm also wondering what "the data model" is.
>
> If I have numeric(15,0) in an abstract data model, that means that I
> expect values larger than 99,999,999,999,999 but at most
> 999,999,999,999,999. That seems to be oddly specific and also somewhat
> at odds with reality when until now there apparently haven't been any
> values larger than 2,147,483,647. What kind of real world value could
> suddenly jump by more than 5 orders of magnitude but certainly not by 7?
>
> A bigint is much less precise (more than 2,147,483,647 but not more
> than 9,223,372,036,854,775,807) and therefore more suitable for values
> where you don't really know the range.
>
> However, for the problem at hand, I doubt it makes any difference.
> Surely converting a few million values takes much less time than
> rewriting a 50 GB table and all its indexes.
>
> So there isn't really a faster way to do what Veem wants. There may
> however be a less disruptive way: He could create a new column with the
> new values (which takes at least as long but can be done in the
> background) and then switch it over and drop the old column.
>
>         hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"
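
For anyone wanting to try the less disruptive route Peter describes, a rough, hedged sketch follows. The table and column names are made up, the batch boundaries assume an integer primary key called id, and the trigger needed to keep the new column in sync during the backfill is only indicated in a comment, not written out.

-- Hedged sketch of "new column, backfill, switch over"; adapt before use.
ALTER TABLE working_table ADD COLUMN a_new NUMERIC(15,0);   -- metadata-only, fast

-- Backfill in batches so locks stay short and vacuum can keep up.
UPDATE working_table SET a_new = a WHERE id BETWEEN      1 AND  500000;
UPDATE working_table SET a_new = a WHERE id BETWEEN 500001 AND 1000000;
-- ... continue (or drive the ranges from a script); also add a trigger so
-- rows inserted or updated meanwhile get a_new populated.

-- Switch over in one short transaction once the backfill is complete.
BEGIN;
ALTER TABLE working_table DROP COLUMN a;
ALTER TABLE working_table RENAME COLUMN a_new TO a;
COMMIT;
-- Recreate any indexes, constraints, or references that pointed at the old column.
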
Re: Getting error "too many clients already" despite having a db connection limit set
You might want to explore pgpool and pgbouncer. Depending on your use case you might want to glue them together.

On Mon, Jun 16, 2025, 10:39 AM Tom Lane wrote:

> adolfo flores writes:
>> I hope you can help me with an issue we're experiencing. We have an app
>> running on Kubernetes that opens a huge number of connections within a
>> couple of seconds.
>
> You need to fix that app to be less unfriendly, or maybe put it behind
> a connection pooler.
>
>> Is it expected behavior to reach the max_connections limit when that app
>> opens many connections in a short period of time, even if a connection
>> limit is set for that database and everything else uses no more than 10%
>> of the max_connections?
>
> It takes a finite amount of time for a new backend process to figure
> out which database it's supposed to connect to and then detect whether
> the per-DB connection limit is exceeded. In the meantime, that
> session does count against the global limit, so yeah this isn't
> surprising if the connection arrival rate is high enough.
>
> regards, tom lane
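
To make Tom's distinction concrete, a hedged illustration; the database name, role name, and numbers are invented. Per-database and per-role limits are checked only after a backend has started up and identified its target database, so during a connection burst those backends still count toward the global max_connections.

-- Hypothetical per-database and per-role caps; they do not reserve slots
-- against a burst, since they are enforced after the backend has started.
ALTER DATABASE app_db CONNECTION LIMIT 50;
ALTER ROLE app_user CONNECTION LIMIT 40;

-- Quick look at current load versus the global ceiling:
SELECT count(*) AS backends,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
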