bug regclass::oid
Hi,

I want to report that this test in a query does not work when the table name or schema name contains a space:

a.table_name::regclass = b.attrelid

where a is information_schema.columns and b is pg_attribute. I tried this in two different databases: the first contains a table with a space in its name (the query fails there), the second does not (it works fine). The same problem occurs when getting an OID from a schema name:

SELECT 'public'::regnamespace::oid;  -- works

CREATE SCHEMA "this is test";
SELECT 'this is test'::regnamespace::oid;  -- does not work

I also have a question: how do I join information_schema.columns and pg_attribute?

Thanks, regards
Re: bug regclass::oid
On 6/13/19 8:14 AM, John Mikel wrote:
> Hi,
>
> I want to report that this test in a query does not work when the table
> name or schema name contains a space:
>
> a.table_name::regclass = b.attrelid
>
> where a is information_schema.columns and b is pg_attribute. I tried this
> in two different databases: the first contains a table with a space in
> its name (the query fails there), the second does not (it works fine).
> The same problem occurs when getting an OID from a schema name:
>
> SELECT 'public'::regnamespace::oid;  -- works
>
> CREATE SCHEMA "this is test";
> SELECT 'this is test'::regnamespace::oid;  -- does not work

Try:

SELECT '"this is test"'::regnamespace::oid;

Same for the table name. As an example:

select '"space table"'::regclass;
   regclass
---------------
 "space table"
(1 row)

> I also have a question: how do I join information_schema.columns and
> pg_attribute?
>
> Thanks, regards

--
Adrian Klaver
adrian.kla...@aklaver.com
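For the join question the thread leaves open, here is a minimal sketch (not from the thread itself) that sidesteps the quoting problem by letting format('%I.%I', ...) add the double quotes before the regclass cast:

-- format('%I.%I', ...) produces a properly quoted, schema-qualified name,
-- so tables like "space table" resolve correctly through ::regclass.
SELECT c.table_schema, c.table_name, c.column_name, a.attnum
FROM information_schema.columns AS c
JOIN pg_catalog.pg_attribute AS a
  ON a.attrelid = format('%I.%I', c.table_schema, c.table_name)::regclass
 AND a.attname = c.column_name
WHERE c.table_schema NOT IN ('pg_catalog', 'information_schema');

An alternative is to skip the cast entirely and join through pg_class and pg_namespace on relname/nspname, which avoids building names as strings altogether.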
Re: Advice on setting cost for function
guy...@relevantlogic.com writes:
> The documentation in CREATE FUNCTION is fine as far as it goes regarding
> the COST setting, but that isn’t very far, and I haven’t had any luck
> finding good advice.

Fair complaint. A quick data dump: procost = 1 is supposed to represent the cost of a simple built-in function --- think addition or comparison. Expensive C functions, such as tsvector parsing, are supposed to have procosts like 10 or 100. Keep in mind that procost is actually scaled by cpu_operator_cost (0.0025 typically) before being folded into a plan cost estimate.

PL functions are inevitably going to be way more expensive than a simple built-in function. It's likely that the default procost setting of 100 is a drastic underestimate in most cases, and that a saner default might be more like 1000. I'm hesitant to change that for fear of causing surprising plan changes, though.

Both the CREATE FUNCTION man page and the pg_proc catalog documentation claim that for a set-returning function, procost is per-output-row. That may have been true when written but it seems to be horsepucky now; it's only charged once regardless. We should change the docs.

> If I have a function that looks up a single value from an index, should
> that be lower than 100? 20, say?

Well, if you accept the planner's default estimate that a single-row indexed lookup costs around 8 cost units (twice random_page_cost), then a function embodying that should also cost that much; dividing by cpu_operator_cost leads to the conclusion that its procost should be 3200. There are lots of reasons to be more optimistic than that, but for sure I wouldn't think that a function that embodies database access should have procost less than 100.

> I can imagine that a SQL function which is just inlined might ignore the
> COST estimate. Is that so? Are there other such considerations?

Yeah, once it gets inlined its procost marking is no longer a factor; we'll look at the substituted expression instead.

> If I have a function that calls multiple other functions, each of which
> does a reasonable amount of work, should I set the caller to a higher
> COST, or will Postgres use the costs of the functions it calls?

Nope, you'll need to adjust the cost of the calling function. Except for the inlined case, PG will just take the cost marking at face value.

I believe that the PostGIS people just recently increased the cost markings on all their expensive functions to better reflect reality. You might want to go dig in their git repo to see what they did (I don't think those changes are released yet).

> In general, a section in the CREATE FUNCTION documentation of two or
> three paragraphs with a few examples and general guidelines would be very
> helpful. I would be happy to write it if someone will explain it to me.

Have at it ...

			regards, tom lane
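To make the arithmetic above concrete, here is a minimal sketch (the function and table are hypothetical, not from the thread) of marking a function that does one indexed lookup with the procost derived above:

-- Hypothetical single-row indexed lookup wrapped in a function.
-- PL/pgSQL functions are never inlined, so the COST marking always applies.
CREATE FUNCTION lookup_price(item_id int) RETURNS numeric
LANGUAGE plpgsql STABLE AS $$
BEGIN
  RETURN (SELECT price FROM items WHERE id = item_id);
END;
$$;

-- ~8 cost units (twice random_page_cost) / cpu_operator_cost (0.0025) = 3200.
ALTER FUNCTION lookup_price(int) COST 3200;

Had this been written as a one-statement LANGUAGE sql function, the planner might inline it and ignore the COST marking entirely, per the inlining caveat above.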
RE: Drive Architecture for new PostgreSQL Environment
From: Hilbert, Karin
> Hello,
>
> We're in the process of building a new PostgreSQL environment on
> Scientific Linux release 7.6. The new environment will have a Primary &
> 2 Standby servers & have asynchronous replication. It will use repmgr to
> manage failover/switchover events.
>
> In the past, we've always had separate physical drives for data, pg_xlog
> & backups. We did this as a precaution against disk failure. If we lose
> one, we would still have the other two to recover from. Is that really
> necessary anymore, with having a repmgr cluster? My Linux Admin wants to
> do the following instead:
>
> "What I propose is to set this up as a single drive and isolate the
> three directories using the Linux logical volume manager. As a result,
> each directory would be on a separate filesystem. This would provide the
> isolation that you require but would give me the ability to modify the
> sizes of the volumes should you run out of space. Also, since this is a
> VM and all drives are essentially “virtual”, the performance of this
> different drive structure would be essentially identical to one with
> three separate drives."

===

As with so many situations, “it depends”. 😊

I think the most important part you mentioned is that you're in a VM, so it's really up to your host server and you can do anything you like. I'd probably make 3 separate virtual disks so you can expand them as needed individually.

We use real/standalone hardware and create 1 large RAID6 array with LVM on top, then create partitions on top of LVM. Our tablespace is in 1 partition and the rest is in another partition, and backups are mirrored to another server.

I can probably come up with other ways to do things, like the tablespace on SSD while the logs & backups are on some slower but perhaps “more durable” storage (like a NAS/SAN/whatever). Our hardware can support 2 1TB M.2 drives in RAID1, which makes me go “hmm, very fast access for the tablespace”. 😊 Probably can't convince the “powers” to buy it though.

It really does depend on what's important to you and what resources you have available (including budget).

HTH,
Kevin
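To illustrate the "tablespace on its own filesystem" layout Kevin describes, a minimal sketch (mount point, tablespace, table, and database names are all hypothetical):

-- Assumes a dedicated filesystem (e.g. an LVM volume or SSD) is already
-- mounted at /mnt/pg_fast and owned by the postgres OS user.
CREATE TABLESPACE pg_fast LOCATION '/mnt/pg_fast';

-- Place an individual table there ...
CREATE TABLE measurements (ts timestamptz, reading numeric) TABLESPACE pg_fast;

-- ... or make it the default for new objects in one database.
ALTER DATABASE mydb SET default_tablespace = 'pg_fast';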
Re: Connection refused (0x0000274D/10061)
On 6/13/19 9:56 AM, Sourav Majumdar wrote:
> Hi,
>
> I have tried many times to set up PostgreSQL on my local host, so I
> tried to reinstall the one-click app from
> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads -
> Windows x86-64 - PostgreSQL Version 11.3. When installing using the
> default port 5432, it gives the error "Database Cluster Installation
> Failed". Please advise how to resolve the issue. I have been struggling
> to set up PostgreSQL for more than a week. The error screenshot is
> attached.

Sounds like a permissions problem to me. See:

https://www.enterprisedb.com/docs/en/11.0/PG_Inst_Guide_v11/PostgreSQL_Installation_Guide.1.13.html#

to find the log of the install process. It will probably have more information.

--
Adrian Klaver
adrian.kla...@aklaver.com