Space Stalker in SQL Output
Why would a psql statement insert a leading space into the output, which is a single integer value? The leading space caused my job call to fail elsewhere in the same shell script as the psql call.

Here is the anonymized version of the psql call that assigns a value to a shell script variable:

  IDz=`psql -d proddb -U produser -h 10.9.999.99 -p 99900 -t < last_id.sql`

The output is simply a max(id) value, which is defined as an integer data type in the source table column. The output looked like this (notice the leading space before the integer value):

  echo "IDz =${IDz}"
  IDz =' 100'

The last_id.sql itself is simply:

  select max(id) from prodtable;

I'm using: PostgreSQL 9.5.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit

I fixed the output in the shell script with a tr command, but why should that be necessary? What is causing the space to be prepended to the integer value?

  IDz=`echo ${IDz} | tr -d ' '`
  IDz ='100'

Knowing the root cause of the space stalker would be most helpful.

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: Space Stalker in SQL Output
Wow! The -A option worked perfectly! Thanks for the syntax lesson, Steve and Jerry!

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2018-06-27 14:38, Jerry Sievers wrote:
> Susan Hurst writes:
>> Why would a psql statement insert a leading space into the output, which is
>> a single integer value? The leading space caused my job call to fail
>> elsewhere in the same shell script as the psql call. Here is the anonymized
>> version of the psql call to assign a value to a shell script variable:
>>
>>   IDz=`psql -d proddb -U produser -h 10.9.999.99 -p 99900 -t < last_id.sql`
>
> Get in the habit of including -A, which gets rid of alignment padding in psql
> output. As in...
>
>   shellvar=`psql -Atqc 'select froboz;' $db`
>
> HTH
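As a side note, a minimal sketch of the difference the flags make (connection options omitted; table names are the ones from the thread): with -t alone, psql still uses its aligned output format, whose padding is where the leading space comes from, while -A switches to unaligned output.

  # aligned, tuples-only: value arrives padded
  IDz=`psql -d proddb -t -c 'select max(id) from prodtable;'`
  echo "IDz='${IDz}'"      # IDz=' 100'

  # unaligned, tuples-only, quiet: value arrives bare
  IDz=`psql -d proddb -Atq -c 'select max(id) from prodtable;'`
  echo "IDz='${IDz}'"      # IDz='100'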
array must have even number of elements
Why must an array have an even number of elements? I need to use a trigger function on any table, some of which may have an odd number of columns that I want to cleanse before inserting/updating. Is there a workaround for this?

ERROR: array must have even number of elements
SQL state: 2202E
Context: SQL statement "SELECT ($1 #= hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ',' ')),trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ',' ')),trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*"
PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE

-- my test table
create table dm.trg_test (c character(8), vc varchar(16), t text);
insert into dm.trg_test (c,vc,t) values ('ctest',' vctest ',' ttest ');

-- code snippet that produced the error.
-- new will be substituted for $1 during execution with using clause
l_query_string := 'select ($1 #= hstore(array[' || l_column_list || '])).*';
execute format(l_query_string) using new into new;
return new;

Thanks for your help!

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
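For reference, the restriction comes from the single-array form of hstore(), which reads the array as alternating key/value pairs, so its length must be even no matter how many columns the table has. A tiny illustration with made-up values (requires the hstore extension):

  create extension if not exists hstore;

  select hstore(array['c','ctest','vc','vctest']);
  -- "c"=>"ctest", "vc"=>"vctest"

  select hstore(array['c','ctest','vc']);
  -- ERROR:  array must have even number of elements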
Re: array must have even number of elements
Thanks, everyone! I get it now. It's not just an array but an hstore array. I changed my code to include the original values, so now it works:

-- new will be substituted for $1 during execution with using clause
l_query_string := 'select ($1 #= hstore(array[' || l_orig_list || '],' || 'array[' || l_clean_list || '])).*';

Pavel... I am creating a trigger function that looks for columns with char, varchar or text data types and purges any incoming or updated data of extraneous spaces and tabs, both within the string and on either end. We can use the same function from any table that calls it from a trigger. Now that it works, we can refactor it to make it better. I would welcome your suggestions for alternatives to hstore.

Thanks for your help!

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2018-09-20 13:04, Pavel Stehule wrote:
> Hi
>
> čt 20. 9. 2018 v 19:55 odesílatel Susan Hurst napsal:
>
>> Why must an array have an even number of elements? I need to use a
>> trigger function on any table, some of which may have an odd number of
>> columns that I want to cleanse before inserting/updating.
>
> The hstore function takes its parameters as a sequence of (key, value) pairs,
> so the number must be even. An odd number of elements signals a broken format.
>
> Your example is pretty crazy - I cannot decode it. Maybe you should use a
> different function; I don't see a reason for using the hstore type there.
>
> Regards
>
> Pavel
>
>> Is there a workaround for this?
>>
>> ERROR: array must have even number of elements
>> SQL state: 2202E
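For anyone reading along, a sketch of the two-array form in isolation, and of how #= applies the result back onto a record. The table and values below are made-up examples, not the real trigger:

  create extension if not exists hstore;

  create table t (c text, vc text);
  insert into t values ('  a  ', '  b  ');

  -- keys array + values array: no even-length requirement tied to the column count
  select (r #= hstore(array['c','vc'], array[trim(r.c), trim(r.vc)])).*
    from t r;
  -- c | vc
  -- a | b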
Missing Trigger after pgdump install
What scenarios can cause a single trigger to be omitted when populating an empty database from a pgdump file?

We have nightly backups of our production database that we load into a fresh, empty database in our sandbox using the pgdump file.

psql.exe -h localhost -U mi601db -p 5432 -o C:\db_create_output.txt -L C:\db_create_log.txt -d skyfall < C:\mi601db.pg

All objects and data appear in the new database as expected, except for a single trigger named subscribers_iur_trg. The trigger exists in production and in the pgdump file. I can add it manually with no errors, but it's always missing after our automated process. Nothing useful appears in the log file. The dependent function, devops.subscribers_update(), is present and accounted for, as is the view, devops.subscribers.

CREATE TRIGGER subscribers_iur_trg
  INSTEAD OF UPDATE ON devops.subscribers
  FOR EACH ROW
  EXECUTE PROCEDURE devops.subscribers_update();

We've checked everything we can think of but we're still missing the trigger every day.

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: Missing Trigger after pgdump install
The dump command used by the DBA to create the pgdump file is:

pg_dump --clean --if-exists --create --format=plain --no-owner --no-tablespaces \
  --file=${BKUPDIR}/${TS}.${USER}.pg 2>&1 \
  | tee -a ${LOGDIR}/${TS}_biar_dump.log

No noticeable difference when -b is added, but we're not sure what this is for. What should we look for?

The log file named db_create_log.txt is where nothing useful appeared. Here is a snippet from the postgres server log that shows an error message that the view devops.subscribers does not exist; however, according to the line numbers, the view was created before the trigger.

Error from Postgres server log (postgresql-2019-08-16_140110.log):

2019-08-16 14:04:24 CDT ERROR: relation "devops.subscribers" does not exist
2019-08-16 14:04:24 CDT STATEMENT: create trigger subscribers_iur_trg instead of update on devops.subscribers for each row execute procedure devops.subscribers_update();

CREATE VIEW subscribers appears on line 11,968 in the dump file
CREATE FUNCTION subscribers_update() appears on line 2,466
CREATE TRIGGER subscribers_iur_trg appears on line 5,457,362

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-08-16 13:37, Adrian Klaver wrote:
> On 8/16/19 11:27 AM, Susan Hurst wrote:
>> What scenarios can cause a single trigger to be omitted when populating an
>> empty database from a pgdump file? We have nightly backups of our production
>> database that we load into a fresh, empty database in our sandbox using the
>> pgdump file.
>>
>>   psql.exe -h localhost -U mi601db -p 5432 -o C:\db_create_output.txt -L C:\db_create_log.txt -d skyfall < C:\mi601db.pg
>
> What is the dump command?
>
> What happens if you add -b to above?
>
> Which log file are you referring to below, the one generated above or the
> Postgres server log?
Re: Missing Trigger after pgdump install
Production version: PostgreSQL 9.5.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit

Sandbox version: "PostgreSQL 9.5.14, compiled by Visual C++ build 1800, 64-bit"

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-08-16 16:24, Adrian Klaver wrote:
> On 8/16/19 1:00 PM, Susan Hurst wrote:
>> The dump command used by the DBA to create the pgdump file is:
>>
>>   pg_dump --clean --if-exists --create --format=plain --no-owner --no-tablespaces \
>>     --file=${BKUPDIR}/${TS}.${USER}.pg 2>&1 \
>>     | tee -a ${LOGDIR}/${TS}_biar_dump.log
>>
>> No noticeable difference when -b is added, but we're not sure what this is
>> for. What should we look for?
>
> This should echo the errors below. Not sure where that actually ends up on
> Windows.
>
> What are the versions of Postgres you are using on the dump/restore ends?
>
>> Here is a snippet from the postgres server log that shows an error message
>> that the view devops.subscribers does not exist, however according to the
>> line numbers the view was created before the trigger.
>>
>>   2019-08-16 14:04:24 CDT ERROR: relation "devops.subscribers" does not exist
>>   2019-08-16 14:04:24 CDT STATEMENT: create trigger subscribers_iur_trg instead of update on devops.subscribers for each row execute procedure devops.subscribers_update();
>>
>> CREATE VIEW subscribers appears on line 11,968 in the dump file
>> CREATE FUNCTION subscribers_update() appears on line 2,466
>> CREATE TRIGGER subscribers_iur_trg appears on line 5,457,362
Re: Missing Trigger after pgdump install
We're using the 9.5.14 in the sandbox to extract data and objects from the pgdump that was created in the 9.5.0 version. Hope I answered your question correctly. If not, let me know and I'll try again.

Our biggest concern is that there may be other silent issues that we have not yet discovered.

Thanks for the info you just provided, Adrian. We'll read up on the path settings to see if we can find an answer there.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-08-16 17:29, Adrian Klaver wrote:
> On 8/16/19 3:18 PM, Susan Hurst wrote:
>> Production version: PostgreSQL 9.5.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit
>> Sandbox version: "PostgreSQL 9.5.14, compiled by Visual C++ build 1800, 64-bit"
>
> I'm going to say it has something to do with this:
>
> https://www.postgresql.org/docs/9.5/release-9-5-12.html
>
> "Avoid use of insecure search_path settings in pg_dump and other client
> programs (Noah Misch, Tom Lane)"
>
> Are you using the 9.5.0 or 9.5.14 version of pg_dump to dump from the
> production server?
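If it helps to confirm which pg_dump produced the dump and whether the search_path change is in play, a couple of quick checks (the dump path below is a placeholder):

  pg_dump --version                          # run on the production host: the client that actually makes the dump
  head -n 30 /path/to/mi601db.pg             # dumps from patched versions contain: SELECT pg_catalog.set_config('search_path', '', false);
  grep -n "subscribers_iur_trg" /path/to/mi601db.pg   # inspect the exact CREATE TRIGGER statement and what precedes it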
Re: Posible off topic ? pgmodeler
Hi Stan!

Don't know if you're interested in pgmodeler specifically or database modelers in general. At work, we use postgresql a lot and have found that DBeaver, which is open source, does a decent job of creating ERD diagrams. Personally, I don't like using DBeaver's interface for executing SQL statements, but some of its other features are nice.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-09-02 12:31, stan wrote:
> Some very helpful folks pointed me to pgmodeler recently. I realize this list
> may not be the appropriate forum for discussing this, and would welcome
> pointers to a more appropriate forum. Having said that, this discussion may be
> a bit more generic.
>
> I have seen a number of open source products (Amanda is a good example) where
> a company sort of "takes over" the project; the company's business plan is
> generally that of making money on support. Most of these companies have a
> community edition, which trails development of the product with their
> enhancements. I understand that this model is acceptable for GPL-licensed
> software.
>
> The company supporting pgmodeler seems to have a different model. It looks
> like they provide a version of the tool that requires a license key, with a
> limited life span. What I am trying to do, at the moment, is get a complete
> understanding of their business model regarding the source code for this
> project.
>
> Thanks for any input on this.
Re: Inexplicable duplicate rows with unique constraint
That's why I created a virtual_string function to squeeze out everything but alpha characters and numbers 0-9 from any varchar or text columns that I want to use as business key columns.

For example, if I have a column named job_name, I will have a companion column named v_job_name. The v_ column replicates Oracle's virtual column, since postgres doesn't have it. You don't put any values in the v_ column directly. I simply have a trigger on insert or update to put the value in the v_job_name column using the virtual_string(new.job_name) function. It's the v_job_name column that I use in my unique constraint, so that I avoid any unexpected sorting. Meanwhile, my job_name column is still human readable with whatever characters I want to see, including diacritics.

Here is my function, if you want to try it out:

create or replace function store.virtual_string(string_in text)
returns text as
$body$
declare
  l_return text;
begin
  l_return := regexp_replace(lower(unaccent(string_in)),'[^0-9a-z]','','g');
  return l_return;
end;
$body$
language plpgsql volatile security definer;

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-01-16 11:48, Tom Lane wrote:
> Richard van der Hoff writes:
>> On 16/01/2020 17:12, Magnus Hagander wrote:
>>> See https://wiki.postgresql.org/wiki/Locale_data_changes for hints on which
>>> linux distros updated when.
>> It seems like a plausible explanation, but it's worth noting that all the
>> indexed data here is (despite being in text columns) plain ascii. I'm
>> surprised that a change in collation rules would change the sorting of such
>> strings, and hence that it could lead to this problem. Am I naive?
>
> Unfortunately, strings containing punctuation do sort differently after these
> changes, even with all-ASCII data. The example given on that wiki page
> demonstrates this.
>
> RHEL6 (old glibc):
>
> $ ( echo "1-1"; echo "11" ) | LC_COLLATE=en_US.utf8 sort
> 11
> 1-1
>
> Fedora 30 (new glibc):
>
> $ ( echo "1-1"; echo "11" ) | LC_COLLATE=en_US.utf8 sort
> 1-1
> 11
>
> I concur with Daniel's suggestion that maybe "C" locale is the thing to use
> for this data.
>
> regards, tom lane
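For completeness, a rough sketch of the companion trigger that keeps the v_ column populated. The table store.jobs and its columns are placeholders (the post doesn't name the table), and virtual_string requires the unaccent extension:

  create extension if not exists unaccent;

  create or replace function store.jobs_set_virtual_string()
  returns trigger as
  $body$
  begin
    new.v_job_name := store.virtual_string(new.job_name);
    return new;
  end;
  $body$
  language plpgsql;

  create trigger jobs_set_virtual_string_trg
    before insert or update on store.jobs
    for each row execute procedure store.jobs_set_virtual_string();

  alter table store.jobs
    add constraint jobs_v_job_name_uk unique (v_job_name);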
Re: Help : Removal of leading spaces in all the columns of a table
I once wrote a trigger function to do just what you asked; however, it was a huge drain on performance, so I didn't use it for long and dropped the trigger.

Hopefully, someone has a more practical approach. I would be interested in this also.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-02-12 06:42, srikkanth wrote:
> Hi Team,
>
> How can I write the syntax to remove the leading spaces on a table for all
> the columns?
>
> Also, I want to know how to put all words of all the columns in capitals,
> along with removing leading/excessive/trailing spaces, at the same time.
>
> Can you please help me out, let me know in case of any inputs.
>
> Thanks,
>
> Srikanth B
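As one possible set-based alternative to a per-row trigger, a sketch that generates one-off UPDATE statements for every character-typed column in a schema (the schema name 'public' is a placeholder; review the generated statements before running them):

  select format(
           'update %I.%I set %I = upper(btrim(regexp_replace(%I, ''\s+'', '' '', ''g'')));',
           table_schema, table_name, column_name, column_name)
    from information_schema.columns
   where table_schema = 'public'
     and data_type in ('character varying', 'character', 'text')
   order by table_name, column_name;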
Re: PG Admin 4
I gave up on PGAdmin4 and went back to PGAdmin3, although 3 is deprecated now. The History tab will show you what you want after executing a SQL statement. I don't use Windows any more than I have to, but when I did try PGAdmin4 on Windows, the feedback was sometimes there and sometimes not. Linux works better.

That said, I agree that executing a script from a command line is the best approach for implementing DDL statements. You can capture the script and the output for further confirmation of success or failure.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-07-10 15:20, Tim Clarke wrote:
> Why would you shun the ease of command line batch control?
>
> Tim Clarke MBCS
> IT Director
> Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420
>
> On 10/07/2020 17:36, rwest wrote:
>
>> Oh sorry, should have specified that.
>>
>> We're running on a Windows platform.
>>
>> We're trying to avoid running anything command-line when doing DDL releases
>> and leverage whatever PG Admin 4 can provide us. I'm just wondering why we
>> don't see the results of each CREATE or ALTER statement as the script runs
>> in the tool. That seems very strange to me. Is there some sort of setting
>> or parameter that can force the tool to do this for us?
>>
>> --
>> Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
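For the command-line route described above, a minimal sketch of running a DDL release script so that every statement and its result are captured (database and file names are placeholders):

  psql -v ON_ERROR_STOP=1 --echo-queries \
       -d mydb -f release_2020_07.sql \
       -L release_2020_07.log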
shp2pgsql is missing
How can I acquire or download the shp2pgsql file that I need for the tiger geocoder? All of the online searches for a solution have not been helpful. Supposedly, shp2pgsql is supposed to just be there when postgis is installed, but it's not. I even tried updating postgis with:

ALTER EXTENSION postgis UPDATE;

...but the message just says it's already there:

NOTICE: version "3.0.0" of extension "postgis" is already installed.

Where can I find the elusive file so I can install it properly? Here are the specifics of my system:

Location: AWS EC2 instance
OS: FreeBSD 12.1-RELEASE-p7 FreeBSD 12.1-RELEASE-p7 GENERIC amd64
PG Location: /usr/local/lib/postgresql/
PG Version: PostgreSQL 11.7 on amd64-portbld-freebsd12.1, compiled by FreeBSD clang version 8.0.1 (tags/RELEASE_801/final 366581) (based on LLVM 8.0.1), 64-bit

locate shp2pgsql returns nothing

I understood that I should find shp2pgsql in /usr/local/lib/postgresql/, but it's not there. BTW, it's not in /usr/lib/ either, which is where most of the online sources say it should be.

I would welcome any insight into finding the missing shp2pgsql file.

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: shp2pgsql is missing
Thanks, Christoph! Looks like we'll be able to fix this now that we know what to do.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-07-26 14:58, Christoph Moench-Tegeder wrote:
> ## Susan Hurst (susan.hu...@brookhurstdata.com):
>
>> OS: FreeBSD 12.1-RELEASE-p7 FreeBSD 12.1-RELEASE-p7 GENERIC amd64
>
> There's your answer: the FreeBSD port of PostGIS 3.0 (databases/postgis30)
> installs shp2pgsql only if option LOADERGUI has been enabled on the port's
> build (the port defaults to LOADERGUI=off). You need to rebuild PostGIS or
> convince the maintainer of that port to switch the default (the usual way for
> that is the FreeBSD bugzilla system).
>
> Perhaps one could even argue that LOADERGUI should only toggle shp2pgsql-gui,
> but I haven't really looked into that.
>
> Regards,
> Christoph
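For anyone hitting the same issue, a rough sketch of rebuilding the port with that option turned on (assumes a ports tree is installed; the option name LOADERGUI comes from Christoph's message):

  cd /usr/ports/databases/postgis30
  make config            # enable the LOADERGUI option in the dialog
  make reinstall clean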
Finding description pg_description
How do I find the source of an objoid from pg_catalog.pg_description?

I comment everything in my databases and can find most of the comments in pg_catalog.pg_description, which only gives me objoid and classoid for the source of a comment. If I could find the oid sources I could make this work. I can find what I need for tables, columns, functions and a few other things, but I cannot complete loops for foreign data wrappers, schemas, triggers and foreign keys.

For example, I created a foreign_data_wrapper comment and can find it with this query:

select * from pg_catalog.pg_description where description like '%FDW%';

102432;1259;0;"Alias: FDW - Foreign data wrapper that acquires data from a remote database as specified in the column: devops.stp2_foreign_data_wrappers.remote_db_connection."

...but I have no idea where the objoid is coming from, so I can't join it to anything programmatically.

Here is the DDL for schemas that I'm trying to finish:

-- drop view devops.${DBNAME}_schemas;
create view devops.${DBNAME}_schemas
(schema_name
,object_type
,schema_description
) as
select s.schema_name
      ,'Schema'::text  -- for validation log file
      ,pd.description
  from information_schema.schemata s
       left join pg_description pd on (pd.objoid = ??.oid)  --- what do I join to?
 where s.schema_name not in ('dba','information_schema','pg_catalog','public')
;
comment on view devops.${DBNAME}_schemas is 'Alias: SCH - Master view of all ${DBNAME} schemas. Each schema has a purpose and provides a safe habitat for its business data and functionality.';

In contrast, the following view works perfectly as planned since I know how to find the oid:

-- drop view devops.${DBNAME}_functions;
create view devops.${DBNAME}_functions
(schema
,function_name
,function_arguments
,function_description
) as
select pn.nspname
      ,pp.proname
      ,pp.proargnames
      ,pd.description
  from pg_proc pp
       left join pg_description pd on (pd.objoid = pp.oid)
      ,pg_namespace pn
 where pn.oid = pp.pronamespace
   and pn.nspname not in ('dba','pg_catalog','information_schema','public')
 order by pn.nspname
         ,pp.proname
;
comment on view devops.${DBNAME}_functions is 'Alias: FN - Master view of all ${DBNAME} functions and their arguments from all ${DBNAME} schemas.';

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
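A sketch of the missing join for the schema case: schema comments are attached to pg_namespace rows, so objoid matches pg_namespace.oid and classoid is 'pg_namespace'::regclass (the obj_description() helper performs the same lookup). Triggers, foreign keys and foreign data wrappers work the same way against pg_trigger, pg_constraint and pg_foreign_data_wrapper respectively.

  select n.nspname        as schema_name
        ,'Schema'::text   as object_type
        ,pd.description   as schema_description
    from pg_namespace n
         left join pg_description pd
                on pd.objoid = n.oid
               and pd.classoid = 'pg_namespace'::regclass
   where n.nspname not in ('dba','information_schema','pg_catalog','public');

  -- or, more compactly:
  select nspname, obj_description(oid, 'pg_namespace') from pg_namespace;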
Function Speed vs UI Function Speed
Is there a significant difference in execution speed between similar functions that do the same DML but differ in the extra validations and messaging that are required for providing feedback to UI users?

For example, a batch load script that inserts multiple rows into a table may call an insert function within a cursor to populate each row into the target table. Meanwhile, a non-technical business user may want to create only one entity via a UI. However, the UI must have validations and helpful messages for the user, so the insert either works or it doesn't, but at least the user would know what they need to do differently to get to a successful outcome.

Typically, the teams that I've worked on create the bare functions that will get the job done, but later we have to come back and make new functions specifically for UIs. BTW, I'm a firm believer in having the database protect itself and not totally depend upon the application layer to protect data. Anyone with database access can bypass a UI and still corrupt data unintentionally (incomplete WHERE clause, for example).

Now I wonder if it would be prudent to create the UI-ready functions only and use those same functions for batch loads or any other process that needs to be done in bulk. I'm not sure about the performance impact of that.

Any insights or ideas?

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Foreign Data Wrapper Handler
Can anyone recommend a good online resource for learning how to set up a foreign data wrapper using a custom fdw name? It seems the trick is to use a handler to make it work, but so far the search results have been elusive for creating a fdw with a successful outcome.

I'm using 'PostgreSQL 11.8 on amd64-portbld-freebsd12.1, compiled by FreeBSD clang version 8.0.1 (tags/RELEASE_801/final 366581) (based on LLVM 8.0.1), 64-bit'

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: Foreign Data Wrapper Handler
The first pass thru https://www.postgresql.org/docs/12/fdwhandler.html does not tell me what I think I need to know, but I will digest this more thoroughly. Maybe I need to understand more of the lingo re: foreign data wrappers.

I do understand that all fdw names must be unique within a database, so if I want to connect to more than one foreign db, I need a different name for each connection. I cannot name each fdw postgres_fdw. I would like to name the fdws something like: dbname_to_foreigndbname. For example, here are 2 possible fdws:

create foreign data wrapper stp_to_geo;
create foreign data wrapper stp_to_metrics;

That syntax creates the fdw and I can create user mappings, but I cannot import any foreign schemas into my database. The error message says that there is no handler for the fdw. That's where I'm stuck. BTW, I did try using postgres_fdw as a handler...

create foreign data wrapper stp_to_geo handler postgres_fdw;

...but then I got these errors:

ERROR: function postgres_fdw() does not exist
ERROR: foreign-data wrapper "stp_to_geo" does not exist

Looks like I need to study a bit more.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-11-07 14:51, Adrian Klaver wrote:
> On 11/7/20 12:44 PM, Susan Hurst wrote:
>> Can anyone recommend a good online resource for learning how to set up a
>> foreign data wrapper using a custom fdw name? It seems the trick is to use a
>> handler to make it work but so far the search results have been elusive for
>> creating a fdw with a successful outcome.
>
> Have you gone through this?:
>
> https://www.postgresql.org/docs/12/fdwhandler.html
>
> Also can we get a definition of 'custom fdw name'?
Re: Foreign Data Wrapper Handler
Thanks for the detailed instructions, Laurenz!

"The foreign server encapsulates the connection string to access a remote PostgreSQL database. Define one per remote database you want to access."

Where do I define "one per remote database"? ...in pg_hba.conf?

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-11-09 11:12, Laurenz Albe wrote:
> This is how you would create a new foreign data wrapper object for PostgreSQL:
>
>   CREATE FOREIGN DATA WRAPPER myfdw
>     HANDLER public.postgres_fdw_handler
>     VALIDATOR public.postgres_fdw_validator;
>
> This assumes that you installed the extension "postgres_fdw" in schema
> "public".
>
> But you normally don't have to create a new foreign data wrapper: the one
> named "postgres_fdw" that is created by the extension is good enough. The
> only reason would be to have a foreign data wrapper with non-default options,
> but since there are no options for "postgres_fdw", that is moot. So don't do
> that.
>
> The hierarchy of objects is as follows:
>
> - The foreign data wrapper encapsulates the code required to access the
>   foreign data source. You need only one per database; no need to create a
>   new one.
>
> - The foreign server encapsulates the connection string to access a remote
>   PostgreSQL database. Define one per remote database you want to access.
>
> - The user mapping encapsulates the credentials for a user to access a
>   foreign server. Create one per user and foreign server (or a single one for
>   PUBLIC = everybody).
>
> - The foreign table describes how a remote table is mapped locally. Define
>   one per table that interests you.
>
> Yours,
> Laurenz Albe
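On the "where" question: the foreign server is defined with CREATE SERVER inside the local database, not in pg_hba.conf. A minimal sketch of the whole chain; the server name, host and schema names below are placeholders, not taken from the thread:

  create extension if not exists postgres_fdw;

  create server stp_to_geo_server
    foreign data wrapper postgres_fdw
    options (host 'geo.example.com', port '5432', dbname 'geo');

  create user mapping for current_user
    server stp_to_geo_server
    options (user 'geo_user', password '********');

  create schema if not exists geo_remote;
  import foreign schema public
    from server stp_to_geo_server into geo_remote;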
copy command - something not found
I am trying to use the copy command from a csv file using a UNIX shell script, but something is 'not found'... I can't figure out what is 'not found'. Below is my command from the shell script, the executed command, the content of the csv file and the output from trying to execute the command.

Clearly, the file is being read, but I can't figure out what is not found. BTW, the column names in the stg.bar_active table match the names and order in the first row of the csv file. What should I be looking for?

Thanks for your help!

Sue

##-- shell script command
psql -d ${DBNAME} -U ${DBNAME} -h ${HOSTNAME} -c < ${CSVPATH}copycmd.z

##-- executed command
"copy stg.bar_active from '/home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv' delimiter ',' CSV HEADER;"

##-- content of .csv file
schema_name,table_name,table_alias    ...(this is the header row)
chief,source_systems,ssys
chief,lookup_categories,lcat
chief,lookup_data,ldat

##-- output from terminal window
/home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: schema_name,table_name,table_alias: not found
/home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,source_systems,ssys: not found
/home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,lookup_categories,lcat: not found
/home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,lookup_data,ldat: not found

##-- select version();
PostgreSQL 11.9 on amd64-portbld-freebsd12.1, compiled by FreeBSD clang version 8.0.1 (tags/RELEASE_801/final 366581) (based on LLVM 8.0.1), 64-bit

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: copy command - something not found
Thank you for the reminders, John. I changed my .z file extension to .tmp and removed the trailing / from my pre-defined directory path. I'm still getting the same results as before, though, after changing the .z file extension to .tmp.

The ${DBNAME} and ${HOSTNAME} variables are input at run time. Here is the usage format:

### Usage: copy_data.sh rfc_name db_name [db_host]
Note: [db_host] (hostname) is optional with default localhost, which is what I'm using in this particular case.

My actual input at execution time was this:

./install_db.sh RFC-1 stp

This script calls the copy_data.sh script.

${CSVPATH} is: /home/dbzone/stp/rfc_db/RFC-1 (after removing the trailing /)

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-12-29 13:20, John McKown wrote:
> Can't really tell. You might want to post the output of the "printenv"
> command to show us what the shell variables you are using have in them. Oh,
> does ${CSVPATH} end in a slash? If it is something like "~/mycsvs" then
> ${CSVPATH}copycmd.z will expand to "~/mycsvscopycmd.z". Most "PATH"
> environment variables don't end in a /, perhaps you need
> "${CSVPATH}/copycmd.z"? Also, as an aside, most UNIX files which end in .z
> are compressed, IIRC. Make sure the contents of the file are plain text.
>
> On Tue, Dec 29, 2020 at 1:12 PM Susan Hurst wrote:
>
>> I am trying to use the copy command from a csv file using a UNIX shell
>> script but something is 'not found'... I can't figure out what is 'not
>> found'.
Re: copy command - something not found
Actually, the -c was in an example of a copy command that I found while working at my last job. I tried executing the command without the -c and got the same results as before, so I suppose I really don't know what it means. Can you enlighten me?

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-12-29 13:23, David G. Johnston wrote:
> On Tuesday, December 29, 2020, Susan Hurst wrote:
>
>> ##-- shell script command
>> psql -c < ${CSVPATH}copycmd.z
>
> Given the meaning of "-c" what are you expecting that to do?
>
> David J.
Re: copy command - something not found
Tom... I think you are right about feeding the contents of the csv file to the shell instead of psql. After drilling down a bit more into my script, I now think I have a UNIX shell script problem rather than a psql problem.

I do appreciate everyone's input, as it has been most helpful in my efforts to figure out what is not the problem.

Thanks for your help!

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2020-12-29 13:55, Tom Lane wrote:
> Susan Hurst writes:
>> ##-- output from terminal window
>> /home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: schema_name,table_name,table_alias: not found
>> /home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,source_systems,ssys: not found
>> /home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,lookup_categories,lcat: not found
>> /home/dbzone/stp/rfc_db/RFC-1/RFC-1_copy_bar_active.csv: chief,lookup_data,ldat: not found
>
> I'd say you're feeding the contents of the csv file to the Unix shell, not to
> psql. It's not real clear how that would happen given what you show as your
> initial shell command, but maybe ${CSVPATH} contains something odd?
>
> Also, as David noted, "-c" without any argument certainly isn't right.
>
> regards, tom lane
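For what it's worth, a sketch of an invocation that avoids both issues raised here: -c expects the SQL text as its argument, so either drop it and redirect the command file into psql, or name the file with -f (variable names are the ones from the script, with the renamed .tmp extension):

  # read the copy command file on stdin (no -c)
  psql -d "${DBNAME}" -U "${DBNAME}" -h "${HOSTNAME}" < "${CSVPATH}/copycmd.tmp"

  # or, equivalently, name the file explicitly
  psql -d "${DBNAME}" -U "${DBNAME}" -h "${HOSTNAME}" -f "${CSVPATH}/copycmd.tmp"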
FDW to postGIS Database
Let's say that I have a database named stp. I also have another database named geo that is a PostGIS database for storing geopolitical and geospatial data.

I want to set up a foreign data wrapper in stp to connect to geo, so that for each address in stp, I only have to store the primary key value of the lowest-level geo unit, such as a city. I can always display the city's county, state, country, whatever in views in stp. I plan to have other databases that need geo data, so I want a single source of truth for geopolitical and geospatial data.

My questions are:

1. Do I have to install PostGIS in stp? ...or is it sufficient to install PostGIS only in geo?

2. Do I need the postgis extension in stp? (I'm thinking yes, but I don't want to guess.)

3. I want to geocode addresses in stp. Do I need to install any geocoding software in stp? ...or can it be used from geo?

Thanks for your help!

Sue

-- Susan E Hurst, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: FDW to postGIS Database
Thank you, Brent! You told me what I needed to know.

--- Susan E Hurst, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2022-03-06 19:35, Brent Wood wrote:
> In line below...
>
> On Monday, March 7, 2022, 05:36:34 AM GMT+13, Susan Hurst wrote:
>
>> Let's say that I have a database named stp. I also have another database
>> named geo that is a PostGIS database for storing geopolitical and geospatial
>> data. I want to set up a foreign data wrapper in stp to connect to geo, so
>> that for each address in stp, I only have to store the primary key value of
>> the lowest-level geo unit, such as a city.
>>
>> My questions are:
>>
>> 1. Do I have to install PostGIS in stp? ...or is it sufficient to install
>> PostGIS only in geo?
>
> If you want to use Postgis functions & datatypes in stp you will need Postgis
> there.
>
>> 2. Do I need the postgis extension in stp? (I'm thinking yes, but I don't
>> want to guess.)
>
> Isn't this the same as (1) above? (Postgis is installed via "create extension
> postgis;")
>
>> 3. I want to geocode addresses in stp. Do I need to install any geocoding
>> software in stp? ...or can it be used from geo?
>
> If you can geocode in geo & access the resulting data via fdw in stp, you
> should not need geocoding tools in stp. If you need to access spatial data
> from geo in stp & geocode in stp, you will need the geocoding tools in stp.
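Putting Brent's answers together, the stp side would look roughly like this. The foreign server "geoserver", the schema geo_remote, and the table/column names are placeholders; it assumes postgres_fdw is used to reach geo:

  -- in stp: PostGIS is needed locally if any foreign tables expose geometry/geography columns
  create extension if not exists postgis;

  create schema if not exists geo_remote;
  import foreign schema public from server geoserver into geo_remote;

  -- stp stores only the city's primary key; everything else is looked up through geo
  select a.address_id, c.city_name
    from stp.addresses a
    join geo_remote.cities c on c.city_id = a.geo_city_id;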
Displaying Comments in Views
What is the trick for displaying column comments in views? The query below works as expected when the table_schema includes tables; however, it shows nothing when the table_schema contains only views.

I tried putting the query into an inline statement as a column selection in a wrapper query... I got all the table/column data but the comment column values were all null. There must be a way to display comments if I can display the table/column definitions, especially since the query joins directly to information_schema columns.

What am I missing?

Thanks for your help!

Sue

select c.table_schema
      ,c.table_name
      ,c.column_name
      ,pd.description
  from pg_catalog.pg_statio_all_tables st
      ,pg_catalog.pg_description pd
      ,information_schema.columns c
 where pd.objoid = st.relid
   and pd.objsubid = c.ordinal_position
   and c.table_schema = st.schemaname
   and c.table_name = st.relname
   and c.table_schema = 'devops'
 order by c.table_schema
         ,c.table_name
         ,c.column_name
;

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: Displaying Comments in Views
Thx for the great info. I appreciate your pointing me in the right direction.

Sue

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-01-28 09:27, Tom Lane wrote:
> Susan Hurst writes:
>> What is the trick for displaying column comments in views? The query below
>> works as expected when the table_schema includes tables, however it shows
>> nothing when the table_schema contains only views.
>
> No surprise, since you're using pg_statio_all_tables as the source of tables,
> and that contains, well, only tables. I'm not quite sure why you'd choose
> that view anyway. Personally I'd have gone directly to pg_class, and then
> probably filtered on relkind if there were things I didn't want to see. Or
> you could use information_schema.tables.
>
> Also, I'm too lazy to check on how information_schema.columns defines
> "ordinal_position", but I wonder if it tries to leave out dropped columns, or
> might do so in future. That puts this join condition at risk: "pd.objsubid =
> c.ordinal_position". You'd likely be better off to join pg_class and
> pg_attribute to pg_description, rather than working with proxies for them.
>
> https://www.postgresql.org/docs/current/catalogs.html
>
> regards, tom lane
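Following Tom's suggestion, a sketch of the same report built directly on pg_class and pg_attribute, which covers tables, views and materialized views alike (the schema name 'devops' comes from the original query):

  select n.nspname   as table_schema
        ,c.relname   as table_name
        ,a.attname   as column_name
        ,d.description
    from pg_class c
         join pg_namespace n on n.oid = c.relnamespace
         join pg_attribute a on a.attrelid = c.oid
                            and a.attnum > 0
                            and not a.attisdropped
         left join pg_description d on d.objoid = c.oid
                                   and d.objsubid = a.attnum
   where c.relkind in ('r','v','m')
     and n.nspname = 'devops'
   order by 1, 2, 3;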
Display View Columns and Their Source Tables and Columns
I'm so close, but I can't quite figure out how to match view columns to their source columns in a query. Looks like I might need yet another table to join that makes that match, but I'm not having any success finding such a bridge. Matching views to their source tables works well enough.

What am I missing? Is there a better approach? I would welcome any comments or leads that you have.

Thanks for your help!

Sue

Here is what I have so far:

select vcu.view_name     view_name
      ,c.column_name     view_column
      ,vcu.table_schema  source_schema
      ,vcu.table_name    source_table
      ,vcu.column_name   source_column
      ,c.is_updatable    is_updatable
  from information_schema.view_column_usage vcu
      ,information_schema.columns c
 where vcu.view_schema = 'devops'
   and vcu.table_schema in ('devops','chief','store')
   and vcu.view_schema = c.table_schema
   and vcu.view_name = c.table_name
   and Help! *
 order by vcu.view_name
         ,vcu.table_name
         ,c.column_name
;

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
TAbles/Columns missing in information schema
Why would schemas/tables/columns be missing from information_schema.tables/information_schema.columns?

We recently promoted some new tables to production, but only 2 of the 4 schemas appeared in information_schema.tables/information_schema.columns. The schemas/tables/columns do exist as expected, but they are not represented in information_schema.tables/information_schema.columns.

Why is that? Where can I find the missing info for use in a view that displays the tables and columns in a database?

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: TAbles/Columns missing in information schema
Oh my! I'll check on that... thanks, David!

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-05-10 17:51, David G. Johnston wrote:
> On Fri, May 10, 2019 at 3:46 PM Susan Hurst wrote:
>
>> Why would schemas/tables/columns be missing from
>> information_schema.tables/information_schema.columns?
>
> The user you are using to check information_schema doesn't have permissions
> on the objects in question?
>
> David J.
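If permissions do turn out to be the cause, the usual fix is to give the inspecting role at least USAGE on the schema and SELECT on its tables, since information_schema.tables/columns only list objects the current role has some privilege on. A sketch with placeholder schema and role names:

  grant usage on schema newschema to reporting_role;
  grant select on all tables in schema newschema to reporting_role;

  -- and for tables created later in that schema:
  alter default privileges in schema newschema grant select on tables to reporting_role;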
Permissions for information_schema
What are the correct permissions to give to a role so that all objects in the information_schema (and pg_catalog) are visible to a user?

Permissions seem to make a difference, but I don't know which adjustments to make without causing unintended consequences. We revoked select on all tables and functions from public, if that makes a difference. We don't use the public schema, but it appears that postgres does.

Should I be looking at something other than permissions to make information_schema more visible? We are particularly interested in using the comments on everything to create views of our database structures that we can use for our team's training documentation. Of course, the comments/descriptions can't be selected in isolation, so we need full visibility.

Below are samples of select statements with outputs that disagree based upon the database and, presumably, the permissions.

Thanks for your help!

Sue

Production db logged in as admin:

CREATE ROLE admin LOGIN NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT read TO admin;
GRANT write TO admin;

select * from information_schema.table_constraints;        -- 206 rows
select * from information_schema.constraint_column_usage;  -- 0 rows

Sandbox db logged in as postgres:

CREATE ROLE postgres LOGIN ENCRYPTED PASSWORD '' SUPERUSER INHERIT CREATEDB CREATEROLE REPLICATION;

select * from information_schema.table_constraints;        -- 621 rows
select * from information_schema.constraint_column_usage;  -- 127 rows

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
Re: Permissions for information_schema
The objects are granted SELECT to PUBLIC.

--- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261

On 2019-05-16 11:53, David G. Johnston wrote:
> On Thu, May 16, 2019 at 9:50 AM Susan Hurst wrote:
>
>> What are the correct permissions to give to a role so that all objects in
>> the information_schema (and pg_catalog) are visible to a user?
>
> Have you determined that using the underlying pg_catalog schema is not
> viable?
>
> David J.
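Following David's pointer toward pg_catalog: the 0-row result for constraint_column_usage is expected for a non-owner, since that information_schema view only lists columns of tables owned by a currently enabled role. A sketch of an equivalent query straight from the catalogs, which is not restricted that way:

  select n.nspname    as table_schema
        ,rel.relname  as table_name
        ,con.conname  as constraint_name
        ,con.contype  as constraint_type   -- p = primary key, f = foreign key, u = unique, c = check
        ,pg_get_constraintdef(con.oid) as definition
    from pg_constraint con
         join pg_class rel   on rel.oid = con.conrelid
         join pg_namespace n on n.oid = rel.relnamespace
   where n.nspname not in ('pg_catalog','information_schema')
   order by 1, 2, 3;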
User Connecting to Remote Database
I would welcome your comments and suggestions for connecting a user (not a superuser) to a foreign server.

I have a database, named geo, in which I have geospatial and geopolitical data. I want to be able to select data from geo from other databases. The database that I want to connect up to geo is named stp. I have a foreign data wrapper in stp that defines geo as the data source for the foreign server named geoserver.

User stp is defined in both geo and stp as a superuser, so I am able to select geo data just fine from stp. However, when I try to select geo data as user geo_user, I get this error:

ERROR: permission denied for relation geoadm_l0
SQL state: 42501

What am I missing? Here are the relevant grants etc. that I set up in both geo and stp.

-- user and user mapping in stp database
create user geo_user with login nosuperuser inherit nocreatedb nocreaterole noreplication password '**';
CREATE USER MAPPING FOR geo_user SERVER geoserver OPTIONS (password '**', "user" 'geo_user');
grant usage on foreign data wrapper postgres_fdw to geo_user;
GRANT USAGE ON FOREIGN SERVER geoserver TO geo_user;

-- user in geo database
create user geo_user with login nosuperuser inherit nocreatedb nocreaterole noreplication password '**';

-- grants in geo database
GRANT ALL ON TABLE public.geoadm_l0 TO susan;
GRANT SELECT ON TABLE public.geoadm_l0 TO geo_user;
GRANT SELECT ON TABLE public.geoadm_l0 TO read;
GRANT ALL ON TABLE public.geoadm_l0 TO geo;
GRANT INSERT, UPDATE, DELETE ON TABLE public.geoadm_l0 TO write;

Thanks for your help!

Sue

-- Susan E Hurst, Principal Consultant, Brookhurst Data LLC, Email: susan.hu...@brookhurstdata.com, Mobile: 314-486-3261
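One thing the listing above does not show is any grant on the local foreign table in stp: privileges are checked on both sides of a postgres_fdw connection, so a non-superuser typically needs grants like the following in addition to the remote ones. Whether this is the actual cause of the error here is an assumption; the schema and table names are taken from the post:

  -- in stp: rights on the schema holding the foreign tables and on the foreign table itself
  GRANT USAGE ON SCHEMA public TO geo_user;
  GRANT SELECT ON public.geoadm_l0 TO geo_user;   -- the local foreign table, not the remote table

  -- in geo: the remote role also needs USAGE on the schema, not just SELECT on the table
  GRANT USAGE ON SCHEMA public TO geo_user;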