Question about replication
Hi all,

I recently hit a problem with synchronous replication in Postgres. When I checked the state of the replication, I got this:

postgres=# select * from pg_stat_replication ;
 pid  | usesysid | usename  |  application_name   |  client_addr  |               client_hostname               | client_port |         backend_start         | backend_xmin |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
------+----------+----------+---------------------+---------------+---------------------------------------------+-------------+-------------------------------+--------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
 4674 |       10 | postgres | 10.87.107.182:20133 | 10.87.107.182 | ip-10-87-107-182.us-west-2.compute.internal |       42408 | 2018-08-20 23:51:22.742259+00 |              | streaming | 0/1C83A00     | 0/1C83A00      | 0/1C83B50      | 0/1C83B50       |             1 | sync
(1 row)

As you can see, the flush_location is ahead of the write_location. I am wondering how this could happen?

Thanks,
Simon
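P.S. As a sanity check of the gap, the two positions can be subtracted directly as pg_lsn values (supported since 9.4, if I recall correctly; the literals below are copied from the output above):

    -- 0x1C83B50 - 0x1C83A00 = 0x150 = 336 bytes
    SELECT '0/1C83B50'::pg_lsn - '0/1C83A00'::pg_lsn AS flush_minus_write;

So flush_location really is 336 bytes past write_location.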
How to check whether table is busy or free before running the ALTER or creating TRIGGER on that table
Hi All,

We have thousands of tables, and a few of them are busy at times. If I execute an ALTER statement or create a trigger on those tables, I am unable to do so. How can I check whether a table is busy or free before running ALTER/DDL or creating a TRIGGER on it in a PostgreSQL database?

--
Regards,
Raghavendra Rao J S V
Re: How to check whether table is busy or free before running the ALTER or creating TRIGGER on that table
On 08/21/2018 06:25 AM, Raghavendra Rao J S V wrote:
> Hi All,
>
> We have thousands of tables, and a few of them are busy at times. If I
> execute an ALTER statement or create a trigger on those tables, I am
> unable to do so. How can I check whether a table is busy or free before
> running ALTER/DDL or creating a TRIGGER on it in a PostgreSQL database?
>
> --
> Regards,
> Raghavendra Rao J S V

Take a look at:
https://www.postgresql.org/docs/10/static/view-pg-locks.html

--
Adrian Klaver
adrian.kla...@aklaver.com
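P.S. A rough sketch of how that view can be used (the table and column names here are made up):

    -- list sessions currently holding locks on a given table
    SELECT l.pid, l.mode, l.granted, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE l.relation = 'my_busy_table'::regclass;

    -- a check-then-alter sequence is inherently racy; another option is to
    -- cap how long the DDL may wait for its lock, and retry on timeout
    SET lock_timeout = '2s';
    ALTER TABLE my_busy_table ADD COLUMN note text;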
COPY FROM - to avoid WAL generation
In a recent thread of mine I learned something very interesting. If a table is created and data is loaded via COPY FROM within the same transaction, PG is smart enough not to generate WAL, because all it needs to do is track the status of the transaction and let the data load go to the new data file created for the table. If the transaction commits, the table is released for other sessions; if it rolls back, vacuum will delete the data file later on.

I tested it as follows for a table with 50 million rows. No indexes.

Case 1
- create the table first.
- in a separate transaction, load the 50 million rows.
Took 3 min 22 seconds

Case 2
- start transaction
- create table
- load 50 million rows
- commit transaction
Took: 3 min 16 seconds.

Am I missing anything?
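For concreteness, case 2 was essentially this pattern (the table and file names here are made up):

    BEGIN;
    CREATE TABLE load_target (id int, payload text);
    COPY load_target FROM '/tmp/load_target.csv' WITH (FORMAT csv);
    COMMIT;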
Re: COPY FROM - to avoid WAL generation
On 8/21/18 9:00 AM, Ravi Krishna wrote:
> In a recent thread of mine I learned something very interesting. If a
> table is created and data is loaded via COPY FROM within the same
> transaction, PG is smart enough not to generate WAL, because all it
> needs to do is track the status of the transaction and let the data
> load go to the new data file created for the table. If the transaction
> commits, the table is released for other sessions; if it rolls back,
> vacuum will delete the data file later on.
>
> I tested it as follows for a table with 50 million rows. No indexes.
>
> Case 1
> - create the table first.
> - in a separate transaction, load the 50 million rows.
> Took 3 min 22 seconds
>
> Case 2
> - start transaction
> - create table
> - load 50 million rows
> - commit transaction
> Took: 3 min 16 seconds.
>
> Am I missing anything?

Have you looked into pg_bulkload?
https://github.com/ossc-db/pg_bulkload

Docs are here:
http://ossc-db.github.io/pg_bulkload/index.html

Jeff
Re: COPY FROM - to avoid WAL generation
Hi,

On 2018-08-21 15:00:03 +0000, Ravi Krishna wrote:
> In a recent thread of mine I learned something very interesting. If a
> table is created and data is loaded via COPY FROM within the same
> transaction, PG is smart enough not to generate WAL, because all it
> needs to do is track the status of the transaction and let the data
> load go to the new data file created for the table. If the transaction
> commits, the table is released for other sessions; if it rolls back,
> vacuum will delete the data file later on.
>
> I tested it as follows for a table with 50 million rows. No indexes.

Please note this is only the case if wal_level = minimal. If replication
(or PITR) is to be supported, that mode can't be used, because the data
has to go into the WAL.

Were you using wal_level = minimal?

(FWIW, it's not VACUUM that would unlink the data in case of failure, but
that doesn't really matter much.)

Greetings,

Andres Freund
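P.S. For reference, the current setting can be checked with:

    SHOW wal_level;  -- needs to report 'minimal' for the CREATE-then-COPY
                     -- load to skip WAL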
Re: COPY FROM - to avoid WAL generation
> Please note this is only the case if wal_level = minimal. If replication
> (or PITR) is to be supported, that mode can't be used, because the data
> has to go into the WAL.
>
> Were you using wal_level = minimal?

Aha. No, it was not minimal. For a second I thought PG was super smart. Oh well. Thanks.
Re: Multiple COPY on the same table
On Mon, 20 Aug 2018 at 16:23, Adrian Klaver wrote:
>
> On 08/20/2018 08:56 AM, Nicolas Paris wrote:
> >> Can I split a large file into multiple files and then run copy using
> >> each file.
> >
> > AFAIK, the copy command locks the table[1], while there is no mention
> > of this in the documentation[2].
>
> [1] Is from Postgres 7.1 (17 years ago). I suspect the conditions have
> changed at least a little :).

oxrsdb-generated@localhost-> create temp table foo ();
CREATE TABLE
oxrsdb-generated@localhost-> begin;
BEGIN
oxrsdb-generated@localhost-> * \copy foo from '/dev/null';
COPY 0
oxrsdb-generated@localhost-> * select oid, relname from pg_class where relname = 'foo';
  oid   | relname
--------+---------
 350686 | foo
(1 row)

oxrsdb-generated@localhost-> * select * from pg_locks ;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |       mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+------------------+---------+----------
 relation   |   345742 |    11713 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     3455 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     2663 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     2662 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     2685 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     2684 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     2615 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |     1259 |      |       |            |               |         |       |          | 2/200573           | 16754 | AccessShareLock  | t       | t
 relation   |   345742 |   350686 |      |       |            |               |         |       |          | 2/200573           | 16754 | RowExclusiveLock | t       | t
 virtualxid |          |          |      |       | 2/200573   |               |         |       |          | 2/200573           | 16754 | ExclusiveLock    | t       | t
(10 rows)

Table 'foo' has a RowExclusiveLock taken out as a consequence of running COPY against it, but that does not prevent other connections from concurrently writing to the table. Not all locks block other locks...

--
When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
Re: Linker errors while creating a PostgreSQL C extension function.
Hello,

Do you or anyone else know why it is trying to link with the -L/usr/lib64 path and not -L/usr/local/lib as provided?

After recompiling libseal.a with the -fPIC flag and copying it manually from /usr/local/lib/ to /usr/lib64/, I get these errors:

g++ -Wl,--no-undefined -shared -o seal_diff_cpp.so seal_diff_cpp.o -L/usr/pgsql-10/lib -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-10/lib',--enable-new-dtags -L/usr/local/lib -lseal -pthread
seal_diff_cpp.o: In function `seal_diff_cpp':
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:106: undefined reference to `pg_detoast_datum_packed'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:107: undefined reference to `pg_detoast_datum_packed'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:108: undefined reference to `pg_detoast_datum_packed'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:112: undefined reference to `text_to_cstring'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:113: undefined reference to `text_to_cstring'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:114: undefined reference to `text_to_cstring'
/etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:130: undefined reference to `cstring_to_text_with_len'

Did I miss something in one of my files?

Best regards,
Tal

--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
SPI_cursor_fetch Memory Issue
Hi all,

My goal is to get data from a Postgres table into a C function for processing. Since the table can be very large, I only want to process one chunk of data at a time to keep memory use stable. To achieve this, I use an SPI cursor (https://www.postgresql.org/docs/current/static/spi-spi-cursor-fetch.html) to fetch row chunks from the table. The following is the pseudocode of my program:

/* Prepare the cursor. */
SPI_connect();
SPIPlanPtr SPIplan = SPI_prepare_cursor(command, 0, NULL, 0);
Portal cursor = SPI_cursor_open(NULL, SPIplan, NULL, NULL, true);

/* Fetch 500 rows at a time from the cursor. */
SPI_cursor_fetch(cursor, true, 500);
uint64 row_returned = SPI_processed;    /* SPI_processed is uint64 as of 9.6 */

while (row_returned != 0)
{
    SPITupleTable *tuptable = SPI_tuptable;

    /* ... process the data: write rows to a local file ... */

    SPI_freetuptable(tuptable);

    /* Fetch the next 500 rows. */
    SPI_cursor_fetch(cursor, true, 500);
    row_returned = SPI_processed;
}

SPI_cursor_close(cursor);
SPI_finish();

From my understanding, the cursor points at the entire result set on the heap. After fetching 500 rows, SPI_tuptable holds the information for that row set, and memory grows while reading from it. After processing each chunk, I free it by calling SPI_freetuptable() before the next fetch.

I expected memory use to stay constant throughout the program; however, it kept increasing while the program ran. Can anyone tell me why this is happening?

Thanks in advance!

Best,
Ivy
Re: Linker errors while creating a PostgreSQL C extension function.
TalGloz wrote:
> Do you or anyone else know why it is trying to link with the -L/usr/lib64
> path and not -L/usr/local/lib as provided?
>
> After recompiling libseal.a with the -fPIC flag and copying it manually
> from /usr/local/lib/ to /usr/lib64/, I get these errors:
>
> g++ -Wl,--no-undefined -shared -o seal_diff_cpp.so seal_diff_cpp.o
> -L/usr/pgsql-10/lib -L/usr/lib64 -Wl,--as-needed
> -Wl,-rpath,'/usr/pgsql-10/lib',--enable-new-dtags -L/usr/local/lib -lseal
> -pthread
> seal_diff_cpp.o: In function `seal_diff_cpp':
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:106:
> undefined reference to `pg_detoast_datum_packed'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:107:
> undefined reference to `pg_detoast_datum_packed'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:108:
> undefined reference to `pg_detoast_datum_packed'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:112:
> undefined reference to `text_to_cstring'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:113:
> undefined reference to `text_to_cstring'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:114:
> undefined reference to `text_to_cstring'
> /etc/opt/pgsql_c_functions/CPP/seal_extension/seal_diff_cpp.cpp:130:
> undefined reference to `cstring_to_text_with_len'
>
> Did I miss something in one of my files?

I think the --no-undefined is wrong. Any reference to PostgreSQL functions
is undefined at build time and gets resolved when the shared library is
loaded into PostgreSQL.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
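P.S. That would make the link line (the command quoted above, just without -Wl,--no-undefined; untested sketch):

    g++ -shared -o seal_diff_cpp.so seal_diff_cpp.o \
        -L/usr/pgsql-10/lib -L/usr/lib64 -Wl,--as-needed \
        -Wl,-rpath,'/usr/pgsql-10/lib',--enable-new-dtags \
        -L/usr/local/lib -lseal -pthread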