Table partitioning with sequence field in postgresql12
Hi, Is partitioning of a table with a sequence id as one of its fields supported in PostgreSQL 12? Regards, Seenu.
Re: Table partitioning with sequence field in postgresql12
Hi, I have a parent table with one of its fields defined as "gid" int4 DEFAULT nextval('"ami_smart_new".aoi_boundary_gid_seq'::regclass). I create child tables which inherit the parent and use hash partitioning. When I insert directly into the child tables, will there be any race condition causing two child tables to get the same sequence value for gid?

Regards, Seenu.

On Thu, Jun 18, 2020 at 12:09 PM amul sul wrote:
> On Thu, Jun 18, 2020 at 12:00 PM Srinivasa T N wrote:
> >
> > Hi, Is partitioning of a table with a sequence id as one of its fields supported in PostgreSQL 12?
>
> Could you please elaborate on your case a bit more?
>
> Regards,
> Amul
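[Editor's note] A minimal sketch of the situation described above, with the schema simplified from the poster's DDL (table, sequence and column names here are illustrative, not the poster's actual definitions). It shows why no race condition is possible: all partitions draw defaults from the same sequence, and nextval() hands out each value exactly once, even across concurrent sessions:

```sql
CREATE SEQUENCE aoi_boundary_gid_seq;

CREATE TABLE aoi_boundary (
    gid  int4 DEFAULT nextval('aoi_boundary_gid_seq'::regclass),
    code text
) PARTITION BY HASH (code);

CREATE TABLE aoi_boundary_p0 PARTITION OF aoi_boundary
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE aoi_boundary_p1 PARTITION OF aoi_boundary
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- Rows routed to different partitions still draw from the one sequence;
-- nextval() is atomic, so duplicate gids cannot occur.
INSERT INTO aoi_boundary (code) VALUES ('a'), ('b'), ('c');
SELECT gid, tableoid::regclass AS partition FROM aoi_boundary;
```

The same guarantee holds for direct inserts into the children, as long as they keep the inherited DEFAULT rather than overriding gid explicitly.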
Re: Table partitioning with sequence field in postgresql12
I am using declarative partitioning on some other field of the same table, in which gid also exists.

Regards, Seenu.

On Thu, Jun 18, 2020 at 12:31 PM amul sul wrote:
> On Thu, Jun 18, 2020 at 12:22 PM Srinivasa T N wrote:
> >
> > Hi, I have a parent table with one of its fields defined as "gid" int4 DEFAULT nextval('"ami_smart_new".aoi_boundary_gid_seq'::regclass).
> >
> > I create child tables which inherit the parent and use hash partitioning. When I directly insert into the child tables, will there be any race condition causing two child tables to get the same sequence value for gid?
>
> If "gid" is the partitioning key and assuming you are using declarative partitioning[1], then it won't be possible to have the same "gid" value in two child partitions.
>
> 1] https://www.postgresql.org/docs/current/ddl-partitioning.html
>
> regards,
> Amul
HASH partitioning not working properly
Hi, After seeing the below, I feel partitioning is not working properly, or it may be the case that my understanding is wrong. Can somebody explain to me what is happening?

postgres=# create table busbar_version (objectid int, ver_id int) partition by hash(ver_id);
CREATE TABLE
postgres=# CREATE TABLE busbar_version0 PARTITION OF busbar_version FOR VALUES WITH (MODULUS 10, REMAINDER 0);
CREATE TABLE
postgres=# CREATE TABLE busbar_version5 PARTITION OF busbar_version FOR VALUES WITH (MODULUS 10, REMAINDER 5);
CREATE TABLE
postgres=# CREATE TABLE busbar_version6 PARTITION OF busbar_version FOR VALUES WITH (MODULUS 10, REMAINDER 6);
CREATE TABLE
postgres=# CREATE TABLE busbar_version7 PARTITION OF busbar_version FOR VALUES WITH (MODULUS 10, REMAINDER 7);
CREATE TABLE

I did the inserts using the following:

postgres=# insert into busbar_version(objectid,ver_id) values (5,5);
INSERT 0 1
postgres=# insert into busbar_version(objectid,ver_id) values (6,6);
INSERT 0 1

I was of the opinion that the above rows were inserted into busbar_version5 and busbar_version6, but I'm wrong.

postgres=# select * from busbar_version;
 objectid | ver_id
----------+--------
        5 |      5
        6 |      6
(2 rows)

postgres=# select * from busbar_version5;
 objectid | ver_id
----------+--------
        5 |      5
(1 row)

postgres=# select * from busbar_version6;
 objectid | ver_id
----------+--------
(0 rows)

postgres=# select * from busbar_version7;
 objectid | ver_id
----------+--------
        6 |      6
(1 row)

Why has the second insert gone to table busbar_version7 instead of busbar_version6? If it helps, I am trying this on the "psql (12.3 (Debian 12.3-1.pgdg100+1))" container.

Regards, Seenu.
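[Editor's note] To see which partition each row actually landed in without querying the children one by one, tableoid can be selected from the parent — a standard trick, not specific to this example:

```sql
SELECT tableoid::regclass AS partition, objectid, ver_id
FROM busbar_version
ORDER BY ver_id;
--     partition    | objectid | ver_id
-- -----------------+----------+--------
--  busbar_version5 |        5 |      5
--  busbar_version7 |        6 |      6
```

The output shown is taken from the per-partition selects in the message above.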
Re: HASH partitioning not working properly
On Fri, Jun 19, 2020 at 11:44 AM David Rowley wrote:
> On Fri, 19 Jun 2020 at 17:42, Srinivasa T N wrote:
> > After seeing the below, I feel partitioning is not working properly or it may be the case that my understanding is wrong. Can somebody explain to me what is happening?
> >
> > postgres=# select * from busbar_version6;
> >  objectid | ver_id
> > ----------+--------
> > (0 rows)
> >
> > postgres=# select * from busbar_version7;
> >  objectid | ver_id
> > ----------+--------
> >         6 |      6
> > (1 row)
> >
> > Why has the second insert gone to table busbar_version7 instead of busbar_version6?
>
> It's your understanding that's not correct. The value is passed through a hash function, and the partition is selected based on the partition matching the remainder value after dividing the return value of the hash function by the largest modulus of any partition.
>
> That might surprise you, but how would you select which partition a varchar value should go into if you didn't use a hash function?
>
> David

How can I see the output of the hash function that is used internally?

Regards, Seenu.
Re: HASH partitioning not working properly
On Fri, Jun 19, 2020 at 12:34 PM Laurenz Albe wrote:
> On Fri, 2020-06-19 at 12:12 +0530, Srinivasa T N wrote:
> > How can I see the output of the hash function that is used internally?
>
> In the case of "integer", the hash function is "pg_catalog"."hashint4".
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

I guess the output formatting is wrong, any help?

postgres=# select pg_catalog.hashint4(7);
  hashint4
------------
 -978793473
(1 row)

Regards, Seenu.
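[Editor's note] The formatting above is in fact correct: hashint4 returns a signed int4, so negative values are expected. If an unsigned-looking value is preferred, the same 32 bits can be masked cosmetically. Note also that, as far as the partitioning code goes, routing uses the type's extended hash support function with an internal seed, so the plain hashint4 value cannot be used directly to predict which partition a row lands in:

```sql
SELECT pg_catalog.hashint4(7);
--   -978793473   (negative is normal: int4 is signed)

-- Cosmetic only: view the same 32 bits as an unsigned number
SELECT (pg_catalog.hashint4(7)::bigint & 4294967295) AS unsigned_hash;
--   3316173823
```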
Re: HASH partitioning not working properly
On Fri, Jun 19, 2020 at 3:09 PM Amul Sul wrote:
> On Fri, Jun 19, 2020 at 1:28 PM Srinivasa T N wrote:
> >
> > I guess the output formatting is wrong, any help?
> >
> > postgres=# select pg_catalog.hashint4(7);
> >   hashint4
> > ------------
> >  -978793473
> > (1 row)
>
> Instead of the hash function directly, the easiest way is to use satisfies_hash_partition(), which is what is used in defining the hash partitioning constraint.
>
> You can see the partition constraint by describing the partition table, i.e. use \d+ busbar_version5.
>
> Regards,
> Amul

Sorry, I did not get you.
My current \d+ is

postgres=# \d+ busbar_version6;
                              Table "test.busbar_version6"
  Column  |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
----------+---------+-----------+----------+---------+---------+--------------+-------------
 objectid | integer |           |          |         | plain   |              |
 ver_id   | integer |           |          |         | plain   |              |
Partition of: busbar_version FOR VALUES WITH (modulus 10, remainder 6)
Partition constraint: satisfies_hash_partition('16397'::oid, 10, 6, ver_id)
Access method: heap

Regards, Seenu.
Re: HASH partitioning not working properly
On Fri, Jun 19, 2020, 5:45 PM Amul Sul wrote:
> On Fri, Jun 19, 2020 at 3:50 PM Srinivasa T N wrote:
> >
> > Sorry, I did not get you.
> >
> > My current \d+ is
> >
> > postgres=# \d+ busbar_version6;
> > [table description as above]
> > Partition of: busbar_version FOR VALUES WITH (modulus 10, remainder 6)
> > Partition constraint: satisfies_hash_partition('16397'::oid, 10, 6, ver_id)
> > Access method: heap
>
> Executing "SELECT satisfies_hash_partition('16397'::oid, 10, 6, <value>)" will tell you whether <value> fits in the partition having modulus 10 and remainder 6 or not.
>
> Regards,
> Amul

OK.. Thanks. BTW, is it possible to have a custom hash function instead of the predefined hash function?

Regards, Seenu.
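[Editor's note] Amul's suggestion can be tried directly. The OID, modulus and remainder come from the \d+ output earlier in the thread; the OID 16397 is specific to the poster's database and will differ elsewhere. As for the custom-hash question: hash partitioning looks up the extended hash support function (support function 2) of the column type's hash operator class, so in principle a custom operator class created with CREATE OPERATOR CLASS ... USING hash could supply one, though this is an advanced and sparsely documented path:

```sql
-- Does a row with ver_id = 6 belong to the modulus-10 / remainder-6 partition?
SELECT satisfies_hash_partition('16397'::oid, 10, 6, 6::int4);
-- Given the inserts earlier in the thread, this would return false:
-- ver_id = 6 hashed to remainder 7 and was routed to busbar_version7.
```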
Re: Unable to init and run postgresql-12 on centos 8.2.2004
On Sun, Jun 21, 2020, 8:57 AM Vishal Agrawal wrote:
> Hello,
>
> I'm trying to install postgresql 12 on CentOS 8.2 as follows -
>
> First I run a docker container on the host -
> # docker run --privileged -dit --name pgdg centos:8.2.2004 /sbin/init
>
> Then inside the container, I run the commands given below -
>
> # yum -y install glibc-langpack-en glibc-locale-source
> # localedef -i en_US -f UTF-8 en_US.UTF-8
> # dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
> # dnf -qy module disable postgresql
> # dnf install postgresql12-server
> # /usr/pgsql-12/bin/postgresql-12-setup initdb
> # systemctl enable postgresql-12
> # systemctl start postgresql-12
>
> Postgresql starts at this point...
>
> However, when I try to stop it, it just hangs -
>
> # systemctl stop postgresql-12 (hangs)
>
> Can someone help me resolve this?
>
> Thanks,
> Vishal

Looking at the logs might help.

Regards, Seenu.
Re: Unable to init and run postgresql-12 on centos 8.2.2004
On Sun, Jun 21, 2020, 8:23 PM Vishal Agrawal wrote:
> Seenu, I don't see any journalctl entry other than the log that postmaster has been killed. (Jun 21 14:52:30 f9cbc3b08e20 systemd[1]: postgresql-12.service: Killing process 1096 (postmaster) with signal SIGKILL.)
> Any further suggestions on where to look?

Looks like a problem with systemd, but there is no harm in checking the postgres log files. I assume that you have already tried "journalctl -l".

> It does seem to work with systemd-239-18 (8.1.1911) but fails with systemd-239-30 (in the 8.2.2004 repo).

Any problem reverting to the older version of systemd?

Regards, Seenu.
Re: PostgresQL 12 - could not connect to server: FATAL: the database system is in recovery mode
On Mon, Jul 13, 2020, 7:16 PM FOUTE K. Jaurès wrote:
> Hello,
>
> How can I solve this error?

More info required.

Regards, Seenu.

> --
> Jaurès FOUTE
Re: Pgpool in docker container
On Tue, Jul 21, 2020 at 10:46 PM Vasu Madhineni wrote:
> Hi All,
>
> Planning to build a standalone postgres with pgpool as a connection pooler, in docker containers.
> Shall we try an option like installing pgpool in one docker container and postgres in another docker container, is it possible?

As far as I know, communication between containers happens over a network socket, but we use pgpool precisely to reduce the overhead of TCP/IP communication.

Regards, Seenu.
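[Editor's note] For reference, a sketch of the two-container setup being asked about. Image names and environment variables here are illustrative assumptions (not verified against any particular pgpool image); on a user-defined bridge network, containers resolve each other by name, so pgpool can reach postgres over TCP:

```shell
# containers on a user-defined bridge network resolve each other by name
docker network create pgnet

docker run -d --name pg --network pgnet \
    -e POSTGRES_PASSWORD=secret postgres:12

# pgpool points at the postgres container by its DNS name "pg";
# the env variable name follows some pgpool images and may differ
docker run -d --name pgpool --network pgnet -p 9999:9999 \
    -e PGPOOL_BACKEND_NODES="0:pg:5432" pgpool/pgpool
```

Whether the extra network hop through pgpool is worth it depends on the workload; running both in one container (or using a Unix socket via a shared volume) avoids it but couples the two services.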
Re: Unexplained disk usage in AWS Aurora Postgres
There may be a lot of WAL files, or the log files in pg_log might be huge. Running "du -sh *" on the data directory holding the database might help.

Regards, Seenu.

On Tue, Aug 4, 2020 at 2:09 PM Chris Borckholder <chris.borckhol...@bitpanda.com> wrote:
> Hi!
>
> We are experiencing a strange situation with an AWS Aurora postgres instance.
> The database steadily grows in size, which is expected and normal.
> After enabling logical replication, the disk usage reported by AWS metrics increases much faster than the database size (as seen by \l+ in psql). The current state is that the database size is ~290GB, while AWS reports >640GB disk usage.
> We reached out to AWS support of course, which is ultimately responsible. Unfortunately they were not able to diagnose this until now.
>
> I checked with the queries from the wiki https://wiki.postgresql.org/wiki/Disk_Usage , which essentially give the same result.
> I tried to check on WAL segment file size, but we have no permission to execute select pg_ls_waldir().
> The replication slot is active and it also progresses (pg_replication_slots.confirmed_flush_lsn increases and is close to pg_current_wal_flush_lsn).
>
> Can you imagine other things that I could check from within postgres with limited permissions to diagnose this?
>
> Best Regards
> Chris
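[Editor's note] With limited permissions on a managed instance, a few standard catalog queries may still narrow things down (whether Aurora permits each one is not guaranteed). The third query is the most relevant here: a replication slot that lags, even while "active", forces the server to retain WAL, which shows up as disk usage outside the database size:

```sql
-- total size of each database
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database ORDER BY pg_database_size(datname) DESC;

-- largest relations, including indexes and materialized views
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class WHERE relkind IN ('r','m','i')
ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;

-- how much WAL each replication slot is holding back
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```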
Re: Database logins taking longer and longer, showing up as "authentication" in ps(1)
On Tue, Aug 18, 2020, 8:04 PM Stephen Frost wrote:
> Greetings,
>
> * Adam Sjøgren (a...@koldfront.dk) wrote:
> > We have a PostgreSQL 11.3¹ running on an Ubuntu 16.04 server, which sometimes exhibits a behaviour I can't quite understand: simply logging into the database starts to take minutes to complete.
> >
> > We have 60 processes (workers) running on different machines accessing the database, that all grab jobs from a queue and update rows in a table after doing some calculations (which vary in time from <1s to perhaps a minute, many of them fast).
> >
> > Sometimes new database logins slow down, from usually taking <0.05s to taking minutes. This is for psql as a normal user using Kerberos, for psql as the postgres superuser, for the web-application logging into the database, for everything.
>
> When in doubt, blame DNS.
>
> Alternatively, in your case, the issue might be the KDC taking forever to issue a ticket for the service.
>
> Note that, in both cases, this is a client-side issue.

If it is a client-side issue, how do you explain the behaviour that the OP is not facing the problem when there are only a few clients?

Regards, Seenu.

> Once the ticket has been acquired for the PG service, the actual authentication on the server side should be very fast and not depend on things external (though you might check if you have log_hostnames on..).
>
> Thanks,
>
> Stephen
Re: FATAL: terminating connection due to administrator command
On Thu, Oct 1, 2020 at 2:47 PM Alban Hertroys < alban.hertr...@apollovredestein.com> wrote: > Hi all, > > We're seeing the FATAL error message from the subject pop up in our logs > at regular intervals, but I haven't been able to pinpoint what is causing > it. I'm hoping for some insights here. > > We run a PostgreSQL 11.9 server on CentOS 7, within a vmware environment: > PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 > 20150623 (Red Hat 4.8.5-39), 64-bit > > The package was installed from the PGDG repository. > > I'm not even sure I should be worried, there doesn't appear to be any > impact on the servers' functioning, but it does say 'FATAL'. > What we're seeing are lines like these two instances: > > 2020-09-30 22:27:56.446 CEST [30659] STATEMENT: select count(*) from > "dm_b2b"."prlwytzkofskiv1" > 2020-09-30 22:27:56.446 CEST [30658] FATAL: terminating connection due > to administrator command > 2020-09-30 22:27:56.446 CEST [30658] STATEMENT: select count(*) from > "dm_b2b"."prlwytzkofskiv1" > 2020-09-30 22:27:56.446 CEST [30657] FATAL: terminating connection due > to administrator command > 2020-09-30 22:27:56.446 CEST [30657] STATEMENT: select count(*) from > "dm_b2b"."prlwytzkofskiv1" > 2020-09-30 22:27:56.446 CEST [30656] FATAL: terminating connection due > to administrator command > 2020-09-30 22:27:56.446 CEST [30656] STATEMENT: select count(*) from > "dm_b2b"."prlwytzkofskiv1" > 2020-09-30 22:27:56.446 CEST [30655] FATAL: terminating connection due > to administrator command > 2020-09-30 22:27:56.446 CEST [30655] STATEMENT: select count(*) from > "dm_b2b"."prlwytzkofskiv1" > 2020-09-30 22:27:56.459 CEST [6482] LOG: background worker "parallel > worker" (PID 30655) exited with exit code 1 > 2020-09-30 22:27:56.459 CEST [6482] LOG: background worker "parallel > worker" (PID 30656) exited with exit code 1 > 2020-09-30 22:27:56.459 CEST [6482] LOG: background worker "parallel > worker" (PID 30657) exited with exit code 1 > 2020-09-30 
22:27:56.459 CEST [6482] LOG: background worker "parallel worker" (PID 30658) exited with exit code 1
> 2020-09-30 22:27:56.459 CEST [6482] LOG: background worker "parallel worker" (PID 30659) exited with exit code 1
> 2020-09-30 22:43:08.459 CEST [8055] 172.30.2.25 selfservice_prd ERROR: schema "somethingelse" does not exist at character 71

I am guessing that 6 background workers were started; one worker had the result, and hence the other 5 workers were killed. Maybe some more pg experts can comment. Anyway, EXPLAIN of your query would help.

Regards, Seenu.

> Apparently, something is sending SIGTERM to our pg processes. I know that I'm not doing that, certainly not at those hours, and I'm the one who set up this system and am the only DBA of it.
>
> Advice I found on the Internet is to use systemtap with some tap-script, but the scripts that I found just displayed the PIDs of processes without telling me their names, which I didn't find all that useful in figuring out who was responsible, so I made an attempt (I have no experience with stap) at modifying it to print the process names of the signal sender and target:
>
> *The information contained in this e-mail is intended solely for the use of the individual or entity to whom it is addressed. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or action in relation to the contents of this information is strictly prohibited and may be unlawful and request you to delete this message and any attachments and advise the sender by return e-mail. The confidentiality of this message is not warranted. Apollo Vredestein and its subsidiaries rule out any and every liability resulting from this or any other electronic transmission*
> Please consider the environment before printing this e-mail
Both type of replications from a single server?
Hi All, Is it possible to have both types of replication (streaming and logical) from a single server? If I have 3 servers A, B and C, I want streaming replication from A to B and logical replication from A to C. Is that possible? Regards, Seenu.
Re: Both type of replications from a single server?
On Thu, Oct 8, 2020 at 12:19 PM Michael Paquier wrote: > On Thu, Oct 08, 2020 at 11:43:26AM +0530, Srinivasa T N wrote: > >Is it possible to have both type of replications (streaming and > logical) > > from a single server? > > Yes. > > >If I have 3 servers A,B and C, then I want to have streaming > replication > > from A to B whereas logical replication from A to C. Is it possible? > > And yes. > My next question is "Are there any setup instructions available for the same"? Regards, Seenu.
Re: Both type of replications from a single server?
On Thu, Oct 8, 2020 at 12:50 PM Srinivasa T N wrote:
> On Thu, Oct 8, 2020 at 12:19 PM Michael Paquier wrote:
> >
> > And yes.
>
> My next question is "Are there any setup instructions available for the same"?

For streaming replication, I need to set wal_level to replica on A, whereas for logical replication I need to set wal_level to logical on the same server A. So I was wondering how to go about this?

Regards, Seenu.
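[Editor's note] The wal_level levels are cumulative: logical includes everything that replica provides, so setting wal_level = logical on A serves both standbys at once. A minimal sketch, with hostnames, database and user names as placeholders:

```sql
-- on A (postgresql.conf):
--   wal_level       = logical
--   max_wal_senders = 10

-- physical standby B: built from a base backup of A; -R writes the
-- primary_conninfo needed for streaming replication, e.g.
--   pg_basebackup -h A -D /var/lib/pgsql/12/data -R -P

-- logical subscriber C: a normal instance with the tables pre-created
CREATE PUBLICATION pub_all FOR ALL TABLES;           -- run on A

CREATE SUBSCRIPTION sub_a                            -- run on C
    CONNECTION 'host=A dbname=mydb user=repuser'
    PUBLICATION pub_all;
```

The only cost of running wal_level = logical for the physical standby's sake is somewhat more WAL volume.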
Meaning of below statement
Hi, I have the following in my log files:

2020-11-20 11:20:46.216 IST [38207] LOG: execute S_1/C_2: SELECT "gid",encode(ST_AsBinary(ST_Simplify(ST_Force2D("shape"), 14.929338247701526, true)),'base64') as "shape" FROM "ami_smart_new"."aoi_boundary" WHERE ("sectioncode" IN ('4683', '4587') AND "sectioncode" IS NOT NULL AND "shape" && ST_GeomFromText('POLYGON ((683696.123647752 989199.9990667417, 683696.123647752 1000723.135701899, 708574.8226023088 1000723.135701899, 708574.8226023088 989199.9990667417, 683696.123647752 989199.9990667417))', 32643))
2020-11-20 11:20:46.218 IST [38207] LOG: execute S_3: ROLLBACK

Does it mean that there was an error in the "SELECT ..." and hence postgres internally executed a ROLLBACK?

Regards, Seenu.
Setting up a server with previous day data
Hi All, I have a primary PostgreSQL 12 server which is continuously used for transaction processing. For reporting purposes, I want to set up a secondary server that has the previous day's data. Every night, I want the data from the primary to be shifted to the secondary. I can achieve this manually using pg_basebackup on the primary and restoring it on the secondary. Is there any other automated, efficient way to achieve the same? Any relevant docs would be helpful. Regards, Seenu.
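[Editor's note] One common pattern is a nightly cron job that rebuilds the reporting server from a fresh base backup. A hedged sketch, where paths, hostnames, the replication role and the service name are all placeholders; note that this approach implies a short nightly outage on the secondary, and for large databases a delayed standby (recovery_min_apply_delay) or logical replication avoids recopying everything each night:

```shell
#!/bin/sh
# nightly_refresh.sh -- rebuild the reporting copy from the primary
set -e
DATADIR=/var/lib/pgsql/12/data

systemctl stop postgresql-12
rm -rf "${DATADIR:?}"/*          # :? guards against an unset variable

# stream a consistent copy of the primary, including the WAL needed
# to reach consistency (-X stream), with progress reporting (-P)
pg_basebackup -h primary.example.com -U replicator \
    -D "$DATADIR" -X stream -P

systemctl start postgresql-12
```

Scheduled from cron (e.g. `0 1 * * * /usr/local/bin/nightly_refresh.sh`), this gives the secondary a copy that is at most one day old.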
Replicating an existing (huge) database
Hi All, I am using PostgreSQL 12. As part of a streaming replication setup, I run pg_basebackup on the slave, which copies the database from the master to the slave. But the database is huge and it takes around 2 to 3 days for pg_basebackup to finish. While pg_basebackup is running, a huge number of WAL files are generated on the master, which occupy a lot of space (even though it is only for 2-3 days, disk space is low). Is there any other way to start replication without using pg_basebackup? Regards, Seenu.
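[Editor's note] Two things may help here, sketched under the assumption of a PostgreSQL 12 primary with hostnames and names as placeholders. Using -X stream ships the WAL to the standby continuously during the copy instead of letting it pile up for a final catch-up, and a named replication slot lets the standby resume cleanly afterwards — though note the trade-off: a slot makes the master retain WAL until the standby consumes it, so it exchanges the risk of missing WAL for disk usage. An alternative that avoids pg_basebackup entirely is seeding the standby from a filesystem-level snapshot taken between pg_start_backup() and pg_stop_backup():

```sql
-- on the master: reserve a named slot before starting the copy
SELECT pg_create_physical_replication_slot('standby1');
```

```shell
# on the slave: stream WAL alongside the data copy and reuse the slot,
# writing the standby configuration (-R) for PostgreSQL 12
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/pgsql/12/data -S standby1 -X stream -R -P
```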