Re: Long running query causing XID limit breach

2024-05-29 Thread Laurenz Albe
On Wed, 2024-05-29 at 01:34 +0530, yudhi s wrote:
> > The only way you can have no delay in replication AND no canceled queries is
> > if you use two different standby servers with different settings for
> > "max_standby_streaming_delay".  One of the servers is for HA, the other for
> > your long-running queries.
>
> When you suggest having different "max_standby_streaming_delay" for the first
> replica (say 10 sec, for high availability) and the second replica (say -1,
> for long-running queries), do you also suggest keeping "hot_standby_feedback"
> off on all three instances, i.e. the master and both replicas?

The parameter is ignored on the master.
It needs to be off on the standby that is running long queries.
For the other standby it probably doesn't matter if you are not running any
queries on it.  I would leave "hot_standby_feedback = off" there as well.

Actually, I would set "hot_standby = off" on the standby that is only used
for HA.

> Also, the OP has added a few other parameters, as below; do you think these
> are needed?  I think the master and first replica should have the same setup,
> because in case of any disaster on the master, the first replica should be
> able to take its place.
>  
> Master/Primary and First Replica/Standby for High Availability:
>   hot_standby_feedback = ON
>   max_standby_streaming_delay = 10 sec
>   statement_timeout = "2hrs"
>   idle_in_transaction_session_timeout = 10 minutes
>   autovacuum_freeze_max_age = 100M
>   log_autovacuum_min_duration = 0
>
> Second Replica for Reporting:
>   hot_standby_feedback = OFF
>   max_standby_streaming_delay = -1 (infinite)
>   No statement_timeout, i.e. infinite
>   No idle_in_transaction_session_timeout, i.e. infinite
>   autovacuum_freeze_max_age = 100M
>   log_autovacuum_min_duration = 0

- I would leave "hot_standby_feedback" off everywhere.
- "max_standby_streaming_delay" should be -1 on the reporting standby and very
  low or 0 on the HA standby. It doesn't matter on the primary.
- "statement_timeout" should be way lower on the first two nodes.
- "idle_in_transaction_session_timeout" is good.
- I would leave "autovacuum_freeze_max_age" at the default setting, but 100
  million is ok too.
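Put together, the two-standby scheme discussed above might look like this in postgresql.conf (a sketch using the values from this thread, not a definitive recommendation):

```ini
# HA standby: fail over quickly, run no queries here
hot_standby = off
hot_standby_feedback = off
max_standby_streaming_delay = 0

# Reporting standby: let long-running queries finish, accept replication lag
hot_standby = on
hot_standby_feedback = off
max_standby_streaming_delay = -1
```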

Yours,
Laurenz Albe




Re: Use of inefficient index in the presence of dead tuples

2024-05-29 Thread David Rowley
On Wed, 29 May 2024 at 12:53, Tom Lane  wrote:
> It would be interesting to see a non-artificial example that took
> into account when the last auto-vacuum and auto-analyze really
> happened, so we could see if there's any less-fragile way of
> dealing with this situation.

I think we need to find a way to give some additional preference to
using indexes where more keys match the quals.

David




Re: Memory issues with PostgreSQL 15

2024-05-29 Thread Francisco Olarte
Hi Christian:

On Wed, 29 May 2024 at 00:59, Christian Schröder
 wrote:
> Thank you for your advice. I used "ipcs" to get more readable information 
> about the shared memory:
...
> As far as I understand, there is no upper limit to the size of the shared 
> memory. The database only holds a single shared memory segment, which doesn't 
> seem to have a relevant size.

Seems the same to me, so I will disregard that.

> I am surprised to see this since I would have expected much more shared 
> memory to be used by the database. Is there anything in the configuration 
> that prevents the shared memory from being used?

I am not too current with Postgres; that was something that happened to me
back when I did more administration, and it is one you always want to check.
I assume you have checked everything, but I would continue by ensuring that
every mounted partition on your system has free space. I am not current on
the details, but I know Pg can be mmapping things and doing other stuff.
Your problem looks more like resource exhaustion, so I would check for that:
disk, tmpfs, and all the other usual places. I cannot give you concrete
advice on that, as it depends a lot on your server configuration, and from
the age of the messages I suspect you have already debugged the usual
suspects. But since you have a configuration that crashes within minutes,
and it seems to be a dev server, you should be able to test this easily.
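To check every mounted partition as suggested above, something like this can flag filesystems that are nearly full (a sketch; the df output here is made up for illustration, in practice pipe `df -P` straight into the awk):

```shell
# Warn about filesystems at or above 90% usage. Sample output is inlined so
# the snippet is self-contained; replace the variable with real `df -P` output.
df_output='Filesystem     1024-blocks    Used Available Capacity Mounted-on
/dev/sda1         10000000 9500000    500000      95% /
tmpfs              2000000  100000   1900000       5% /dev/shm'

# Skip the header, strip the % sign, and report anything at 90% or more.
echo "$df_output" | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $6, "is", $5 "% full" }'
# → / is 95% full
```

/dev/shm is worth including in the check: a "could not resize shared memory segment ... No space left on device" error on a /PostgreSQL.* segment often means the tmpfs backing POSIX shared memory is full.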

Sorry.

   Francisco Olarte.




Re: Memory issues with PostgreSQL 15

2024-05-29 Thread Muhammad Salahuddin Manzoor
Greetings,

The error message you encountered, "could not fork autovacuum worker
process: Cannot allocate memory," indicates that your PostgreSQL server
attempted to start an autovacuum worker process but failed because the
system ran out of memory.

Steps to verify:

1. Check available system memory:
   free -m
   top

2. Check PostgreSQL configuration:
   shared_buffers       -- typically 25% of total memory
   work_mem
   maintenance_work_mem -- used by maintenance operations such as autovacuum
                           and CREATE INDEX; increase it to 64MB or as
                           appropriate for your workload
   max_connections

3. Monitor the /var/log/messages file for errors such as:

   2024-05-21 11:34:46 CEST - mailprocessor> ERROR:  could not resize shared
   memory segment "/PostgreSQL.2448337832" to 182656 bytes: No space left on
   device
4. Check shared memory limits in /etc/sysctl.conf:

   kernel.shmmax = 68719476736  # Example value, adjust as needed
   kernel.shmall = 16777216     # Example value, adjust as needed
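The two limits are related: kernel.shmmax is in bytes while kernel.shmall is counted in pages, so with the usual 4 KiB page size the example values above line up exactly. A quick sanity check (the page size is an assumption; confirm it with `getconf PAGE_SIZE`):

```shell
# Sanity-check that shmall (in pages) covers shmmax (in bytes).
page_size=4096                  # assumed 4 KiB pages; confirm with: getconf PAGE_SIZE
shmmax=68719476736              # bytes (64 GiB), example value from above
shmall=16777216                 # pages, example value from above
echo $(( shmall * page_size ))  # → 68719476736, i.e. exactly shmmax
```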

   Then restart the system and the database.

5. Ensure you have enough disk space; check and monitor it with:
   df -h

6. Reduce max_parallel_workers_per_gather (for example to 2) if it is set to
   a high value.

I think adjusting the OS parameters, increasing maintenance_work_mem, and
reducing the number of parallel workers can help.

Regards,
Salahuddin.

On Tue, 28 May 2024, 21:40 Christian Schröder, 
wrote:

> Hi all,
> We migrated from PostgreSQL 9.4 to PostgreSQL 15 a while ago. Since then,
> we have a lot of memory issues in our QA environment (which is a bit tense
> in resources). We did not have these problems before the migration, and we
> do not have them in our production environment, which has a lot more
> memory. So, it is not super critical for us, but I would still like to
> understand better how we can improve our configuration.
>
> Our PostgreSQL version is "PostgreSQL 15.5 on x86_64-pc-linux-gnu,
> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit". The
> database server is a dedicated server with 15 GB RAM (and 4 cores, if this
> matters).
> We used the following settings:
> shared_buffers = 4GB
> work_mem = 4MB
>
> After a while, we saw the following error in the logs:
>
> <2024-05-20 12:01:03 CEST - > LOG:  could not fork autovacuum worker
> process: Cannot allocate memory
>
> However, according to "free", a lot of memory was available:
>
> # free -m
>   totalusedfree  shared  buff/cache
>  available
> Mem:  158824992 4634195   10427
> 6365
> Swap:  1999 2711728
>
> Our Grafana charts showed a slow increase in memory consumption until it
> plateaus at 4.66 GB.
> We also found the following error:
>
> <2024-05-21 11:34:46 CEST - mailprocessor> ERROR:  could not resize shared
> memory segment "/PostgreSQL.2448337832" to 182656 bytes: No space left on
> device
>
> I thought this could all be related to our "shared_buffers" setting, so I
> increased it to 8 GB. This almost immediately (after a few minutes) gave me
> these errors:
>
> <2024-05-27 11:45:59 CEST - > ERROR:  out of memory
> <2024-05-27 11:45:59 CEST - > DETAIL:  Failed on request of size 201088574
> in memory context "TopTransactionContext".
> ...
> <2024-05-27 11:58:02 CEST - > ERROR:  out of memory
> <2024-05-27 11:58:02 CEST - > DETAIL:  Failed while creating memory
> context "dynahash".
> <2024-05-27 11:58:02 CEST - > LOG:  background worker "parallel worker"
> (PID 21480) exited with exit code 1
> ...
> <2024-05-27 12:01:02 CEST - > LOG:  could not fork new process for
> connection: Cannot allocate memory
> <2024-05-27 12:01:03 CEST - > LOG:  could not fork autovacuum worker
> process: Cannot allocate memory
> <2024-05-27 12:02:02 CEST - > LOG:  could not fork new process for
> connection: Cannot allocate memory
>
> Since this seemed worse than before, I changed the setting back to 4 GB. I
> noticed that "free" now reports even more available memory:
>
> # free -m
>   totalusedfree  shared  buff/cache
>  available
> Mem:  15882 621 3202256   14940
>  12674
> Swap:  1999 1991800
>
> So, does the "shared_buffers" setting have the opposite effect than I
> though? If I correctly remember similar discussions years ago, the database
> needs both "normal" and shared memory. By increasing the "shared_buffers"
> to 8 GB, I may have deprived it of "normal" memory. On the other hand, I
> would have expected the remaining 7 GB to still be enough.
>
> At this point, I am out of ideas. I clearly seem to misunderstand how the
> database manages its memory. This may have changed between 9.4 and 15, so
> my prior knowledge may be useless. I definitely need some help. ☹
>
> Thanks in advance,
> Christian
>
>

Unable to connect to any data source for foreign server

2024-05-29 Thread Russell Mercer

Hi,

I'm running into a problem connecting to a foreign server from my 
Postgres database.  I am using OGR_FDW to connect to an Excel 
"XLSX" file.  I had this database on another server, where the foreign 
server was set up and working properly.  I am trying to change the path 
to the file which it references, and I am now receiving an error.  
In addition, I tried to just create a new foreign server for that same 
"XLSX" file, and it also gave the error "Failed to connect to data 
source".


I used the ogr_fdw_info tool to make sure I had the foreign server 
reference structured correctly, but it matches what I had before, so that 
didn't make a difference.


I'm a bit lost as to where to look for possible solutions.  One idea I 
have is that it is perhaps a permissions issue: the "postgres" user may not 
be able to access other files on the server where Postgres is installed.


Some basic info:

Server:  Windows Server 2022
PostgreSQL 16.2 - Installed using Stack Builder

Any information or ideas as to a solution, would be very much appreciated.

Thanks,
Russell





Re: Unable to connect to any data source for foreign server

2024-05-29 Thread Muhammad Salahuddin Manzoor
Greetings,

While configuring OGR_FDW, make sure the data source path includes the
server name and the absolute path to the file on your network share, and
that the network share has appropriate permissions:

'\\remote_server\path\to\file.xlsx' on a network share

Create a user mapping if required.

Check your configuration with:
ogr_fdw_info -s "remote_server\\path\\to\\file.xlsx" -l

To diagnose the issue, check permissions, network accessibility, the path
format (it should include the server name and the absolute file path), and
any compatibility issues.
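For reference, a minimal sketch of the DDL involved (the server name and file path here are hypothetical; `datasource` and `format` are the usual ogr_fdw server options, so check them against the version you have installed):

```sql
CREATE SERVER xlsx_srv
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource '\\remote_server\path\to\file.xlsx', format 'XLSX');

-- Map the local role to the server, if your setup requires it
CREATE USER MAPPING FOR postgres SERVER xlsx_srv;

-- Pull in the sheets as foreign tables
IMPORT FOREIGN SCHEMA ogr_all FROM SERVER xlsx_srv INTO public;
```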

Regards,
Salahuddin.

On Wed, 29 May 2024, 14:42 Russell Mercer,  wrote:

> Hi,
>
> I'm running into a problem connecting to a foreign server with my
> Postgres database.  I am using the OGR_FDW trying to connect to an Excel
> "XLSX".  I had this database on another server, and the foreign server
> was set up and working properly there.  I am trying to change the path
> to the file which it is referencing, and I am now receiving an error.
> In addition, I tried to just create a new foreign server to that same
> "XLSX" file and it also gave the error of "Failed to connect to data
> source".
>
> I used the ogr_fdw_info tool to make sure I had the foreign server
> reference structured correctly, but it matches what I had before, and
> didn't make a difference.
>
> I'm a bit lost as to where to look for possible solutions.  One idea I
> have is that perhaps it is a permissions issue with accessing other
> files on the server where Postgres is installed, with the "postgres" user.
>
> Some basic info:
>
> Server:  Windows Server 2022
> PostgreSQL 16.2 - Installed using Stack Builder
>
> Any information or ideas as to a solution, would be very much appreciated.
>
> Thanks,
> Russell
>
>
>
>


Timeout gets unset on a syntax error.

2024-05-29 Thread ISHAN CHHANGANI .
On a particular query, I start an alarm (say for 5 seconds) using RegisterTimeout, 
and when the alarm rings, I log something.
This works fine.
But if I run a query with a syntax error before the timeout expires, the 
alarm never rings.
Is there some code within Postgres that resets/removes the signals in case a 
query hits any error?

TimeoutId timer = RegisterTimeout(USER_TIMEOUT, interval_handler);
enable_timeout_after(timer, 5 * 1000);

Thanks,
Ishan.

-- 
The information contained in this electronic communication is intended 
solely for the individual(s) or entity to which it is addressed. It may 
contain proprietary, confidential and/or legally privileged information. 
Any review, retransmission, dissemination, printing, copying or other use 
of, or taking any action in reliance on the contents of this information by 
person(s) or entities other than the intended recipient is strictly 
prohibited and may be unlawful. If you have received this communication in 
error, please notify us by responding to this email or telephone and 
immediately and permanently delete all copies of this message and any 
attachments from your system(s). The contents of this message do not 
necessarily represent the views or policies of BITS Pilani.


Re: Use of inefficient index in the presence of dead tuples

2024-05-29 Thread Alexander Staubo
> On 29 May 2024, at 02:53, Tom Lane  wrote:
> 
> Alexander Staubo  writes:
>> (2) Set up schema. It's important to create the index before insertion, in 
>> order to provoke a
>> situation where the indexes have dead tuples:
>> ...
>> (4) Then ensure all tuples are dead except one:
> 
>>DELETE FROM outbox_batches;
>>INSERT INTO outbox_batches (receiver, id) VALUES ('dummy', 'test');
> 
>> (5) Analyze:
> 
>>ANALYZE outbox_batches;
> 
> So the problem here is that the ANALYZE didn't see any of the dead rows
> and thus there is no way to know that they all match 'dummy'.  The cost
> estimation is based on the conclusion that there is exactly one row
> that will pass the index condition in each case, and thus the "right"
> index doesn't look any cheaper than the "wrong" one --- in fact, it
> looks a little worse because of the extra access to the visibility
> map that will be incurred by an index-only scan.
> 
> I'm unpersuaded by the idea that ANALYZE should count dead tuples.
> Since those are going to go away pretty soon, we would risk
> estimating on the basis of no-longer-relevant stats and thus
> creating problems worse than the one we solve.

Mind you, “pretty soon” could actually be “hours" if a pg_dump is running, or 
some other long-running transaction is holding back the xmin. Granted, 
long-running transactions should be avoided, but they happen, and the result is 
operationally surprising.

I have another use case where I used a transaction to lock a resource and 
prevent concurrent access, i.e. the logic did “SELECT … FROM … WHERE id = $1 
FOR UPDATE” and held that transaction open for hours while doing maintenance. 
This ended up causing the exact same index issue with dead tuples, with some 
queries taking 30 minutes where they previously took just a few milliseconds. 
In retrospect, this process should have used advisory locks to avoid holding 
back vacuums. But the point stands that a small amount of dead tuple cruft can 
massively skew performance in surprising ways.

> What is interesting here is that had you done ANALYZE *before*
> the delete-and-insert, you'd have been fine.  So it seems like
> somewhat out-of-date stats would have benefited you.
> 
> It would be interesting to see a non-artificial example that took
> into account when the last auto-vacuum and auto-analyze really
> happened, so we could see if there's any less-fragile way of
> dealing with this situation.

Just to clarify, this is a real use case, though the repro is of course 
artificial, since the real production case is inserting and deleting rows very 
quickly.

According to collected metrics, the average time since the last autoanalyze is 
around 20 seconds for this table, and the same for autovacuum. The times I have 
observed poor performance are situations where autovacuum could not remove the 
dead rows because they were still non-removable; i.e. it is not the absence of 
autovacuum, but rather its inability to clean up the dead tuples.
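To see whether this situation is building up, the dead-tuple counts and the transactions holding back the xmin horizon can be watched from the statistics views (a sketch; the table name is the one from the repro above):

```sql
-- Dead tuples and the last (auto)vacuum/analyze times for the table
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'outbox_batches';

-- Sessions holding back the xmin horizon (long-running transactions)
SELECT pid, backend_xmin, xact_start, state
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;
```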





Re: Timeout gets unset on a syntax error.

2024-05-29 Thread Greg Sabino Mullane
On Wed, May 29, 2024 at 6:29 AM ISHAN CHHANGANI . <
f20200...@hyderabad.bits-pilani.ac.in> wrote:

> Is there some code within Postgres that resets/removes the signals in case
> a query hits any error?
>

See the comment and call to disable_all_timeouts() in postgres.c, part of
the cleanup-after-exception efforts:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/tcop/postgres.c#l4356


Cheers,
Greg


Re: Pgpool with high availability

2024-05-29 Thread Adrian Klaver

On 5/28/24 22:15, vijay patil wrote:

Hi Adrian,

I modified the pgpool.conf configuration to correct a subnet mistake: 
previously, the wrong subnet was specified as /26, and it has now been 
corrected to /24. The configuration changes were as follows:


This seems to be a networking issue:

[root@staging-ha0001 ~]# ping 10.127.1.18

PING 10.127.1.18 (10.127.1.18) 56(84) bytes of data.

From 10.127.1.10 icmp_seq=1 Destination Host Unreachable

<...>


Where are ha0001 and ha0003 on the network relative to ha0002?

Also are they reachable for other services?


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Use of inefficient index in the presence of dead tuples

2024-05-29 Thread Laurenz Albe
On Wed, 2024-05-29 at 14:36 +0200, Alexander Staubo wrote:
> > On 29 May 2024, at 02:53, Tom Lane  wrote:
> > I'm unpersuaded by the idea that ANALYZE should count dead tuples.
> > Since those are going to go away pretty soon, we would risk
> > estimating on the basis of no-longer-relevant stats and thus
> > creating problems worse than the one we solve.
> 
> Mind you, “pretty soon” could actually be “hours" if a pg_dump is running,
> or some other long-running transaction is holding back the xmin. Granted,
> long-running transactions should be avoided, but they happen, and the
> result is operationally surprising.

Don't do these things on a busy transactional database.

> I have another use case where I used a transaction to lock a resource
> and prevent concurrent access, i.e. the logic did
> “SELECT … FROM … WHERE id = $1 FOR UPDATE” and held that transaction open
> for hours while doing maintenance.

That's a dreadful idea.

> 
> Just to clarify, this is a real use case, though the repro is of course
> artificial since the real production case is inserting and deleting rows
> very quickly.

No doubt.
Still I think that your main trouble are long-running transactions.
They will always give you trouble on a busy PostgreSQL database.
You should avoid them.

Yours,
Laurenz Albe




Re: tds_fdw >> Install Foreign data Wrapper on EDB Postgres to connect to SQL server database

2024-05-29 Thread Sumit Kochar
Hi Muhammad,

Tried to install as per step 2 below, but got the message "No package 
llvm-toolset-7 available."

Log reference below:

[root@hostip tds_fdw-2.0.3]# yum list installed | grep llvm-toolset
This system is not registered with Spacewalk server.
You can use rhn_register to register.
Spacewalk support will be disabled.
Repository mc-ol7-base-x86_64 is listed more than once in the configuration
Repository mc-ol7-layered-x86_64 is listed more than once in the configuration
Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the 
configuration
Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the 
configuration
[root@hostip tds_fdw-2.0.3]#
[root@hostip tds_fdw-2.0.3]#
[root@hostip tds_fdw-2.0.3]# sudo yum install llvm-toolset-7
Loaded plugins: rhnplugin
This system is not registered with Spacewalk server.
You can use rhn_register to register.
Spacewalk support will be disabled.
Repository mc-ol7-base-x86_64 is listed more than once in the configuration
Repository mc-ol7-layered-x86_64 is listed more than once in the configuration
Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the 
configuration
Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the 
configuration
No package llvm-toolset-7 available.
Error: Nothing to do
[root@hostip tds_fdw-2.0.3]#
[root@hostip tds_fdw-2.0.3]#

From: Muhammad Imtiaz 
Sent: Tuesday, May 28, 2024 11:43 PM
To: Ian Lawrence Barwick 
Cc: Sumit Kochar ; 
pgsql-general@lists.postgresql.org
Subject: {EXTERNAL} Re: tds_fdw >> Install Foreign data Wrapper on EDB Postgres 
to connect to SQL server database


Hi ,

You need to install the  LLVM toolset.

1)Check that if it is installed or not :

yum list installed | grep llvm-toolset



2)If it is not installed, you can install it using the following command.

sudo yum install llvm-toolset-7



Regards,

Muhammad Imtiaz







On Wed, 29 May 2024 at 05:27, Ian Lawrence Barwick  wrote:
On Wed, 29 May 2024 at 6:10, Sumit Kochar  wrote:
>
> Installation of Foreign data Wrapper on EDB Postgres to connect to SQL server 
> database is not working.
>
>
>
> https://github.com/tds-fdw/tds_fdw/issues/357
>
>
>
> Please advise if this has been encountered or a workaround is available.
(...)

> [root@hostip tds_fdw-2.0.3]# make USE_PGXS=1 install
>
> /opt/rh/llvm-toolset-7/root/usr/bin/clang -Wno-ignored-attributes 
> -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2  
> -I./include/ -fvisibility=hidden  -I. -I./ -I/usr/edb/as13/include/server 
> -I/usr/edb/as13/include/internal -I/usr/libexec/edb-icu66/include/ 
> -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -flto=thin -emit-llvm -c 
> -o src/tds_fdw.bc src/tds_fdw.c
>
> make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found

IIRC you'll need to find and install the SCLO package for Oracle Linux
7, assuming it's
available. For CentOS it's this repository:
http://mirror.centos.org/centos/7/sclo/

Regards

Ian Barwick

CONFIDENTIALITY NOTICE This e-mail message and any attachments are only for the 
use of the intended recipient and may contain information that is privileged, 
confidential or exempt from disclosure under applicable law. If you are not the 
intended recipient, any disclosure, distribution or other use of this e-mail 
message or attachments is prohibited. If you have received this e-mail message 
in error, please delete and notify the sender immediately. Thank you.


Dll libpq.dll 32 bits

2024-05-29 Thread José Mello Júnior
Where do I find this DLL to access PostgreSQL 15?


Re: Dll libpq.dll 32 bits

2024-05-29 Thread Adrian Klaver




On 5/29/24 10:49 AM, José Mello Júnior wrote:

Where do i found this dll for acess postgresql 15?


Access Postgres using what client?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Dll libpq.dll 32 bits

2024-05-29 Thread Adrian Klaver




On 5/29/24 10:59 AM, José Mello Júnior wrote:

Reply to list also.
Ccing list.


Sorry, for windows


That is the OS, what I am after is what client on Windows do you need 
the DLL for?






On Wed, 29 May 2024 at 14:53, Adrian Klaver  wrote:




On 5/29/24 10:49 AM, José Mello Júnior wrote:
 > Where do i found this dll for acess postgresql 15?

Access Postgres using what client?

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Dll libpq.dll 32 bits

2024-05-29 Thread Adrian Klaver

On 5/29/24 11:18, José Mello Júnior wrote:

Again reply to list by using Reply All.
Ccing list.

For my projects in 32 bits, the language of PC-SOFT (wlanguage) use.


Are you talking about this?:

https://windev.com/pcsoft/index.html



On Wed, 29 May 2024 at 15:09, Adrian Klaver  wrote:




On 5/29/24 10:59 AM, José Mello Júnior wrote:

Reply to list also.
Ccing list.

 > Sorry, for windows

That is the OS, what I am after is what client on Windows do you need
the DLL for?

 >
 >
 >
 > Em qua., 29 de mai. de 2024, 14:53, Adrian Klaver
 > mailto:adrian.kla...@aklaver.com>
>> escreveu:
 >
 >
 >
 >     On 5/29/24 10:49 AM, José Mello Júnior wrote:
 >      > Where do i found this dll for acess postgresql 15?
 >
 >     Access Postgres using what client?
 >
 >     --
 >     Adrian Klaver
 > adrian.kla...@aklaver.com 
>
 >

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Dll libpq.dll 32 bits

2024-05-29 Thread José Mello Júnior
On Wed, 29 May 2024 at 15:41, Adrian Klaver  wrote:

> On 5/29/24 11:18, José Mello Júnior wrote:
>
> Again reply to list by using Reply All.
> Ccing list.
> > For my projects in 32 bits, the language of PC-SOFT (wlanguage) use.
>
> Are you talking about this?:
>
> https://windev.com/pcsoft/index.html
>
> >
> > Em qua., 29 de mai. de 2024, 15:09, Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> escreveu:
> >
> >
> >
> > On 5/29/24 10:59 AM, José Mello Júnior wrote:
> >
> > Reply to list also.
> > Ccing list.
> >
> >  > Sorry, for windows
> >
> > That is the OS, what I am after is what client on Windows do you need
> > the DLL for?
> >
> >  >
> >  >
> >  >
> >  > Em qua., 29 de mai. de 2024, 14:53, Adrian Klaver
> >  > mailto:adrian.kla...@aklaver.com>
> >  > >> escreveu:
> >  >
> >  >
> >  >
> >  > On 5/29/24 10:49 AM, José Mello Júnior wrote:
> >  >  > Where do i found this dll for acess postgresql 15?
> >  >
> >  > Access Postgres using what client?
> >  >
> >  > --
> >  > Adrian Klaver
> >  > adrian.kla...@aklaver.com 
> >  >>
> >  >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


RE: Memory issues with PostgreSQL 15

2024-05-29 Thread Christian Schröder
Hi Francisco,
Thank you for your advice. I used "ipcs" to get more readable information about 
the shared memory:

# ipcs -m -l --human

------ Shared Memory Limits --------
max number of segments = 4096
max seg size = 16E
max total shared memory = 16E
min seg size = 1B

# ipcs -m

------ Shared Memory Segments --------
key        shmid  owner     perms  bytes  nattch  status
0x04000194 35     postgres  600    56     19

# ipcs -m -i 35

Shared memory Segment shmid=35
uid=26  gid=26  cuid=26 cgid=26
mode=0600   access_perms=0600
bytes=56    lpid=7653   cpid=3875   nattch=19
att_time=Tue May 28 22:56:35 2024
det_time=Tue May 28 22:56:35 2024
change_time=Tue May 28 07:59:59 2024

As far as I understand, there is no upper limit to the size of the shared 
memory. The database only holds a single shared memory segment, which doesn't 
seem to have a relevant size.
I am surprised to see this since I would have expected much more shared memory 
to be used by the database. Is there anything in the configuration that 
prevents the shared memory from being used?

Best,
Christian

-Original Message-
From: Francisco Olarte 
Sent: Tuesday, May 28, 2024 7:15 PM
To: Christian Schröder 
Cc: pgsql-general@lists.postgresql.org; Eric Wong 
Subject: Re: Memory issues with PostgreSQL 15

[EXTERNAL]

Hi Christian:

On Tue, 28 May 2024 at 18:40, Christian Schröder  
wrote:

> <2024-05-21 11:34:46 CEST - mailprocessor> ERROR:  could not resize
> shared memory segment "/PostgreSQL.2448337832" to 182656 bytes: No
> space left on device

This hints at some shm function getting an ENOSPC. Coupled with...

> I thought this could all be related to our "shared_buffers" setting, so I 
> increased it to 8 GB. This almost immediately (after a few minutes) gave me 
> these errors:

A faster failure when you increase it. I would start by checking that your IPC 
shared memory limits are OK, especially if you upgraded something in the OS 
when going from 9 to 15, which seems likely.

IIRC in linux you can read them in /proc/sys/kernel/shm*, and they were 
configured via sysctl.

Francisco Olarte.


--
SUPPORT:
For any issues, inquiries, or assistance, please contact our support team at 
supp...@wsd.com. Our dedicated team is available to help you and provide prompt 
assistance.

CONFIDENTIALITY NOTICE:
This email and any attachments are confidential and intended solely for the use 
of the individual or entity to whom it is addressed. If you have received this 
email in error, please notify the sender immediately and delete it from your 
system.


Re: Pgpool with high availability

2024-05-29 Thread vijay patil
Hi Adrian,

I modified the pgpool.conf configuration to correct a subnet mistake:
previously, the wrong subnet was specified as /26, and it has now been
corrected to /24. The configuration changes were as follows:



Previous Configuration:



delegate_ip = '10.127.1.18'

if_up_cmd = '/sbin/ip addr add $_IP_$/26 dev eth0 label eth0:1'

if_down_cmd = '/sbin/ip addr del $_IP_$/26 dev eth0'

arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I eth0'



Updated Configuration:



delegate_ip = '10.127.1.18'

if_up_cmd = '/sbin/ip addr add $_IP_$/24 dev eth0 label eth0:1'

if_down_cmd = '/sbin/ip addr del $_IP_$/24 dev eth0'

arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I eth0'



Current Issue:-



Following the subnet correction, the Virtual IP (VIP) 10.127.1.18 is now
reachable only from the leader node (ha0002), while it remains unreachable
from the standby nodes (ha0001 and ha0003). Below are the details of the
connectivity status and the commands executed:



*Leader Node (**ha0002**)*



[root@staging-ha0002 PG_LOGS]# ping 10.127.1.18

PING 10.127.1.18 (10.127.1.18) 56(84) bytes of data.

64 bytes from 10.127.1.18: icmp_seq=1 ttl=64 time=0.041 ms

64 bytes from 10.127.1.18: icmp_seq=2 ttl=64 time=0.058 ms

64 bytes from 10.127.1.18: icmp_seq=3 ttl=64 time=0.060 ms

--- 10.127.1.18 ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2080ms

rtt min/avg/max/mdev = 0.041/0.053/0.060/0.008 ms






[pgbigboss@staging-ha0002 ~]$ pcp_watchdog_info -h 10.127.1.18 -p 9898 -U
pgbigboss -W

Password:

3 3 YES ha0002: Linux staging-ha0002 ha0002

ha0002: Linux staging-ha0002 ha0002  9000 4 LEADER 0 MEMBER

ha0001: Linux staging-ha0001 ha0001  9000 7 STANDBY 0 MEMBER

ha0003: Linux staging-ha0003 ha0003  9000 7 STANDBY 0 MEMBER



*Standby Node (**ha0001**)*



[root@staging-ha0001 ~]# ping 10.127.1.18

PING 10.127.1.18 (10.127.1.18) 56(84) bytes of data.

From 10.127.1.10 icmp_seq=1 Destination Host Unreachable

From 10.127.1.10 icmp_seq=2 Destination Host Unreachable

From 10.127.1.10 icmp_seq=3 Destination Host Unreachable

--- 10.127.1.18 ping statistics ---

5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4126ms

pipe 3


[pgbigboss@staging-ha0001 ~]$ pcp_watchdog_info -h 10.127.1.18 -p 9898 -U
pgbigboss -W

Password:

ERROR: connection to host "10.127.1.18" failed with error "No route to host"



The VIP 10.127.1.18 is accessible from the leader node (ha0002) but not
from the standby nodes (ha0001 and ha0003).
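For what it's worth, 10.127.1.10 and 10.127.1.18 share the same network under a /24 (and indeed under the original /26 as well), which a quick check confirms (a sketch), so the "Destination Host Unreachable" from the standbys appears to point at ARP or interface state (the arping announcement, the eth0:1 label) rather than at the subnet arithmetic alone:

```shell
# Do two IPv4 addresses share the same network under a given prefix length?
ip_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
vip=$(ip_to_int 10.127.1.18)        # delegate_ip
standby=$(ip_to_int 10.127.1.10)    # standby's own address, from the ping output

if [ $(( vip & mask )) -eq $(( standby & mask )) ]; then
  echo "same /$prefix network"      # printed for these two addresses
else
  echo "different networks"
fi
```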



Thanks

Vijay

On Tue, May 28, 2024 at 11:31 PM Adrian Klaver 
wrote:

>
>
> On 5/28/24 1:31 AM, vijay patil wrote:
> >
> > HI Team,
> >
> > "I'm encountering challenges while configuring Pgpool with high
> > availability. The initial setup is completed, and Pgpool is operational
> > on a single node, functioning without issues. However, upon attempting
> > to start Pgpool on any additional nodes, particularly node 2, it becomes
> > immediately unreachable.
>
> And how we are supposed to arrive at an answer with essentially no
> information provided?
>
> Need:
>
> 1) Configuration for initial setup.
>
> 2) A more detailed explanation of what "... upon attempting
> to start Pgpool on any additional nodes" means? Include configuration
> changes.
>
> 3) The error messages.
>
> 4) Where the nodes are located?
>
>
> >
> > I'm seeking assistance to address this issue. My setup consists of three
> > nodes, each hosting both PostgreSQL and Pgpool services."
> >
> >
> > Thanks
> >
> > Vijay
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Pgpool with high availability

2024-05-29 Thread vijay patil
Thanks, Muhammad!

I have uploaded the details here.

Thanks
Vijay

On Wed, May 29, 2024 at 10:58 AM Muhammad Imtiaz 
wrote:

> Hi,
>
> Can you please provide the Pgpool logs? Additionally, in a distributed
> environment, ensure that Pgpool is properly configured. You can follow
> these documented steps to set up the Pgpool configurations
> pgpool.conf,pcp.conf and pool_hba.conf .
>
> Regards,
> Muhammad Imtiaz
>
> On Tue, 28 May 2024 at 23:01, Adrian Klaver 
> wrote:
>
>>
>>
>> On 5/28/24 1:31 AM, vijay patil wrote:
>> >
>> > HI Team,
>> >
>> > "I'm encountering challenges while configuring Pgpool with high
>> > availability. The initial setup is completed, and Pgpool is operational
>> > on a single node, functioning without issues. However, upon attempting
>> > to start Pgpool on any additional nodes, particularly node 2, it
>> becomes
>> > immediately unreachable.
>>
>> And how we are supposed to arrive at an answer with essentially no
>> information provided?
>>
>> Need:
>>
>> 1) Configuration for initial setup.
>>
>> 2) A more detailed explanation of what "... upon attempting
>> to start Pgpool on any additional nodes" means? Include configuration
>> changes.
>>
>> 3) The error messages.
>>
>> 4) Where the nodes are located?
>>
>>
>> >
>> > I'm seeking assistance to address this issue. My setup consists of
>> three
>> > nodes, each hosting both PostgreSQL and Pgpool services."
>> >
>> >
>> > Thanks
>> >
>> > Vijay
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>>
Pgpool.conf ==

listen_addresses = '*'
port = 
pcp_listen_addresses = '*'
backend_hostname0 = 'ha0001'
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'ha0002'
backend_port1 = 5432
backend_weight1 = 1
backend_hostname2 = 'ha0003'
backend_port2 = 5432
backend_weight2 = 1
load_balance_mode = 'on'
master_slave_mode = 'on'
master_slave_sub_mode = 'stream'
sr_check_period = 10
sr_check_user = 'monitor'
sr_check_password = 'Monitor@321'
sr_check_database = 'monitor'
delay_threshold = 10240
allow_clear_text_frontend_auth = 'on'
pid_file = '/var/run/pgpool/pgpool.pid'
logging_collector = on
log_directory = '/u01/app/admin/Data/PG_LOGS'
log_filename = 'pgpool.log'
num_init_children = 30
enable_pool_hba = on
pool_passwd = '/usr/local/etc/pool_passwd'
health_check_period = 30
health_check_timeout = 20
health_check_user = 'pgbigboss'
health_check_password = 'pgbigboss'
health_check_database = 'postgres'
health_check_max_retries = 3
health_check_retry_delay = 1
connect_timeout = 1
statement_level_load_balance = on
use_watchdog = on
delegate_ip = '10.127.1.18'
if_up_cmd = '/sbin/ip addr add $_IP_$/24 dev eth0 label eth0:1'
if_down_cmd = '/sbin/ip addr del $_IP_$/24 dev eth0'
arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I eth0 -c 3'
hostname0 = 'ha0001'
wd_port0 = 9000
pgpool_port0 = 
hostname1 = 'ha0002'
wd_port1 = 9000
pgpool_port1 = 
hostname2 = 'ha0003'
wd_port2 = 9000
pgpool_port2 = 
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
heartbeat_hostname0 = 'ha0001'
heartbeat_port0 = 9694
heartbeat_device0 = ''
heartbeat_hostname1 = 'ha0002'
heartbeat_port1 = 9694
heartbeat_device1 = ''
heartbeat_hostname2 = 'ha0003'
heartbeat_port2 = 9694
heartbeat_device2 = ''
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
#trusted_servers = ''
trusted_server_command = 'ping -q -c3 %h'
backend_clustering_mode = 'streaming_replication'

[root@staging-ha0001 etc]#

#
pgpool_node_id

[root@staging-ha0001 etc]# cat pgpool_node_id
0
[root@staging-ha0001 etc]#
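A note on this file: each pgpool host must carry its own distinct watchdog id (0 on ha0001, 1 on ha0002, 2 on ha0003) matching the hostnameN entries in pgpool.conf; copying pgpool_node_id unchanged to the other nodes is a common reason a second pgpool instance dies immediately at startup. A minimal per-node sanity check, assuming the /usr/local/etc prefix used elsewhere in this config:

```shell
# Run on every node: each host should report a different id.
NODE_ID_FILE=/usr/local/etc/pgpool_node_id   # adjust to your install prefix
if [ -f "$NODE_ID_FILE" ]; then
    echo "watchdog node id on $(hostname): $(cat "$NODE_ID_FILE")"
else
    echo "missing $NODE_ID_FILE - the pgpool watchdog will not start"
fi
```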

#

pool_hba.conf

local   all all   trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
hostall all ::1/128   trust
hostall all 10.127.1.10/32trust
hostall all 10.127.1.11/32 trust
hostall all 10.127.1.13/32 trust



pcp.conf

pgbigboss:66c7766130b3cf99e9ed3fde1a9f59de





Leader_pgpool.log
Description: Binary data


Standby1_pgpool.log
Description: Binary data


Standby_pgpool.log
Description: Binary data


Re: Memory issues with PostgreSQL 15

2024-05-29 Thread Tom Lane
Christian Schröder  writes:
> # ipcs -m

> -- Shared Memory Segments --
> key        shmid  owner     perms  bytes  nattch  status
> 0x04000194 35     postgres  600    56     19

> I am surprised to see this since I would have expected much more shared 
> memory to be used by the database. Is there anything in the configuration 
> that prevents the shared memory from being used?

SysV shared memory isn't that relevant to Postgres anymore.  Most
of what we allocate goes into POSIX-style shared memory segments,
which are not shown by "ipcs".  We do still create one small
fixed-size data structure in SysV memory, which is what you're
seeing here, for arcane reasons having to do with the lifespan of
the shared memory segments being different in those two APIs.
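A quick way to see the memory that "ipcs" misses: on Linux, PostgreSQL's POSIX shared memory segments appear as files under /dev/shm, so they can be listed directly.

```shell
# SysV segments are what ipcs shows; the bulk of PostgreSQL's dynamic
# shared memory is POSIX shm, visible as files under /dev/shm on Linux.
ls -l /dev/shm 2>/dev/null | grep 'PostgreSQL' \
  || echo "no PostgreSQL POSIX segments visible (server not running here?)"
```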

>> <2024-05-21 11:34:46 CEST - mailprocessor> ERROR:  could not resize
>> shared memory segment "/PostgreSQL.2448337832" to 182656 bytes: No
>> space left on device

This seems to indicate that you're hitting some kernel limit on
the amount of POSIX shared memory.  Not sure where to look for
that.
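One place such a limit commonly bites is the tmpfs backing /dev/shm: if it is full or sized too small (containers in particular often cap it), resizing a segment fails with exactly this error. A sketch of how one might inspect and enlarge it; the 2G figure is illustrative, not a recommendation:

```shell
# How large, and how full, is the tmpfs that backs POSIX shared memory?
df -h /dev/shm 2>/dev/null || echo "/dev/shm not present on this system"

# Enlarge it on a running system (as root; not persistent across reboots):
#   mount -o remount,size=2G /dev/shm
# Persistent variant via /etc/fstab:
#   tmpfs  /dev/shm  tmpfs  defaults,size=2G  0 0
```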

regards, tom lane




Re: Dll libpq.dll 32 bits

2024-05-29 Thread Juan Rodrigo Alejandro Burgos Mella
you can use ODBC with Windev, so the best is the Postgresql connector
https://www.postgresql.org/ftp/odbc/releases/

Atte
JRBM

On Wed, May 29, 2024 at 14:46, José Mello Júnior (<jose.mello.jun...@gmail.com>) wrote:

>
> On Wed, May 29, 2024 at 15:41, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>
>> On 5/29/24 11:18, José Mello Júnior wrote:
>>
>> Again reply to list by using Reply All.
>> Ccing list.
>> > For my projects in 32 bits, the language of PC-SOFT (wlanguage) use.
>>
>> Are you talking about this?:
>>
>> https://windev.com/pcsoft/index.html
>>
>> >
>> > On Wed, May 29, 2024 at 15:09, Adrian Klaver
>> > <adrian.kla...@aklaver.com> wrote:
>> >
>> >
>> >
>> > On 5/29/24 10:59 AM, José Mello Júnior wrote:
>> >
>> > Reply to list also.
>> > Ccing list.
>> >
>> >  > Sorry, for windows
>> >
>> > That is the OS, what I am after is what client on Windows do you
>> need
>> > the DLL for?
>> >
>> >  >
>> >  >
>> >  >
>> >  > On Wed, May 29, 2024 at 14:53, Adrian Klaver
>> >  > <adrian.kla...@aklaver.com> wrote:
>> >  >
>> >  >
>> >  >
>> >  > On 5/29/24 10:49 AM, José Mello Júnior wrote:
>> >  >  > Where do i found this dll for acess postgresql 15?
>> >  >
>> >  > Access Postgres using what client?
>> >  >
>> >  > --
>> >  > Adrian Klaver
>> >  > adrian.kla...@aklaver.com
>> >  >
>> >
>> > --
>> > Adrian Klaver
>> > adrian.kla...@aklaver.com 
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>


Re: Pgpool with high availability

2024-05-29 Thread Kashif Zeeshan
Hi Vijay

The logs show that all the nodes (master, standby and standby1) are now able
to communicate. What issue are you facing now?

Regards
Kashif Zeeshan

On Thu, May 30, 2024 at 2:00 AM vijay patil 
wrote:

> Thanks Muhammad !
>
> Here i uploaded details.
>
> Thanks
> Vijay
>
> On Wed, May 29, 2024 at 10:58 AM Muhammad Imtiaz 
> wrote:
>
>> Hi,
>>
>> Can you please provide the Pgpool logs? Additionally, in a distributed
>> environment, ensure that Pgpool is properly configured. You can follow
>> these documented steps to set up the Pgpool configurations
>> pgpool.conf,pcp.conf and pool_hba.conf .
>>
>> Regards,
>> Muhammad Imtiaz
>>
>> On Tue, 28 May 2024 at 23:01, Adrian Klaver 
>> wrote:
>>
>>>
>>>
>>> On 5/28/24 1:31 AM, vijay patil wrote:
>>> >
>>> > HI Team,
>>> >
>>> > "I'm encountering challenges while configuring Pgpool with high
>>> > availability. The initial setup is completed, and Pgpool is
>>> operational
>>> > on a single node, functioning without issues. However, upon attempting
>>> > to start Pgpool on any additional nodes, particularly node 2, it
>>> becomes
>>> > immediately unreachable.
>>>
>>> And how we are supposed to arrive at an answer with essentially no
>>> information provided?
>>>
>>> Need:
>>>
>>> 1) Configuration for initial setup.
>>>
>>> 2) A more detailed explanation of what "... upon attempting
>>> to start Pgpool on any additional nodes" means? Include configuration
>>> changes.
>>>
>>> 3) The error messages.
>>>
>>> 4) Where the nodes are located?
>>>
>>>
>>> >
>>> > I'm seeking assistance to address this issue. My setup consists of
>>> three
>>> > nodes, each hosting both PostgreSQL and Pgpool services."
>>> >
>>> >
>>> > Thanks
>>> >
>>> > Vijay
>>> >
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>>>


Re: tds_fdw >> Install Foreign data Wrapper on EDB Postgres to connect to SQL server database

2024-05-29 Thread Kashif Zeeshan
Hi Sumit

Please make sure the Developer Toolset repo is enabled; the package is not
getting installed because it cannot be found without that repo.

Regards
Kashif Zeeshan
Bitnine Global

On Wed, May 29, 2024 at 10:10 PM Sumit Kochar 
wrote:

> Hi Muhammad,
>
>
>
> Tried to install as per step2 mentioned, but got message - * No package
> llvm-toolset-7 available.*
>
>
>
> *Log reference below:*
>
>
>
> *[root@hostip tds_fdw-2.0.3]# yum list installed | grep llvm-toolset*
>
> *This system is not registered with Spacewalk server.*
>
> *You can use rhn_register to register.*
>
> *Spacewalk support will be disabled.*
>
> *Repository mc-ol7-base-x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7-layered-x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the
> configuration*
>
> *[root@hostip tds_fdw-2.0.3]#*
>
> *[root@hostip tds_fdw-2.0.3]#*
>
> *[root@hostip tds_fdw-2.0.3]# sudo yum install llvm-toolset-7*
>
> *Loaded plugins: rhnplugin*
>
> *This system is not registered with Spacewalk server.*
>
> *You can use rhn_register to register.*
>
> *Spacewalk support will be disabled.*
>
> *Repository mc-ol7-base-x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7-layered-x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the
> configuration*
>
> *Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the
> configuration*
>
> *No package llvm-toolset-7 available.*
>
> *Error: Nothing to do*
>
> *[root@hostip tds_fdw-2.0.3]#*
>
> *[root@hostip tds_fdw-2.0.3]#*
>
>
>
> *From:* Muhammad Imtiaz 
> *Sent:* Tuesday, May 28, 2024 11:43 PM
> *To:* Ian Lawrence Barwick 
> *Cc:* Sumit Kochar ;
> pgsql-general@lists.postgresql.org
> *Subject:* {EXTERNAL} Re: tds_fdw >> Install Foreign data Wrapper on EDB
> Postgres to connect to SQL server database
>
>
>
>
> Hi ,
>
> You need to install the  LLVM toolset.
>
> 1)Check that if it is installed or not :
>
> yum list installed | grep llvm-toolset
>
>
>
> 2)If it is not installed, you can install it using the following command.
>
> sudo yum install llvm-toolset-7
>
>
>
> Regards,
>
> Muhammad Imtiaz
>
>
>
>
>
>
>
>
>
> On Wed, 29 May 2024 at 05:27, Ian Lawrence Barwick 
> wrote:
>
On Wed, May 29, 2024 at 6:10, Sumit Kochar wrote:
> >
> > Installation of Foreign data Wrapper on EDB Postgres to connect to SQL
> server database is not working.
> >
> >
> >
> > https://github.com/tds-fdw/tds_fdw/issues/357
> 
> >
> >
> >
> > Please advise if this has been encountered or a workaround is available.
> (...)
>
> > [root@hostip tds_fdw-2.0.3]# make USE_PGXS=1 install
> >
> > /opt/rh/llvm-toolset-7/root/usr/bin/clang -Wno-ignored-attributes
> -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2
> -I./include/ -fvisibility=hidden  -I. -I./ -I/usr/edb/as13/include/server
> -I/usr/edb/as13/include/internal -I/usr/libexec/edb-icu66/include/
> -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -flto=thin -emit-llvm
> -c -o src/tds_fdw.bc src/tds_fdw.c
> >
> > make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found
>
> IIRC you'll need to find and install the SCLO package for Oracle Linux
> 7, assuming it's
> available. For CentOS it's this repository:
> http://mirror.centos.org/centos/7/sclo/
> 
> .
>
> Regards
>
> Ian Barwick
>
> CONFIDENTIALITY NOTICE This e-mail message and any attachments are only
> for the use of the intended recipient and may contain information that is
> privileged, confidential or exempt from disclosure under applicable law. If
> you are not the intended recipient, any disclosure, distribution or other
> use of this e-mail message or attachments is prohibited. If you have
> received this e-mail message in error, please delete and notify the sender
> immediately. Thank you.
>


Re: tds_fdw >> Install Foreign data Wrapper on EDB Postgres to connect to SQL server database

2024-05-29 Thread Kashif Zeeshan
Please refer to the following links for help

https://access.redhat.com/solutions/3500401
https://stackoverflow.com/questions/72778135/install-llvm-toolset-10-0-or-newer-in-centos7
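Concretely, the sequence those links describe boils down to enabling the Software Collections repo and then installing the toolset. This is a hedged sketch for CentOS 7, guarded so it is a no-op elsewhere; on Oracle Linux 7 the equivalent channel is ol7_software_collections:

```shell
if command -v yum >/dev/null 2>&1; then
    # Adds the SCLo repos on CentOS 7 (Oracle Linux 7: enable the
    # ol7_software_collections channel via yum-config-manager instead).
    sudo yum install -y centos-release-scl || true
    sudo yum install -y llvm-toolset-7 || true
    # The compiler then lives inside an SCL-enabled shell, which is
    # where /opt/rh/llvm-toolset-7/root/usr/bin/clang comes from:
    scl enable llvm-toolset-7 'clang --version' || true
else
    echo "yum not found; this sketch targets RHEL/CentOS/Oracle Linux 7"
fi
```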

Regards
Kashif Zeeshan
Bitnine Global


On Thu, May 30, 2024 at 8:50 AM Kashif Zeeshan 
wrote:

> Hi Sumit
>
> Please make sure Developer Toolset Repo is enable, the package is not
> getting installed as its not being found due to the missing Repo.
>
> Regards
> Kashif Zeeshan
> Bitnine Global
>
> On Wed, May 29, 2024 at 10:10 PM Sumit Kochar 
> wrote:
>
>> Hi Muhammad,
>>
>>
>>
>> Tried to install as per step2 mentioned, but got message - * No package
>> llvm-toolset-7 available.*
>>
>>
>>
>> *Log reference below:*
>>
>>
>>
>> *[root@hostip tds_fdw-2.0.3]# yum list installed | grep llvm-toolset*
>>
>> *This system is not registered with Spacewalk server.*
>>
>> *You can use rhn_register to register.*
>>
>> *Spacewalk support will be disabled.*
>>
>> *Repository mc-ol7-base-x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7-layered-x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the
>> configuration*
>>
>> *[root@hostip tds_fdw-2.0.3]#*
>>
>> *[root@hostip tds_fdw-2.0.3]#*
>>
>> *[root@hostip tds_fdw-2.0.3]# sudo yum install llvm-toolset-7*
>>
>> *Loaded plugins: rhnplugin*
>>
>> *This system is not registered with Spacewalk server.*
>>
>> *You can use rhn_register to register.*
>>
>> *Spacewalk support will be disabled.*
>>
>> *Repository mc-ol7-base-x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7-layered-x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7_tsm-8.1.13_x86_64 is listed more than once in the
>> configuration*
>>
>> *Repository mc-ol7-chef_client_15-x86_64 is listed more than once in the
>> configuration*
>>
>> *No package llvm-toolset-7 available.*
>>
>> *Error: Nothing to do*
>>
>> *[root@hostip tds_fdw-2.0.3]#*
>>
>> *[root@hostip tds_fdw-2.0.3]#*
>>
>>
>>
>> *From:* Muhammad Imtiaz 
>> *Sent:* Tuesday, May 28, 2024 11:43 PM
>> *To:* Ian Lawrence Barwick 
>> *Cc:* Sumit Kochar ;
>> pgsql-general@lists.postgresql.org
>> *Subject:* {EXTERNAL} Re: tds_fdw >> Install Foreign data Wrapper on EDB
>> Postgres to connect to SQL server database
>>
>>
>>
>>
>> Hi ,
>>
>> You need to install the  LLVM toolset.
>>
>> 1)Check that if it is installed or not :
>>
>> yum list installed | grep llvm-toolset
>>
>>
>>
>> 2)If it is not installed, you can install it using the following command.
>>
>> sudo yum install llvm-toolset-7
>>
>>
>>
>> Regards,
>>
>> Muhammad Imtiaz
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, 29 May 2024 at 05:27, Ian Lawrence Barwick 
>> wrote:
>>
>> On Wed, May 29, 2024 at 6:10, Sumit Kochar wrote:
>> >
>> > Installation of Foreign data Wrapper on EDB Postgres to connect to SQL
>> server database is not working.
>> >
>> >
>> >
>> > https://github.com/tds-fdw/tds_fdw/issues/357
>> 
>> >
>> >
>> >
>> > Please advise if this has been encountered or a workaround is available.
>> (...)
>>
>> > [root@hostip tds_fdw-2.0.3]# make USE_PGXS=1 install
>> >
>> > /opt/rh/llvm-toolset-7/root/usr/bin/clang -Wno-ignored-attributes
>> -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2
>> -I./include/ -fvisibility=hidden  -I. -I./ -I/usr/edb/as13/include/server
>> -I/usr/edb/as13/include/internal -I/usr/libexec/edb-icu66/include/
>> -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -flto=thin -emit-llvm
>> -c -o src/tds_fdw.bc src/tds_fdw.c
>> >
>> > make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found
>>
>> IIRC you'll need to find and install the SCLO package for Oracle Linux
>> 7, assuming it's
>> available. For CentOS it's this repository:
>> http://mirror.centos.org/centos/7/sclo/
>> 
>> .
>>
>> Regards
>>
>> Ian Barwick
>>
>> CONFIDENTIALITY NOTICE This e-mail message and any attachments are only
>> for the use of the intended recipient and may contain information that is
>> privileged, confidential or exempt from disclosure under applicable law. If
>> you are not the intended recipient, any disclosure, distribution or other
>> use of this e-mail message or attachments is prohibited. If you have
>> received this e-mail message in error, pl

Re: Pgpool with high availability

2024-05-29 Thread vijay patil
HI Kashif,

The VIP 10.127.1.18 is accessible from the leader node (ha0002) only, not
from the standby nodes (ha0001 and ha0003).

Thanks
Vijay

On Thu, May 30, 2024 at 9:17 AM Kashif Zeeshan 
wrote:

> Hi Vijay
>
> The log shows that all the nodes are able to communicate now (master,
> standby and standby1) and now what are the issue you are facing?
>
> Regards
> Kashif Zeeshan
>
> On Thu, May 30, 2024 at 2:00 AM vijay patil 
> wrote:
>
>> Thanks Muhammad !
>>
>> Here i uploaded details.
>>
>> Thanks
>> Vijay
>>
>> On Wed, May 29, 2024 at 10:58 AM Muhammad Imtiaz 
>> wrote:
>>
>>> Hi,
>>>
>>> Can you please provide the Pgpool logs? Additionally, in a distributed
>>> environment, ensure that Pgpool is properly configured. You can follow
>>> these documented steps to set up the Pgpool configurations
>>> pgpool.conf,pcp.conf and pool_hba.conf .
>>>
>>> Regards,
>>> Muhammad Imtiaz
>>>
>>> On Tue, 28 May 2024 at 23:01, Adrian Klaver 
>>> wrote:
>>>


 On 5/28/24 1:31 AM, vijay patil wrote:
 >
 > HI Team,
 >
 > "I'm encountering challenges while configuring Pgpool with high
 > availability. The initial setup is completed, and Pgpool is
 operational
 > on a single node, functioning without issues. However, upon
 attempting
 > to start Pgpool on any additional nodes, particularly node 2, it
 becomes
 > immediately unreachable.

 And how we are supposed to arrive at an answer with essentially no
 information provided?

 Need:

 1) Configuration for initial setup.

 2) A more detailed explanation of what "... upon attempting
 to start Pgpool on any additional nodes" means? Include configuration
 changes.

 3) The error messages.

 4) Where the nodes are located?


 >
 > I'm seeking assistance to address this issue. My setup consists of
 three
 > nodes, each hosting both PostgreSQL and Pgpool services."
 >
 >
 > Thanks
 >
 > Vijay
 >

 --
 Adrian Klaver
 adrian.kla...@aklaver.com





Re: Memory issues with PostgreSQL 15

2024-05-29 Thread Andrea Gelmini
On Tue, May 28, 2024 at 18:40, Christian Schröder wrote:
> Our PostgreSQL version is "PostgreSQL 15.5 on x86_64-pc-linux-gnu, compiled 
> by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit". The database server 
> is a dedicated server with 15 GB RAM (and 4 cores, if this matters).

Maybe you have PostgreSQL running inside a container with capped
resources (I saw this on some recent distro, running it from systemd)?
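A quick way to check for such a cap, both at the systemd level and at the cgroup level. The unit name "postgresql" below is an assumption; substitute whatever your distribution actually calls the service:

```shell
# Does systemd impose a memory cap on the service? "infinity" means no cap.
systemctl show postgresql -p MemoryMax 2>/dev/null \
  || echo "systemctl not available here"

# The effective cgroup limit can also be read directly.
# First path is cgroup v2, second is cgroup v1; "max" means unlimited.
cat /sys/fs/cgroup/memory.max 2>/dev/null \
  || cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null \
  || echo "no cgroup memory limit file found"
```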