after psqlodbc upgrade - Which change solves the error: -2147217887

2021-07-09 Thread gzh
Hi,
 
I have had an ASP website working for the last five years with no problems,
until at the weekend I replaced my database server with a newer one.
 
Database server (old): Postgres 9.4 64bit
Database server (new): Postgres 12.6 64bit
 
The website connected to PostgreSQL 9.4 64bit with psqlodbc_10_03_-x86 (32bit).
Since the upgrade I have been getting the following error message.
 
Number: -2147217887
Description: Multiple-step OLE DB operation generated errors.
 
I found that the problem occurred in the third step:
Step 1. Starts a new transaction with the BeginTrans method.
Step 2. Creates an OraDynaset object from the specified SQL SELECT statement 
SQL-A.
Step 3. Creates an OraDynaset object from the specified SQL SELECT statement 
SQL-B.
Step 4. Ends the current transaction with the CommitTrans method.
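 
For clarity, a rough sketch (my assumption about what the driver sends over
the wire, not taken from an ODBC trace) of the statement sequence behind
these steps; table_a and table_b are hypothetical stand-ins for SQL-A and
SQL-B:

BEGIN;                    -- Step 1: BeginTrans
SELECT * FROM table_a;    -- Step 2: SQL-A (hypothetical stand-in) opens the first dynaset
SELECT * FROM table_b;    -- Step 3: SQL-B (hypothetical stand-in); the error surfaces here
COMMIT;                   -- Step 4: CommitTrans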
 
I upgraded to a newer version of the ODBC driver, psqlodbc_11_00_-x86 (32bit),
on my site, and everything appears to be working fine now.
 
psqlODBC 11.00 release notes
 
Changes:

1. Remove obsolete maps pointed out.
   POWER -> pow, CONCAT -> textcat, LEFT -> ltrunc, RIGHT -> rtrunc
   Patch by Daniel Cory.
2. Remove the connSettings option and/or pqopt option from the OutConnectionString
   parameter of SQLDriverConnect() when the option doesn't exist in the
   InConnectionString parameter.
3. The parameters should be cast because parameters of the concat() function are
   variadic "any".
4. Unbuffered I/O in Windows is incredibly slow. Instead call fflush() after
   fprintf().
5. Add an alias DX of the *Database* keyword for connection strings, to avoid
   the use of the "database" keyword, which has a special meaning in some apps
   or middleware.
6. numeric items without precision are unlimited, and there's no natural map
   between SQL data types.
   Add an option *Numeric(without precision) as*.
7. Fix a bug where SQLSpecialColumns() returns oid/xmin incorrectly when a
   table does not exist.
   Patch by Quan Zongliang.

I would like to know which of these psqlODBC changes solves the error.
 
Regards,
 
--
gzh

Removing a subscription that does not exist

2021-07-09 Thread Jeff Ross

Hello,

I'm working with an RDS instance running 12 that has an old subscription 
that I can't seem to drop.


The logs show this, repeating every 5 seconds or so.

2021-07-09 16:08:07 UTC::@:[1637]:LOG: logical replication apply worker 
for subscription "metro" has started
2021-07-09 16:08:07 UTC::@:[1637]:ERROR: could not connect to the 
publisher: could not connect to server: Connection refused

Is the server running on host "dbp3" (108.200.30.101) and accepting
TCP/IP connections on port 5433?

dbp3 is long gone--the server no longer exists.

It shows up here:

mirror_admin@metro_logical> select * from pg_subscription;
  oid  │ subdbid │     subname     │ subowner │ subenabled │                          subconninfo                           │   subslotname   │ subsynccommit │    subpublications
───────┼─────────┼─────────────────┼──────────┼────────────┼────────────────────────────────────────────────────────────────┼─────────────────┼───────────────┼────────────────────────
 83645 │   66754 │ cargowel_common │    16394 │ t          │ host=108.200.30.103 port=5433 user=postgres dbname=metro_prod │ cargowel_common │ off           │ {cargowel_common_prod}
 83646 │   66754 │ metro_prod      │    16394 │ t          │ host=108.200.30.103 port=5433 user=postgres dbname=metro_prod │ metro_prod      │ off           │ {metro_prod}
 51490 │   14313 │ metro           │    16394 │ t          │ dbname=metro host=dbp3 port=5433 user=repmgr                  │ metro           │ off           │ {metro}
(3 rows)

Time: 28.627 ms

But not in here:

mirror_admin@metro_logical> \dRs+
                                              List of subscriptions
      Name       │    Owner     │ Enabled │      Publication       │ Synchronous commit │           Conninfo
─────────────────┼──────────────┼─────────┼────────────────────────┼────────────────────┼──────────────────────────────
 cargowel_common │ mirror_admin │ t       │ {cargowel_common_prod} │ off                │ host=108.200.30.103 port=5433 user=postgres dbname=metro_prod
 metro_prod      │ mirror_admin │ t       │ {metro_prod}           │ off                │ host=108.200.30.103 port=5433 user=postgres dbname=metro_prod
(2 rows)

And it can't be disabled or dropped:

mirror_admin@metro_logical> alter subscription metro disable;
ERROR:  subscription "metro" does not exist
Time: 24.263 ms
mirror_admin@metro_logical> drop subscription metro;
ERROR:  subscription "metro" does not exist
Time: 23.648 ms

I did try deleting it directly from the pg_subscription table but that 
failed with a permission denied error.  My suspicion is that's because 
of the RDS environment.
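
One detail I notice in the pg_subscription output above: "metro" has subdbid
14313 while the two working subscriptions have 66754, so it seems to belong
to a different database, and \dRs+ and DROP SUBSCRIPTION only see
subscriptions of the database you are connected to. A sketch of what that
suggests (standard PostgreSQL commands; whether RDS permits them I don't
know):

-- find the database that subdbid 14313 points at
select datname from pg_database where oid = 14313;

-- then, connected to that database, detach the dead slot and drop
alter subscription metro disable;
alter subscription metro set (slot_name = NONE);
drop subscription metro;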


What else can I try to remove this old non-functional subscription?

Thanks,

Jeff Ross


Re: optimization issue

2021-07-09 Thread Kenneth Marshall
On Fri, Jul 09, 2021 at 01:16:16PM +1000, rob stone wrote:
> Hello,
> 
> I am curious.
> 
> NVL, DECODE and SELECT FROM dual are Oracle methods and these appear in
> your code.
> 
> How did you make these work in Postgres? 
> 
> Cheers,
> Rob

Hi Rob,

At a guess, they are using the Orafce PostgreSQL extension. It provides
a lot of compatibility functions.
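
For example, a minimal sketch (assuming orafce is installed; depending on the
version these functions may live in a dedicated "oracle" schema) of the
Oracle-style calls it enables:

create extension if not exists orafce;

select nvl(null::text, 'fallback');    -- Oracle-style NVL
select decode(2, 1, 'one', 2, 'two');  -- Oracle-style DECODE
select * from dual;                    -- orafce also provides "dual"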

Regards,
Ken




How to debug a connection that's "active" but hanging?

2021-07-09 Thread Jurrie Overgoor

Hi everyone,

We are in the process of upgrading from PostgreSQL 9.6 to 13. When our 
database gets created in our regression tests, we run some unit tests 
first. We see one of those tests hang.


It seems the client is waiting on more data to arrive from the 
PostgreSQL server. A thread dump shows it waiting on the socket.


On the server, I see the connection from the client. Looking at 
pg_stat_activity, I see it is in state 'active'. I have seen query_start 
be as old as eight hours ago. The documentation states that 'active' 
means a query is being executed. The query in question is:


select distinct(constraint_name) CONSTRAINT_NAME
from information_schema.key_column_usage
where constraint_name in (
    select rc.constraint_name
    from information_schema.referential_constraints rc
    inner join information_schema.table_constraints tc
        on tc.constraint_name = rc.constraint_name
    inner join information_schema.table_constraints tc2
        on tc2.constraint_name = rc.unique_constraint_name
    where tc2.constraint_type = 'PRIMARY KEY'
) and lower(column_name) like 'xxx_%';


This query should be fast to execute. When I run it myself using 
'explain analyze', I get results like "(cost=4892.35..4892.35 rows=1 
width=64) (actual time=1669.432..1669.447 rows=0 loops=1)".


Looking at pg_locks, I only see locks with granted = true for the PID.
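
Concretely, the checks I mean (standard pg_stat_activity and pg_locks columns
in PostgreSQL 13; 30977 is the backend PID from the log snippet below):

-- what is the backend doing or waiting on?
select pid, state, wait_event_type, wait_event,
       now() - query_start as running_for
from pg_stat_activity
where pid = 30977;

-- are any of its lock requests still ungranted?
select locktype, relation::regclass, mode, granted
from pg_locks
where pid = 30977 and not granted;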

There is nothing in the logs as far as I can see. The configuration
variables log_min_messages and log_min_error_statement are set to 'debug1'.
This is a snippet of the logs:


2021-07-09 20:35:16.374 CEST [30399] STATEMENT:  START TRANSACTION
2021-07-09 20:35:18.703 CEST [30399] WARNING:  there is already a transaction in progress
2021-07-09 20:35:18.703 CEST [30399] STATEMENT:  START TRANSACTION
2021-07-09 20:35:26.398 CEST [30977] DEBUG:  rehashing catalog cache id 7 for pg_attribute; 257 tups, 128 buckets
2021-07-09 20:35:26.398 CEST [30977] STATEMENT:  select distinct(constraint_name) CONSTRAINT_NAME from information_schema.key_column_usage where constraint_name in (select rc.constraint_name from information_schema.referential_constraints rc inner join information_schema.table_constraints tc on tc.constraint_name = rc.constraint_name inner join information_schema.table_constraints tc2 on tc2.constraint_name = rc.unique_constraint_name where tc2.constraint_type = 'PRIMARY KEY') and lower(column_name) like 'xxx_%'
2021-07-09 20:35:26.399 CEST [30977] DEBUG:  rehashing catalog cache id 7 for pg_attribute; 513 tups, 256 buckets
2021-07-09 20:35:26.399 CEST [30977] STATEMENT:  select distinct(constraint_name) CONSTRAINT_NAME from information_schema.key_column_usage where constraint_name in (select rc.constraint_name from information_schema.referential_constraints rc inner join information_schema.table_constraints tc on tc.constraint_name = rc.constraint_name inner join information_schema.table_constraints tc2 on tc2.constraint_name = rc.unique_constraint_name where tc2.constraint_type = 'PRIMARY KEY') and lower(column_name) like 'xxx_%'
2021-07-09 20:35:33.170 CEST [31010] DEBUG:  autovacuum: processing 
database "wildfly"
2021-07-09 20:35:33.520 CEST [31010] DEBUG:  rehashing catalog cache id 
12 for pg_cast; 513 tups, 256 buckets
2021-07-09 20:35:33.520 CEST [31010] DEBUG:  rehashing catalog cache id 
14 for pg_opclass; 17 tups, 8 buckets
2021-07-09 20:35:33.648 CEST [31010] DEBUG:  rehashing catalog cache id 
7 for pg_attribute; 257 tups, 128 buckets
2021-07-09 20:36:03.182 CEST [31199] DEBUG:  autovacuum: processing 
database "postgres"
2021-07-09 20:36:33.200 CEST [31387] DEBUG:  autovacuum: processing 
database "wildfly"


I am a bit out of ideas - does anyone have tips on where I should look
to see what is causing the query to hang?


With kind regards,

Jurrie






Why can't I drop a tablespace?

2021-07-09 Thread Phil Endecott
Dear Experts,

Yesterday I had a disk-nearly-full problem, and decided to try to resolve 
it by moving one large table to a spare disk in a new tablespace:

=# create tablespace tempspace location "/db_temp";
=# alter table requests set tablespace tempspace;

That didn't work; I think disk space had actually reached zero:

PANIC:  could not write to file "pg_wal/xlogtemp.19369": No space left on device
STATEMENT:  alter table requests set tablespace tempspace;

So I shut down the database and resolved the problem in a more conventional 
way by resizing the filesystem. It is now (apparently) functioning normally.

BUT: I am unable to drop the tablespace that I created:

=# drop tablespace tempspace;
ERROR:  tablespace "tempspace" is not empty

On inspection /db_temp does contain a few GB of data that looks 
consistent with my table "requests" that I had tried to move.

postgres:/db_temp/PG_11_201809051$ ls -l 17829/
total 2894972
-rw------- 1 postgres postgres   32137216 Jul  8 18:35 486095
-rw------- 1 postgres postgres   37240832 Jul  8 18:57 494286
-rw------- 1 postgres postgres 1073741824 Jul  8 19:02 502478
-rw------- 1 postgres postgres 1073741824 Jul  8 19:03 502478.1
-rw------- 1 postgres postgres  747577344 Jul  8 19:03 502478.2

I can't find what is using it:

=# select * from pg_tables where tablespace = 'tempspace';
(0 rows)

17829 is the OID for the correct database, but I've not been able to find 
anything with OIDs corresponding to those filenames.
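
The sort of lookup I mean, for reference (pg_filenode_relation() maps a
tablespace OID plus filenode back to a relation in the current database);
both variants come back empty for me:

-- map an on-disk filenode back to a relation, if anything still references it
select pg_filenode_relation(ts.oid, 502478)
from pg_tablespace ts
where ts.spcname = 'tempspace';

-- or scan pg_class for the three filenodes directly
select relname, relkind, relfilenode
from pg_class
where relfilenode in (486095, 494286, 502478);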

I'm aware that things other than tables can be in tablespaces; I've 
also checked pg_indexes. But since I have only ever referred to this 
tablespace in that one failed command, there must be a limit to what it could be.

I have VACUUM FULLed this table during the recovery process, and I believe 
that changes the underlying relfilenode. I've also added a new column.

What should I do?

One thing I tried was

=# alter table all in tablespace tempspace set tablespace ???;

but that raises the question of what the tablespace name should be when 
moving to the default tablespace. (Is it even possible to move a table 
back to the default tablespace?)
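
If it is possible, my guess (a sketch, assuming the built-in default
tablespace, which is named pg_default, is the right target) would be:

=# alter table all in tablespace tempspace set tablespace pg_default;
=# drop tablespace tempspace;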

I am replicating from this database; could that cause any issues? 
(During my attempt to recover from the full disk, one issue that cropped 
up was that the replica needed me to create an empty directory for the 
new tablespace. That replica contains the same file names and all but one 
are identical.)

This is PostgreSQL 11.12 on Debian.

Thanks for any suggestions.


Regards, Phil.