Re: Open source licenses

2021-04-07 Thread Laurenz Albe
On Wed, 2021-04-07 at 06:41 +, DAVID Nicolas wrote:
> > > The goal is to make an inventory of all the open source licenses of
> > > all the components used, to check and respect their terms of use, and
> > > to preserve copyrights and intellectual property.
> > >
> > > However, when I get the PostgreSQL binaries for Windows (Zip archive
> > > linked to [EDB]), I can see in installation-notes.html:
> > >   -> "The software bundled together in this package is released under
> > > a number of different Open Source licences. By using any component of
> > > this installation package, you agree to abide by the terms and
> > > conditions of its licence."
> > >
> > > Could the PostgreSQL Global Development Group consider providing this
> > > information?
> >
> > These installation packages are provided by EnterpriseDB, not by the PGDG.
> >
> > I think your request is reasonable, but you'll have to ask the packager.
> 
> Yes, sure. I did that as well ... without an answer.

Not nice.

> But my initial question concerned only the open source components linked to
> the PostgreSQL server that are not under the PostgreSQL license
> (e.g. openssl, libcharset, ...).
> Regarding the other modules added by EDB, I will ask EDB again.

That depends on how PostgreSQL was configured.

It may be a bit cumbersome, but you could go through all the shared libraries
(DLLs) in the "bin" directory that do not belong to PostgreSQL.  The licenses
for software like OpenSSL should be easy to find.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





RE: Open source licenses

2021-04-07 Thread DAVID Nicolas
Yes, sure. I did that as well ... without an answer.
But my initial question concerned only the open source components linked to the
PostgreSQL server that are not under the PostgreSQL license (e.g. openssl,
libcharset, ...).
Regarding the other modules added by EDB, I will ask EDB again.

Best regards,

Nicolas DAVID

-Original Message-
From: Laurenz Albe  
Sent: 06 April 2021 16:13
To: DAVID Nicolas ; Adrian Klaver 
; pgsql-gene...@postgresql.org
Subject: Re: Open source licenses


On Tue, 2021-04-06 at 13:47 +, DAVID Nicolas wrote:
> My concern is, I guess, the same as for all software vendors using open
> source components.
>
> The goal is to make an inventory of all the open source licenses of all
> the components used, to check and respect their terms of use, and to
> preserve copyrights and intellectual property.
>
> However, when I get the PostgreSQL binaries for Windows (Zip archive
> linked to https://www.enterprisedb.com/download-postgresql-binaries),
> I can see in installation-notes.html:
>   -> "The software bundled together in this package is released under
> a number of different Open Source licences. By using any component of
> this installation package, you agree to abide by the terms and
> conditions of its licence."
>
> Could the PostgreSQL Global Development Group consider providing this
> information?
> Is there a team or a group in charge of this? Is there a direct email
> address for this kind of request?

These installation packages are provided by EnterpriseDB, not by the PGDG.

I think your request is reasonable, but you'll have to ask the packager.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com/





RE: Upgrade from 11.3 to 13.1 failed with out of memory

2021-04-07 Thread Mihalidesová Jana

Hi,

Thanks a lot for information.

Best regards,
 Jana

-Original Message-
From: Magnus Hagander  
Sent: Tuesday, April 6, 2021 3:23 PM
To: Mihalidesová Jana 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Upgrade from 11.3 to 13.1 failed with out of memory

On Tue, Apr 6, 2021 at 3:08 PM Mihalidesová Jana  
wrote:
>
> Hi,
>
>
>
> I have an approximately 560 GB database and am trying to upgrade it from
> 11.3 to 13.1. I've successfully upgraded the dev, test and ref environments,
> but on production pg_dump failed with out of memory. Yes, of course, dev,
> test and ref are much, much smaller than the production database.
>
> We are using the OID data type, so there are a lot of large objects;
> pg_largeobject alone is 59 GB.
>
> The upgrade process fails during the schema-only pg_dump, so I'm confused
> why the 35 GB of RAM that is free on the server is not enough when no data
> is dumped. When I tried to run the same pg_dump command by hand as during
> the upgrade, it failed at the line "pg_dump: reading large objects".
>
>
>
> Creating dump of global objects 
> "/pgsql/bin/13.1_/bin/pg_dumpall" --host /pgsql/data/ --port 50432 
> --username XX --globals-only --quote-all-identifiers --binary-upgrade 
> --verbose -f pg_upgrade_dump_globals.sql >> "pg_upgrade_utility.log" 2>&1
>
> ok
>
> Creating dump of database schemas
>
> "/pgsql/bin/13.1_/bin/pg_dump" --host /pgsql/data/ --port 
> 50432 --username XX --schema-only --quote-all-identifiers 
> --binary-upgrade --format=custom --verbose 
> --file="pg_upgrade_dump_16384.custom" 'dbname=' >> 
> "pg_upgrade_dump_16384.log" 2>&1
>
>
>
> *failure*
>
> There were problems executing ""/pgsql/bin/13.1_/bin/pg_dump" --host 
> /pgsql/data/ --port 50432 --username pgpnip --schema-only 
> --quote-all-identifiers --binary-upgrade --format=custom --verbose 
> --file="pg_upgrade_dump_16384.custom" 'dbname=' >> 
> "pg_upgrade_dump_16384.log" 2>&1"
>
>
>
>
>
> Do you have any idea how to upgrade the database? This is my upgrade command:
>
>
>
> /pgsql/bin/13.1_/bin/pg_upgrade -k -b /pgsql/bin/11.3_/bin -B 
> /pgsql/bin/13.1_/bin -d /pgsql/data/ -D 
> /pgsql/data//.new
>


This is unfortunately a known limitation in pg_dump (and therefore, by proxy,
a problem for pg_upgrade) when you have many large objects. It doesn't really
matter how big they are, it matters how *many* they are. It takes a long time
and uses crazy amounts of memory, but that's unfortunately where it's at.
You'd have the same problem with a plain dump/reload as well, not just in
"binary upgrade mode".

There's been some recent work on trying to find a remedy for this, but nothing
is available at this point. You'll need to either trim the number of objects if
you can (maybe by manually dumping them out to files before the restore and
then reloading them back in later), or just add more memory/swap to the machine.
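
For the dump-them-out-to-files route, a rough sketch using the server-side
large-object functions (this assumes superuser access, a server-writable
directory, /tmp/lo here, and that your application tables keep the OIDs so
the objects can be re-linked afterwards):

```
-- export every large object to a file named after its OID
SELECT lo_export(oid, '/tmp/lo/' || oid) FROM pg_largeobject_metadata;

-- free the large objects so pg_dump no longer has to track them
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;

-- ... run pg_upgrade, then re-import each file under its original OID,
-- so references stored in user tables stay valid, e.g.:
-- SELECT lo_import('/tmp/lo/16385', 16385);
```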

Long term you should probably consider switching to using bytea columns when 
you have that many objects.
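
A minimal sketch of such a migration (the table and column names are
invented; lo_get() exists since 9.4):

```
-- assume a table "docs" whose "body_oid" column holds large-object OIDs
ALTER TABLE docs ADD COLUMN body bytea;

UPDATE docs SET body = lo_get(body_oid);   -- copy each object's contents inline

SELECT lo_unlink(body_oid) FROM docs WHERE body_oid IS NOT NULL;

ALTER TABLE docs DROP COLUMN body_oid;
```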

--
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/


Using indexes through a plpgsql procedure

2021-04-07 Thread Ciaran Doherty
Hello,


Thank you for taking the time to read this.

As a quick question. Can Postgres make use of indexes on a table when the
data from that table is being returned from a procedure?

Some more details. I have a big table with long-running queries against it.
This table gets rebuilt from scratch on a weekly basis. Currently I cannot
update/replace the table in a timely manner; I have to wait for the
long-running queries to finish before we can update the table (the total
time can be 5+ hours).

I am currently trying to build a procedure which will return data from the
latest version of the table (by dynamically choosing the table name,
something like
https://stackoverflow.com/questions/35559093/how-to-use-variable-as-table-name-in-plpgsql).
To get a consistent return type I am having to use `create type` to build a
custom return type. The problem (I think) I am having is that the indexes on
the underlying tables are not usable after calling this procedure

(e.g.
```
select *
  from example_table as et
join example_procedure() as ep on et.example_column = ep.example_column
```
there will be no index on the column ep.example_column)

Is there a way to make sure indexes are used even if the data is accessed
through a procedure?

Thank you

-- 
while (e) { kyoatie(); }


Cascade publication with pg_stat_replication empty

2021-04-07 Thread PegoraroF10
I have a server which replicates using publication/subscription. On the
subscriber server I have some ETL processes running, and from this second
server I've created a new publication to send this database to a third
server. While the second server was copying data to the third with COPY,
everything was fine, but when the COPY finished and streaming was about to
start, both replications just stopped.

The pg_stat_replication view is empty on the first server, and the subscriber
server is not connected to it.
Exactly the same occurs on the second: its pg_stat_replication is empty, and
its subscriber, the third server, is not connected to it.

But WAL is accumulating on the first server, which suggests it is waiting for
the second to connect to it.

So, pg_publication is filled correctly on the publishers, but
pg_stat_replication is empty on both.

The first and second servers are version 11.11, and the third one is 13.0.







Re: Cascade publication with pg_stat_replication empty

2021-04-07 Thread Laurenz Albe
On Wed, 2021-04-07 at 04:16 -0700, PegoraroF10 wrote:
> I have a server which replicates using publication/subscription. On the
> subscriber server I have some ETL processes running, and from this second
> server I've created a new publication to send this database to a third
> server. While the second server was copying data to the third with COPY,
> everything was fine, but when the COPY finished and streaming was about to
> start, both replications just stopped.

Look into the PostgreSQL log on both machines.
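
Beyond the logs, a few statistics views can narrow down where the chain is
broken; a quick sketch (all of these exist in both 11 and 13):

```
-- on each publisher: the subscription's slot should exist and be active
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- on each subscriber: is the apply worker running, and how far has it got?
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
  FROM pg_stat_subscription;

-- and is the subscription still enabled?
SELECT subname, subenabled FROM pg_subscription;
```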

Yours,
Laurenz Albe





Re: Using indexes through a plpgsql procedure

2021-04-07 Thread Peter J. Holzer
On 2021-04-07 12:15:37 +0100, Ciaran Doherty wrote:
> As a quick question. Can Postgres make use of indexes on a table when the data
> from that table is being returned from a procedure?
> 
> Some more details. I have a big table with long-running queries against
> it. This table gets rebuilt from scratch on a weekly basis. Currently I
> cannot update/replace the table in a timely manner; I have to wait for
> the long-running queries to finish before we can update the table (the
> total time can be 5+ hours).
> 
> I am currently trying to build a procedure which will return data from
> the latest version of the table (by dynamically choosing the table name,
> something like
> https://stackoverflow.com/questions/35559093/how-to-use-variable-as-table-name-in-plpgsql)

So you are creating a new table for each update and drop older tables
after some time? And your procedure just returns data from the newest
table?


> To get a consistent return type I am having to use `create type` to
> build a custom return type. The problem (I think) I am having is that
> the indexes on the underlying tables are not usable after calling this
> procedure
> 
> (e.g.
> ```
> select *
>   from example_table as et
>     join example_procedure() as ep on et.example_column = ep.example_column
> ```
> there will be no index on the column ep.example_column)
> 
> Is there a way to make sure indexes are used even if the data is
> accessed through a procedure?

Any query within example_procedure() will be able to use an index, and the
join will in theory be able to use an index on example_table, although the
planner can't really get useful statistics (it doesn't know what fraction of
example_table will be covered by the output of example_procedure()), so it
might default to doing a full scan anyway.
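
For illustration, such a function might look like this (a sketch; the return
type, the registry table, and all names are invented):

```
CREATE OR REPLACE FUNCTION example_procedure()
RETURNS SETOF example_type AS $$
DECLARE
    tbl text;
BEGIN
    -- look up the name of this week's table (hypothetical registry table)
    SELECT latest_table INTO tbl FROM table_registry;
    -- the executed query can use any index on that table
    RETURN QUERY EXECUTE format('SELECT * FROM %I', tbl);
END;
$$ LANGUAGE plpgsql;
```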

I would probably use a view for this: After creating the new table, just
recreate the view to use the new table. 
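
A minimal sketch of that, with invented names:

```
-- after this week's rebuild finishes, repoint the stable name:
CREATE OR REPLACE VIEW example AS
    SELECT * FROM example_20210407;
```

Since a simple view is expanded inline by the planner, queries joining
against it see the new table's indexes and statistics directly.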

Or maybe you don't even need the view and can get away by renaming the
old and new table:

begin;
create table ep_new(...);
-- populate ep_new here
drop table if exists ep_old;
alter table ep rename to ep_old;
alter table ep_new rename to ep;
commit;

Partitioning should also work but that feels like a hack.

hp


-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




PostgreSQL log query's result size

2021-04-07 Thread Hellmuth Vargas
Excuse me in advance for my English, I'm improving :-)

Could you tell me whether, in addition to logging the duration of slow
queries, the log could also record the size of the result data? Especially
for queries that return a lot of information. My request is that, by setting
some parameter, the size of the result set could be logged, similar to the
log_min_duration_statement parameter.

Thank you very much!

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S


Re: PostgreSQL log query's result size

2021-04-07 Thread Peter J. Holzer
On 2021-04-07 09:53:11 -0500, Hellmuth Vargas wrote:
> Could you tell me whether, in addition to logging the duration of slow
> queries, the log could also record the size of the result data?

Not that I know of.

> Especially for queries that return a lot of information. My request is
> that, by setting some parameter, the size of the result set could be
> logged, similar to the log_min_duration_statement parameter.

I second that request. 

But in my case I think I am more frequently interested in finding
queries which returned 0 rows than those that returned many rows.

And for "size of the result data" I think the number of rows would
generally be more useful than the size in bytes.
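
Until something like that exists, pg_stat_statements is a partial
workaround: it tracks the total number of rows retrieved or affected per
statement (aggregate over all calls, not per execution). A sketch:

```
-- requires pg_stat_statements in shared_preload_libraries
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- frequently-run statements whose calls returned zero rows in total
SELECT query, calls, rows
  FROM pg_stat_statements
 WHERE rows = 0 AND calls > 100
 ORDER BY calls DESC;
```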

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Primary keys and composite unique keys(basic question)

2021-04-07 Thread Ron

On 4/5/21 9:37 PM, Rob Sargent wrote:

It's a small thing, but UUIDs are absolutely not memorizable by
humans; they have zero semantic value.  Sequential numeric identifiers
are generally easier to transpose and the value gives some clues to
its age (of course, in security contexts this can be a downside).

I take the above as a definite plus.  Spent too much of my life correcting 
others’ use of “remembered” IDs that just happened to perfectly match the 
wrong thing.


People seem to have stopped appending check digits to identifiers about 20 
years ago, and I'm not sure why.


--
Angular momentum makes the world go 'round.


Re: Primary keys and composite unique keys(basic question)

2021-04-07 Thread Rob Sargent


> On Apr 7, 2021, at 10:17 AM, Ron  wrote:
> 
>  On 4/5/21 9:37 PM, Rob Sargent wrote:
>>> It's a small thing, but UUIDs are absolutely not memorizable by
>>> humans; they have zero semantic value.  Sequential numeric identifiers
>>> are generally easier to transpose and the value gives some clues to
>>> its age (of course, in security contexts this can be a downside).
>>> 
>> I take the above as a definite plus.  Spent too much of my life correcting 
>> others’ use of “remembered” IDs that just happened to perfectly match the 
>> wrong thing.
> 
> People seem to have stopped appending check digits to identifiers about 20 
> years ago, and I'm not sure why.
> 
No, the problem is “start from one”. A user has item/id 10875 in hand and types 
in 10785, which in a sequence-supplied ID stream is of course perfectly valid, 
and wrong.  Really hard to track down.
> -- 
> Angular momentum makes the world go 'round.


Re: Primary keys and composite unique keys(basic question)

2021-04-07 Thread Ron

On 4/7/21 11:35 AM, Rob Sargent wrote:



On Apr 7, 2021, at 10:17 AM, Ron  wrote:

 On 4/5/21 9:37 PM, Rob Sargent wrote:

It's a small thing, but UUIDs are absolutely not memorizable by
humans; they have zero semantic value.  Sequential numeric identifiers
are generally easier to transpose and the value gives some clues to
its age (of course, in security contexts this can be a downside).

I take the above as a definite plus.  Spent too much of my life 
correcting others’ use of “remembered” IDs that just happened to 
perfectly match the wrong thing.


People seem to have stopped appending check digits to identifiers about 
20 years ago, and I'm not sure why.


No, the problem is “start from one”. A user has item/id 10875 in hand 
and types in 10785, which in a sequence-supplied ID stream is of course 
perfectly valid, and wrong.  Really hard to track down.


That's my point.  Adding a check digit (turning 10875 into 108753) would 
have caught that, since the mistyped 107853 fails validation: the correct 
check digit for 10785 is 4, giving 107854.


--
Angular momentum makes the world go 'round.


Re: Primary keys and composite unique keys(basic question)

2021-04-07 Thread Rob Sargent

On 4/7/21 11:59 AM, Ron wrote:

On 4/7/21 11:35 AM, Rob Sargent wrote:



On Apr 7, 2021, at 10:17 AM, Ron  wrote:

 On 4/5/21 9:37 PM, Rob Sargent wrote:

It's a small thing, but UUIDs are absolutely not memorizable by
humans; they have zero semantic value.  Sequential numeric identifiers
are generally easier to transpose and the value gives some clues to
its age (of course, in security contexts this can be a downside).

I take the above as a definite plus.  Spent too much of my life 
correcting others’ use of “remembered” IDs that just happened to 
perfectly match the wrong thing.


People seem to have stopped appending check digits to identifiers 
about 20 years ago, and I'm not sure why.


No, the problem is “start from one”. A user has item/id 10875 in hand 
and types in 10785, which in a sequence-supplied ID stream is of course 
perfectly valid, and wrong.  Really hard to track down.


That's my point.  Adding a check digit (turning 10875 into 108753) 
would have caught that, since the mistyped 107853 fails validation: the 
correct check digit for 10785 is 4, giving 107854.
Well, you forget that 108753 is also a number in the series from 1 to 
maxint.   Maybe you're on to something, though: a check-digit-dispensing 
sequence!


Re: Primary keys and composite unique keys(basic question)

2021-04-07 Thread Ron

On 4/7/21 1:16 PM, Rob Sargent wrote:

On 4/7/21 11:59 AM, Ron wrote:

On 4/7/21 11:35 AM, Rob Sargent wrote:



On Apr 7, 2021, at 10:17 AM, Ron  wrote:

 On 4/5/21 9:37 PM, Rob Sargent wrote:

It's a small thing, but UUIDs are absolutely not memorizable by
humans; they have zero semantic value.  Sequential numeric identifiers
are generally easier to transpose and the value gives some clues to
its age (of course, in security contexts this can be a downside).

I take the above as a definite plus.  Spent too much of my life 
correcting others’ use of “remembered” IDs that just happened to 
perfectly match the wrong thing.


People seem to have stopped appending check digits to identifiers about 
20 years ago, and I'm not sure why.


No, the problem is “start from one”. A user has item/id 10875 in hand and 
types in 10785, which in a sequence-supplied ID stream is of course 
perfectly valid, and wrong.  Really hard to track down.


That's my point.  Adding a check digit (turning 10875 into 108753) would 
have caught that, since the mistyped 107853 fails validation: the correct 
check digit for 10785 is 4, giving 107854.
Well, you forget that 108753 is also a number in the series from 1 to 
maxint.   Maybe you're on to something, though: a check-digit-dispensing 
sequence!


Call a function which reads the next value from the sequence, appends the 
check digit and returns that number.  We were doing that 25 years ago.
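
A sketch of such a function, assuming a sequence named item_id_seq and the
Luhn scheme for the check digit (neither is specified above; Luhn does
reproduce the numbers used in this thread, 10875 -> 108753 and
10785 -> 107854):

```
CREATE SEQUENCE IF NOT EXISTS item_id_seq;

CREATE OR REPLACE FUNCTION next_checked_id() RETURNS bigint AS $$
DECLARE
    base   bigint := nextval('item_id_seq');
    digits text   := base::text;
    total  int    := 0;
    d      int;
BEGIN
    -- Luhn: working right to left, double every other digit,
    -- folding two-digit results back into a single digit
    FOR i IN 1 .. length(digits) LOOP
        d := substr(digits, length(digits) - i + 1, 1)::int;
        IF i % 2 = 1 THEN
            d := d * 2;
            IF d > 9 THEN d := d - 9; END IF;
        END IF;
        total := total + d;
    END LOOP;
    RETURN base * 10 + (10 - total % 10) % 10;  -- append the check digit
END;
$$ LANGUAGE plpgsql;
```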


--
Angular momentum makes the world go 'round.


could not receive data from client: Connection reset by peer

2021-04-07 Thread Jagmohan Kaintura
Hi Team,

We are receiving a lot of the error messages below in the PostgreSQL logs.
All of these connections are created and released by the applications
attached to the database.

Please help: how can these failures be investigated, and what steps should
be taken for further analysis?

2021-04-08 10:18:24 HKT [8252]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer


2021-04-08 10:18:24 HKT [8252]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  unexpected EOF on client connection with
an open transaction
2021-04-08 10:18:24 HKT [8252]: [5-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.550
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51690
2021-04-08 10:18:24 HKT [8233]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8235]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8236]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8236]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.824
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51673
2021-04-08 10:18:24 HKT [8233]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.857
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51668
2021-04-08 10:18:24 HKT [8235]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.835
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51672
2021-04-08 10:18:24 HKT [8237]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8237]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.817
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51674
2021-04-08 10:18:24 HKT [8234]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8234]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.845
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51670
2021-04-08 10:18:24 HKT [8239]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8239]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection: session time: 1:00:00.801
user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51676
2021-04-08 10:18:24 HKT [8232]: [3-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  could not receive data from client:
Connection reset by peer
2021-04-08 10:18:24 HKT [8232]: [4-1]
user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC
Driver,client=172.25.254.11LOG:  disconnection:

-- 
*Best Regards,*
Jagmohan


Re: could not receive data from client: Connection reset by peer

2021-04-07 Thread Tom Lane
Jagmohan Kaintura  writes:
> We are receiving alot of below error messages in the postgresql logs.

It looks very much like something in your network infrastructure is
dropping connections after 1 hour.  The consistent length-of-connection
log reports are hard to explain any other way.  Check firewalls etc.
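
If it is a firewall with an idle-connection timeout, one common workaround
(a sketch; the values are assumptions to tune against the actual timeout) is
to have the server probe idle connections with TCP keepalives:

```
-- in postgresql.conf, or via ALTER SYSTEM; values are in seconds
ALTER SYSTEM SET tcp_keepalives_idle = 300;     -- start probing after 5 min idle
ALTER SYSTEM SET tcp_keepalives_interval = 60;  -- re-probe every minute
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- declare dead after 5 failures
SELECT pg_reload_conf();
```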

regards, tom lane