Doubts about replication..

2018-04-19 Thread Edmundo Robles
I have several PostgreSQL versions (9.4.5, 9.4.4, 9.4.15 on three servers, and 9.5.3)
running on different Debian versions (7.6, 7.8, 7.11, 8.5, and 8.6).

I need to replicate the databases, and it is clear to me that I must first upgrade
them all to a single version.
My main question is: would you recommend upgrading to 9.6, or is it better to
upgrade to 10?

High availability is not actually the goal; I will use replication as a simple
backup.
For budget reasons I can have only one server, on which I will replicate the six
databases.

Would you recommend one Postgres instance serving all six databases, or, as I
suspect, should I run a separate Postgres instance on a different port for each
database?
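
For the second option, Debian's postgresql-common tools can run several clusters
side by side, each on its own port; a minimal sketch, with hypothetical cluster
names and ports:

  # one cluster per replicated database, each listening on its own port
  pg_createcluster 10 replica_db1 --port 5433
  pg_createcluster 10 replica_db2 --port 5434
  pg_ctlcluster 10 replica_db1 start
  pg_ctlcluster 10 replica_db2 start

Note that physical streaming replication works per cluster, not per database, so
replicating six separate masters onto one box implies six standby clusters, one
per master, each on its own port.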

Thanks in advance.
Regards!
--


Re: Doubts about replication..

2018-04-19 Thread Edmundo Robles
Yes, you are right, replication is not a backup ;). Actually, I back up the
database daily at 3:00 am, but if the database crashes the amount of data lost
can be large! That is why I want to replicate: to reduce the data loss. By the
way, a few days ago a coworker ran a DELETE with no WHERE clause.
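
Since plain replication would copy that DELETE straight to the replica, a delayed
standby is one way to keep some protection: the standby deliberately lags the
master, leaving a window in which replay can be paused before the mistake arrives.
A sketch of the standby's recovery.conf (PG 9.4-11; these settings move into
postgresql.conf in PG 12), with hypothetical connection details:

  standby_mode = 'on'
  primary_conninfo = 'host=master_host user=replicator'
  recovery_min_apply_delay = '4h'   # standby stays 4 hours behind the master

For the loss window between daily dumps, continuous WAL archiving plus
point-in-time recovery is the complementary approach.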

On Thu, Apr 19, 2018 at 1:33 PM, Andreas Kretschmer  wrote:

>
>
> On 19.04.2018 at 19:57, Edmundo Robles wrote:
>
>> I will use replication as simple backup.
>>
>
> Please keep in mind that replication is not a backup. All logical errors on
> the master (a DELETE FROM a table with a forgotten WHERE condition) will be
> replicated to the standby.
>
>
> Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>


--


upgrading from pg 9.3 to 10

2018-08-14 Thread Edmundo Robles
Is it safe to upgrade from pg 9.3 to pg 10 directly using pg_upgrade, or is it
better to upgrade step by step with pg_upgrade: 9.3 -> 9.4 -> 9.5 -> 9.6 -> 10?
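
For what it's worth, pg_upgrade supports jumping several major versions in one
step (for pg_upgrade 10 the old cluster only needs to be 8.4 or newer). A sketch
with Debian-style paths, which are assumptions to adjust to the actual layout,
running the --check dry run first:

  /usr/lib/postgresql/10/bin/pg_upgrade \
    --old-bindir /usr/lib/postgresql/9.3/bin \
    --new-bindir /usr/lib/postgresql/10/bin \
    --old-datadir /var/lib/postgresql/9.3/main \
    --new-datadir /var/lib/postgresql/10/main \
    --check

  # if the check passes, rerun without --check to do the real upgrade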

--


Re: COPY from a remote machine in Datastage

2018-10-05 Thread Edmundo Robles
If you have ssh access to the client, you can stream the file into a server-side
COPY:
ssh user@client_host "cat /path_to/large_file.csv" | psql -d database -c "COPY TABLE FROM STDIN WITH CSV HEADER"
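
If psql can instead run on the machine that holds the file, psql's client-side
\copy meta-command reads the file locally and streams it to the server, with no
ssh hop; a sketch with the same placeholder names (db_host is an assumption):

  psql -h db_host -d database -c "\copy TABLE FROM '/path_to/large_file.csv' WITH CSV HEADER"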


On Fri, Oct 5, 2018 at 9:06 AM Ravi Krishna  wrote:

>
> We are doing a POC of using Datastage with PG using ODBC.
>
> Problem to solve:  How to load a large CSV file using COPY command.  The
> file is on the client machine.
>
> The typical SQL syntax for a copy coming from a remote machine is: COPY TABLE
> FROM STDIN WITH CSV HEADER
>
> The question is how to make the contents of the file available as STDIN from
> SQL. It is easy in a shell.
>


--


About compress in pg_dump

2020-07-17 Thread Edmundo Robles
To back up a database I do:
 nice -n +19 pg_dump -Fc database | nice -n +19 gzip --rsyncable -nc > database.dump

Since the -Fc option compresses by default, I don't need to gzip the backup,
but I do need to pass gzip's --rsyncable and -n options.

How can I pass gzip options to pg_dump's built-in compression?

If that is not possible, I will use:
nice -n +19 pg_dump -Fc -Z 0 database | nice -n +19 gzip --rsyncable -nc > database.dump
but I would rather not do that. :)
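
As a side note, a dump wrapped this way is still a normal custom-format archive
once decompressed, so a restore is just the pipe in the other direction; a
sketch, assuming the file name above:

  gunzip -c database.dump | pg_restore -d database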

Thanks for your help...

--


I have a suspicious query

2025-07-11 Thread Edmundo Robles
Hi

I am running PostgreSQL 13.16 (Debian 13.16-0+deb11u1).
While monitoring active queries, I came across the following:

`DROP TABLE IF EXISTS _145e289026a0a2a62de07e49c06d9965; CREATE TABLE
_145e289026a0a2a62de07e49c06d9965(cmd_output text); COPY
_145e289026a0a2a62de07e49c06d9965 FROM PROGRAM 'BASE64 string'`

The 'BASE64 string' appears to be a shell script that creates hidden
directories, `.xdiag` and `.xperf`, in `/tmp`.
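
For context on how such a query could run at all: COPY ... FROM PROGRAM executes
the command on the database server and requires superuser rights or, since PG 11,
membership in the pg_execute_server_program role, so listing which roles qualify
is a reasonable first step; a sketch:

  -- roles that could have issued COPY ... FROM PROGRAM
  SELECT rolname FROM pg_roles WHERE rolsuper;

  SELECT r.rolname
  FROM pg_auth_members m
  JOIN pg_roles r ON r.oid = m.member
  JOIN pg_roles g ON g.oid = m.roleid
  WHERE g.rolname = 'pg_execute_server_program';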

Could you please help me locate and clean these? I apologize if this is not
the appropriate contact for this issue.

Thanks,
Edmundo

--