RE: Back Slash \ issue

2019-05-03 Thread Guntry Vinod
Hi Team,

We are using the below command

COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';  

Regards,
Vinod

-Original Message-
From: Adrian Klaver  
Sent: Thursday, May 2, 2019 8:58 PM
To: Guntry Vinod ; ravikris...@mail.com
Cc: pgsql-gene...@postgresql.org; Venkatamurali Krishna Gottuparthi 
; Biswa Ranjan Dash 
Subject: Re: Back Slash \ issue

On 5/2/19 8:19 AM, Guntry Vinod wrote:
> Hi Adrian\Kiran,
> 
> Below is the issue.
> 
> We are migrating from DB2 to Postgres.
> 
> The DB2 dump contains backslash \ characters within the data, but Postgres 
> is not preserving them.
> 
> For example, if a name contains Vinod\G, after inserting the dump the value 
> in Postgres is VinodG, whereas I need it as Vinod\G.
> 
> Since the dump is huge, I can't replace \ within the data to escape the 
> backslash.
> 
> We are using the import utility provided by Postgres.

Are you talking about COPY or some other utility?

> 
> Can you please give me a solution in this regard?

In order for a solution to be found we will need more explicit information on 
what you are doing.

> 
> @kiran-let me also try solution you provided in the below mail.
> 
> Regards,
> Vinod
> ___
> From: Adrian Klaver [adrian.kla...@aklaver.com]
> Sent: 02 May 2019 20:04
> To: ravikris...@mail.com; Guntry Vinod
> Cc: pgsql-gene...@postgresql.org; Venkatamurali Krishna Gottuparthi; 
> Biswa Ranjan Dash
> Subject: Re: Back Slash \ issue
> 
> On 5/2/19 7:30 AM, ravikris...@mail.com wrote:
>>   > We have a dump from DB2 and are trying to upload it to Postgres.
>> DB2's export command has an option to export as CSV, which quotes the 
>> data so that any embedded newlines or special characters inside the data 
>> are handled correctly. Does the CSV format have quotes enclosing it?
> 
> Please show the command you use to import into Postgres.
> Also the error message you get.
> 
> 
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: Back Slash \ issue

2019-05-03 Thread Andrew Gierth
> "Guntry" == Guntry Vinod  writes:

 Guntry> Hi Team,
 Guntry> We are using the below command

 Guntry> COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';  

COPY in postgresql expects one of two data formats; since you did not
specify CSV, in this case it's expecting the default postgresql format
which requires the use of \ for escapes like \r, \n, \t, \\, \012 and
for the default NULL specification of \N.

If you use this format it is therefore your responsibility to ensure
that any literal \ characters in the data are escaped as \\, and that
any literal appearance of the delimiter character or a newline is also
escaped.

See https://www.postgresql.org/docs/current/sql-copy.html under "Text
format".

-- 
Andrew (irc:RhodiumToad)




RE: Back Slash \ issue

2019-05-03 Thread Guntry Vinod
Hi Andrew,

So you mean to say we need to replace \ with \\ in the data? The data we 
receive is a huge chunk (we cannot open it even in Notepad++).

If we can pass a CSV instead of .txt, or any other format, do we have a 
solution? If yes, can you please give me an example?

Many Thanks,
Vinod










Re: Back Slash \ issue

2019-05-03 Thread Matthias Apitz
On Friday, May 03, 2019 at 08:45:02AM +, Guntry Vinod wrote:

> Hi Andrew,
> 
> So you mean to say we need to replace \\ in data. If so the data what we 
> receive is huge chunk(we cannot open in notepad++ also) .
> 
> ...

Hi Guntry,

What about piping the data on Linux or any other UNIX (or even Cygwin
on Windows) through a sed command to do the necessary changes, like

echo 'bla\foo' | sed 's/\\/\\\\/g'
bla\\foo

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
70 years of NATO - 70 years of wars (Jugoslavia, Afghanistan, Syria, ...) and
70 years of war preparation against Russia.  -- PEACE instead of NATO !




RE: Back Slash \ issue

2019-05-03 Thread Guntry Vinod
Postgres is running on the Windows platform.







RE: Back Slash \ issue

2019-05-03 Thread Biswa Ranjan Dash
Team,

We have also tried importing the data by converting it to a CSV file, using

\copy TABLE_NAME FROM 'G:\DB_Backup\FILE.csv' (format csv, null '\N'); 

Regards,
Biswa







Re: Back Slash \ issue

2019-05-03 Thread Matthias Apitz
On Friday, May 03, 2019 at 09:04:34AM +, Guntry Vinod wrote:

> Postgres is running on the Windows platform.

Maybe you haven't read completely through the post you are top-posting
on. It was clear to me (from the file name syntax used) that you are on
Windows; that's why I said:

> What about piping the data on Linux or any other UNIX (or even Cygwin on 
> Windows) through a sed command to do the necessary changes, like ...

i.e. you could use Cygwin on Windows for this, or transfer the data to a
Linux system for processing, and back for loading.

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
70 years of NATO - 70 years of wars (Jugoslavia, Afghanistan, Syria, ...) and
70 years of war preparation against Russia.  -- PEACE instead of NATO !




Re: Migrating an application with Oracle temporary tables

2019-05-03 Thread Pavel Stehule
On Fri, May 3, 2019 at 8:19, Laurenz Albe  wrote:

> On Thu, 2019-05-02 at 16:55 +, Mark Zellers wrote:
> > I thought I needed the prototype table to be able to define functions
> and procedures that refer to the temporary table but do not create it.
> >
> > Perhaps my assumption that I need the table to exist (whether as a
> temporary table or as a permanent table) in
> > order to define the function/procedure is incorrect.  I'll take a look
> at that.
>
> You don't need the table to exist at function definition time.
> The following works just fine, even if the table does not exist:
>
> CREATE FUNCTION f() RETURNS void LANGUAGE plpgsql AS $$BEGIN PERFORM *
> FROM notexists; END;$$;
>
> This is because functions are not parsed when they are defined.
>

That is not fully correct - functions containing queries are parsed and a
syntax check is done. But the semantic check is deferred to run time.
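A minimal sketch of the difference (the function names are hypothetical,
and the error messages are approximate):

-- fails already at definition time: the query is syntactically broken
CREATE FUNCTION bad() RETURNS void LANGUAGE plpgsql AS
$$BEGIN PERFORM * FROM; END;$$;   -- ERROR: syntax error at or near ";"

-- only fails when executed: the query is well-formed, the table is missing
CREATE FUNCTION f() RETURNS void LANGUAGE plpgsql AS
$$BEGIN PERFORM * FROM notexists; END;$$;   -- succeeds
SELECT f();   -- ERROR: relation "notexists" does not exist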

Regards

Pavel


> > I did find a scenario where this approach does run into trouble.  That
> is, if the function/procedure is executed
> > against the permanent table and then you go to run it against a
> temporary table.  In that case, I do get the
> > wrong answer, and I haven't yet figured out how to reset that without
> dropping the procedure and re-defining it.
> > For my purposes, that is "good enough" -- I can promise not to run such
> procedures against the temporary table.
>
> Yes, that would cause a problem.
>
> The SQL statement "DISCARD PLANS" should fix the problem.
>
> Yours,
> Laurenz Albe
>
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
>
>


Update row attribute that is part of row level security policy using_expression

2019-05-03 Thread Saupe Stefan
I'd like to use RLS to 'hide' or 'deactivate' data at some point, so that some 
rows are no longer visible to the application user.
Let's say user a owns the data and can see all his data.
The application user 'b' can only select, update, delete... 'active' data, but 
is also able to 'deactivate' currently 'active' rows.

Below is how I tried to accomplish this.

But I'm not able to 'deactivate' rows in the table as application user b.
How can I accomplish this? If possible without having to change the application 
SQL that runs against the table(s)?


create user a with password 'a';
create user b with password 'b';

\c postgres a;
create table t1(id int,active boolean);
insert into t1 values(1,true);
insert into t1 values(2,false);
create policy mypolicy on t1 for all to b using (active);
alter table t1 enable row level security;
grant all on t1 to b;
select * from t1;
 id | active
----+--------
  1 | t
  2 | f
(2 rows)
--> OK
--Now connect as the application user b
\c postgres b;
select * from t1;
 id | active
----+--------
  1 | t
(1 row)
--> OK

--now I want to 'deactivate' the active row
update t1 set active=false where id=1;
ERROR:  new row violates row-level security policy for table "t1"
--> I want to be able to do this.
My question is:
How can user b read just ‘active’ data AND be able to ‘deactivate’ some active 
rows?

According to the docs 
(https://www.postgresql.org/docs/current/sql-createpolicy.html) the reason why 
the update fails is that the policy USING expression is applied to both 
existing and new rows on UPDATE if read access is required to the existing or 
new row.


Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Matthias Apitz


Hello,

We're investigating the migration of our LMS (Library Management System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The databases used in the field have
around 400 columns, some of them also containing BLOB (bytea) data.
The DB sizes vary up to 20 GByte. The interfaces are written in every kind of
language one could imagine :-) ESQL/C, JDBC, Perl DBD, ...

Re/ the migration of the data itself, are there any case studies
we should keep in mind? We plan to export the tables with our own
tool, which produces CSV with delimiter '|' (escaping it as \| if the char |
occurs in char columns too) and with a hex representation of the BLOB data.
This seems to fit nicely with PostgreSQL's COPY command.
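A minimal sketch of loading such an export (table, file name, and data line
are hypothetical). Note that \|-style escaping matches COPY's text format
rather than CSV, so the load could look like:

COPY books (id, title, cover) FROM '/tmp/books.dat' WITH (DELIMITER '|');

-- a matching line in books.dat; a bytea value in hex form needs its
-- leading backslash doubled in text format:
-- 1|Moby \| Dick|\\x89504e47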

Any known pitfalls?

Btw: We're investigating MySQL too, but it seems to be off the list
now for not having an ESQL/C interface.

Regards

matthias 
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Ron

On 5/3/19 6:56 AM, Matthias Apitz wrote:



Do you have many stored procedures, functions, etc?

--
Angular momentum makes the world go 'round.




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> Re/ the migration of the data itself, are there any use case studies
> which could we keep in mind?

https://wiki.postgresql.org/images/e/e7/Pgconfeu_2013_-_Jens_Wilke_-_Sybase_to_PostgreSQL.pdf

Regards,
Christoph

-- 
Spare Space




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Matthias Apitz
On Friday, May 03, 2019 at 07:38:23AM -0500, Ron wrote:

> Do you have many stored procedures, functions, etc?

We have triggers in Sybase on some tables calculating the next value for
an integer "serial" based on helper tables, because Sybase does not know
(or did not know in 11.9) about serials. But these will be replaced by native
serials on PG.

Sybase also has a so-called SYB_IDENTITY_COLUMN in each table with a
unique number for each row (it may have gaps), and as Sybase does not know
SCROLLED CURSORs, we simulated these in our DB layer by reading in all
SYB_IDENTITY_COLUMN numbers of a hit list after a SELECT; we can then read
backwards in this in-memory list, presenting the requested row with a new
SELECT based on its SYB_IDENTITY_COLUMN number.  This is somewhat
clumsy, but certain features in upper layers want to read backwards (and
we came from INFORMIX-SE, later INFORMIX-ONL, some 30 years ago).

I was deeply impressed by the COPY command, loading ~35000 rows in the
time one needs to close and open the eyes. Likewise, a SELECT on a table with
~35000 rows and no INDEX returns in very short time. How does PG do this?

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
70 years of NATO - 70 years of wars (Jugoslavia, Afghanistan, Syria, ...) and
70 years of war preparation against Russia.  -- PEACE instead of NATO !




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Adrian Klaver

On 5/3/19 6:09 AM, Matthias Apitz wrote:


I was deeply impressed by the COPY command, loading ~35000 rows in the
time one needs to close and open the eyes. Likewise, a SELECT on a table with
~35000 rows and no INDEX returns in very short time. How does PG do this?


In the COPY case the entire thing is done in a single transaction. The 
downside to this is that a single error in the data will roll back 
everything.


As to SELECT, you are seeing the query planner at work. See the sections
below for more info:


14.1
https://www.postgresql.org/docs/11/using-explain.html

14.2
https://www.postgresql.org/docs/11/planner-stats.html

Indexes are still important.



matthias




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Adrian Klaver

On 5/3/19 4:56 AM, Matthias Apitz wrote:




You might want to also take a look at:

A PostgreSQL foreign data wrapper to connect to TDS databases (Sybase 
and Microsoft SQL Server)

https://github.com/tds-fdw/tds_fdw
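A minimal sketch of what that looks like (server name, credentials, and
table are hypothetical; option names as per the tds_fdw README):

CREATE EXTENSION tds_fdw;

CREATE SERVER sybase_srv FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'sybase.example.com', port '5000', database 'lms');

CREATE USER MAPPING FOR CURRENT_USER SERVER sybase_srv
    OPTIONS (username 'sa', password 'secret');

CREATE FOREIGN TABLE sybase_books (id int, title text)
    SERVER sybase_srv OPTIONS (table_name 'dbo.books');

-- then e.g.:  INSERT INTO books SELECT * FROM sybase_books;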








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Upgrading locale issues

2019-05-03 Thread Daniel Verite
rihad wrote:

> Thanks, I'm a bit confused here. AFAIK indexes are used for at least two 
> things: for speed and for skipping the ORDER BY step (since btree 
> indexes are already sorted). Will such an "upgrade-immune" C.UTF-8 index 
> still work correctly for table lookups?

If the lookup is based on an equality test or a pattern match with LIKE
or a regexp, it makes no difference.  But the locale makes a
difference with inequality tests, such as < > or BETWEEN.
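A minimal sketch of that difference (assuming the en_US.UTF-8 collation
is installed on the system):

=# select 'B' < 'a' collate "C";            -- true: plain byte-order comparison
=# select 'B' < 'a' collate "en_US.UTF-8";  -- false: linguistic comparison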

Around version 9.1 and in the pre-ICU days, Robert Haas wrote
this post that explained it pretty well, I think:
http://rhaas.blogspot.com/2012/03/perils-of-collation-aware-comparisons.html

Quote:

  If you happen to need the particular sorting behavior that
  collation-aware sorting and comparisons provide, then you may find
  this price worth paying, but I suspect there are a lot of people out
  there who are paying it more or less accidentally and don't really
  care very much about the underlying sorting behavior.  If, for
  example, all of your queries are based on equality, and you don't
  use greater-than or less-than tests, then it doesn't matter what
  collation is in use.  You might as well use "C" instead of whatever
  your local default may be, because it's faster.

For non-English text, I would recommend C.UTF-8 over "C" because of
its better handling of Unicode characters. For instance:

=# select upper('été' collate "C"), upper('été' collate "C.UTF-8");
 upper | upper
-------+-------
 éTé   | ÉTÉ

The "price" of linguistic comparisons that Robert mentioned was about
performance, but the troubles we have with the lack of long-term
immutability of these locales are worth being added to that.

> And can the existing en_US.UTF-8 fields' definition be altered in
> place, without a dump+restore?

Changing the COLLATE clause of a column with ALTER TABLE does
implicitly rebuild an index on this column if there is one.
A dump+restore is not needed, nor is an explicit REINDEX.
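For example, a minimal sketch (table and column names are hypothetical):

ALTER TABLE customers
    ALTER COLUMN name TYPE text COLLATE "C.UTF-8";
-- an index on "name", if any, is rebuilt automatically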

The dump+restore is needed in another scenario, where you would
decide to change the LC_COLLATE and LC_CTYPE of the database,
instead of doing it only for some columns.
This scenario makes perfect sense if the locale of the database
has been set implicitly and it uses linguistic sorts without
really needing them ("accidentally" as said in the post).


> en_US.UTF-8 is the default encoding+locale+collation, it 
> isn't set explicitly for any of our string columns. I assume there's 
> some "catch-all" ordering taking place even for the C locale, so there 
> won't be any bizarre things like b coming before a, or generally for any 
> language, the second letter of its alphabet coming before the first?

'b' < 'a' is never true for any locale AFAIK, it is 'B' < 'a' that is
true for some locales such as C or C.UTF-8.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: Update row attribute that is part of row level security policy using_expression

2019-05-03 Thread Adrian Klaver

On 5/3/19 4:47 AM, Saupe Stefan wrote:


The primary issue here is you are using a security policy to try to 
enforce something that is not security related, the visibility of data. 
If a user was locked out of data for security reasons, but had the 
ability to unlock that data on their own it would not be much of a 
security policy. I see two choices:


1) Don't use RLS for this. Just allow the user to toggle active as 
needed. Not sure where the user is viewing the data, but active/inactive 
could be part of the code that allows data through.


2) If you want to use RLS then create a SECURITY DEFINER function that 
runs as the user that does have non-RLS restricted access to the table. 
Have user b use that to change the active status.
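A minimal sketch of option 2 (the function name is hypothetical; create it
as user a, the table owner, who is not subject to the policy):

CREATE FUNCTION deactivate(p_id int) RETURNS void
LANGUAGE sql SECURITY DEFINER AS
$$ UPDATE t1 SET active = false WHERE id = p_id; $$;

REVOKE ALL ON FUNCTION deactivate(int) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION deactivate(int) TO b;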









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Back Slash \ issue

2019-05-03 Thread Adrian Klaver

On 5/2/19 10:48 PM, Guntry Vinod wrote:

Please do not top post. Inline/bottom posting is the preferred style on 
this list.

Hi Team,

We are using the below command

COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';


The above shows what you are doing on the input into Postgres.
We still do not know how you are dumping the data from DB2.

In what format are you dumping the DB2 data and with what specifications 
e.g. quoting?




Regards,
Vinod



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Starting Postgres when there is no disk space

2019-05-03 Thread Jeff Janes
On Wed, May 1, 2019 at 10:25 PM Igal Sapir  wrote:

>
> I have a scheduled process that runs daily to delete old data and do full
> vacuum.  Not sure why this happened (again).
>

If you are doing a regularly scheduled "vacuum full", you are almost
certainly doing something wrong.  Are these "vacuum full" completing, or
are they failing (probably due to transient out of space errors)?

An ordinary non-full vacuum will make the space available for internal
reuse. It will not return the space to filesystem (usually), so won't get
you out of the problem.  But it should prevent you from getting into the
problem in the first place.  If it is failing to reuse the space
adequately, you should figure out why, rather than just blindly jumping to
regularly scheduled "vacuum full".  For example, what is it that is
bloating, the tables themselves, their indexes, or their TOAST tables?  Or
is there any bloat in the first place? Are you sure your deletions are
equal to your insertions, over the long term average?  If you are doing
"vacuum full" and you are certain it is completing successfully, but it
doesn't free up much space, then that is strong evidence that you don't
actually have bloat, you just have more live data than you think you do.
(It could also mean you have done something silly with your "fillfactor"
settings.)
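To locate any bloat, a minimal sketch of a query against the statistics
views (the LIMIT is arbitrary):

SELECT relname, n_live_tup, n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;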

If you don't want the space to be reused, to keep a high correlation
between insert time and physical order of the rows for example, then you
should look into partitioning, as you have already noted.

Now that you have the system up again and some space freed up, I'd create a
"ballast" file with a few gig of random (to avoid filesystem-level
compression, should you have such a filesystem) data on the same device
that holds your main data, that can be deleted in an emergency to give you
enough free space to at least start the system.  Of course, monitoring is
also nice, but the ballast file is more robust and there is no reason you
can't have both.

Cheers,

Jeff


Re: Back Slash \ issue

2019-05-03 Thread Bruce Momjian
On Fri, May  3, 2019 at 06:55:55AM -0700, Adrian Klaver wrote:
> On 5/2/19 10:48 PM, Guntry Vinod wrote:
> 
> Please do not top post. Inline/bottom posting is the preferred style on this
> list.
> > Hi Team,
> > 
> > We are using the below command
> > 
> > COPY <> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';
> 
> The above shows what you are doing on the input into Postgres.
> We still do not know how you are dumping the data from DB2.
> 
> In what format are you dumping the DB2 data and with what specifications
> e.g. quoting?

One thing the original poster might be missing is that the COPY DELIMITER
is used between fields, while backslash is used as an escape before a
single character.  While it might be tempting to try to redefine the
escape character with the COPY ESCAPE keyword, that keyword only works
in CSV mode.

The Postgres COPY format is very reliable and able to dump/reload _any_
data sequence.  Many commercial data dump implementations are simpler
but are not as reliable.

The bottom line is that you are going to need to double the backslashes
unless you move to CSV mode.
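If the DB2 export is (or can be) quoted CSV, a minimal sketch of a CSV-mode
load (the table name is hypothetical; file name as in the earlier posts):

COPY customers FROM 'C:\Data_Dump\ABC.txt'
    WITH (FORMAT csv, DELIMITER '|', QUOTE '"');

-- in CSV mode a backslash is an ordinary character, so Vinod\G loads as-is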

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Back Slash \ issue

2019-05-03 Thread Bruce Momjian
On Fri, May  3, 2019 at 10:04:44AM -0400, Bruce Momjian wrote:

For example, if you are using | as a delimiter, how do you represent a
literal | in the data?  You have to use an escape character before it,
and that is what backslash does, and if you have a backslash in your
data, you have to use a backslash before it too.  CSV has a similar
problem with double-quotes in double-quoted strings, and this is handled
by default by using two double-quotes.
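A minimal sketch of both conventions (the temp table is hypothetical):

CREATE TEMP TABLE demo (v text);

COPY demo FROM STDIN WITH (DELIMITER '|');      -- text format
a pipe \| and a backslash \\
\.

COPY demo FROM STDIN WITH (FORMAT csv);         -- CSV format
"He said ""hi"""
\.

SELECT v FROM demo;
-- a pipe | and a backslash \
-- He said "hi"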

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Back Slash \ issue

2019-05-03 Thread Ravi Krishna
> 
> In what format are you dumping the DB2 data and with what specifications e.g. 
> quoting?
> 

DB2's export command quotes the data with "". So while loading, shouldn't that 
take care of the delimiter-in-the-data issue?





Re: Upgrading locale issues

2019-05-03 Thread rihad

On 05/03/2019 05:35 PM, Daniel Verite wrote:



Thanks a lot for sharing your insights!





Re: Back Slash \ issue

2019-05-03 Thread Adrian Klaver

On 5/3/19 7:35 AM, Ravi Krishna wrote:


In what format are you dumping the DB2 data and with what specifications e.g. 
quoting?



DB2's export command quotes the data with "". So while loading, shouldn't that 
take care of the delimiter-in-the-data issue?



In the original post the only info was:

"We have dump from DB2 and trying to upload it Postgre."

That is a little vague, I would prefer more concrete information before 
proposing a solution.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Back Slash \ issue

2019-05-03 Thread Michael Nolan
On Fri, May 3, 2019 at 9:35 AM Ravi Krishna  wrote:

> >
> > In what format are you dumping the DB2 data and with what specifications
> e.g. quoting?
> >
>
> DB2's export command quotes the data with "". So while loading, shouldn't
> that take care of delimiter-in-the-data issue ?
>

I don't think we've seen enough representative data to know exactly what
the backslash is doing.  It doesn't appear to be an escape; based on the
sole example I've seen, it appears to be a data separator between first name
and last name.

It seems increasingly likely to me that you might not be in a position
where the COPY command in PostgreSQL can handle loading the database, at
least not without some help.  You might have to write a program to clean up
the data and format it for PostgreSQL.

I've spent a lot of time over the years migrating data from one platform to
another, you have to know EXACTLY what data you currently have and what
format you need it turned into before you can figure out how to do the
transformation.
--
Mike Nolan


Re: Back Slash \ issue

2019-05-03 Thread Ravi Krishna

Fully agreed.  I have informed Guntry via email that he has to provide more 
information before we can help further.



Re: Upgrading locale issues

2019-05-03 Thread rihad

On 05/03/2019 05:35 PM, Daniel Verite wrote:

For non-English text, I would recommend C.UTF-8 over "C" because of


BTW, there's no C.UTF-8 inside pg_collation, and running select 
pg_import_system_collations('pg_catalog') doesn't bring it in, at least 
not on FreeBSD 11.2.





RE: Back Slash \ issue

2019-05-03 Thread Guntry Vinod
Hi Team,

Here we go. I will describe the problem in more detail.

Step 1: We get a dump from DB2; this dump is a flat file, which can be .csv or .txt
Step 2: There is a table in Postgres where we are supposed to upload the dump
Step 3: We are using the COPY command to upload the dump to the table: (COPY 
<> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';  )
Step 4: In the above step we are using a delimiter because the data in the flat 
file we received is separated by (:)

Problem Statement: We are able to upload the data from the flat file which we 
got from DB2, but some of the data contains "\". For example, if the 
CustomerName is Vinod\G in the flat file, we expect the same data in the 
Postgres table for CustomerName as Vinod\G, but we see VinodG (the backslash 
is missing).

Possible Solution: We can replace "\" with "\\", but if the file is too large 
we cannot open it (we can replace if the file is medium or small).

Expectation: We need a command or utility which can upload the data as it is 
(for example, if Vinod\G then we should see Vinod\G in Postgres, not VinodG).

Hope I am detailed this time :-)

Regards,
Vinod


 










Re: Upgrading locale issues

2019-05-03 Thread Daniel Verite
rihad wrote:

> On 05/03/2019 05:35 PM, Daniel Verite wrote:
> > For non-English text, I would recommend C.UTF-8 over "C" because of
> 
> BTW, there's no C.UTF-8 inside pg_collation, and running select 
> pg_import_system_collations('pg_catalog') doesn't bring it in, at least 
> not on FreeBSD 11.2.

Yes, aside from "C", locales are quite system-dependent, unfortunately.

It looks like FreeBSD 13 does provide C.UTF-8:

https://unix.stackexchange.com/questions/485073/how-to-backport-freebsd-13-current-c-utf-8-locale-to-11-2-release/485077#485077


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: Back Slash \ issue

2019-05-03 Thread Ravi Krishna
> 
> Hope I am detailed this time :-)
> 

Unfortunately still not enough.  Can you post a sample of the data here? And 
tell us what command you used in DB2; please post the SQL used in DB2 to dump 
the data.






Re: Starting Postgres when there is no disk space

2019-05-03 Thread Igal Sapir
Jeff,

On Fri, May 3, 2019 at 6:56 AM Jeff Janes  wrote:


Thank you for the tips.  I stand corrected.  These are regular VACUUM calls
after the deletion, not VACUUM FULL.  It's a daily process that deletes
records from N days ago, and then performs VACUUM, so yes, all of the
inserted records should be deleted after N days.

The bloat is in a TOAST table.  The primary table has a JSONB column which
can get quite large.  The fillfactor setting was not modified from its
default value (does the primary table's fillfactor affect the TOAST table?
Either way, they are both at the default in this case).

Ballast file is a great idea.  I was just thinking about that a couple of
days ago, but instead of one file I think that I will have a bunch of them
at 1GB each.  That will give me more flexibility in clearing space as
needed and keeping more "safety buffers" for when I make space.

Thanks for your help,

Igal


Re: Back Slash \ issue

2019-05-03 Thread Adrian Klaver

On 5/3/19 9:05 AM, Guntry Vinod wrote:

Hi Team,

Here we go. I will describe the problem in more detail.

Step 1: We get a dump from DB2; this dump is a flat file, which can be .csv or .txt


The above is what we need information on:

1) Is it output as CSV or text?

2) What are the parameters used to output the data in either case? In 
other words, is string quoting used, the field delimiter, etc.










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Back Slash \ issue

2019-05-03 Thread Adrian Klaver

On 5/3/19 9:05 AM, Guntry Vinod wrote:

Hi Team,

Here we go. I will describe the problem in more detail.

Step 1: We get a dump from DB2; this dump is a flat file, which can be .csv or .txt
Step 2: There is a table in Postgres where we are supposed to upload the dump
Step 3: We are using the COPY command to upload the dump to the table: (COPY 
<> from 'C:\Data_Dump\ABC.txt' DELIMITER '|';  )


This should have been in my previous post: the answer to whether Step 1 is 
text or CSV is important, as it determines the way you use the COPY 
command above. As has been pointed out upstream, using COPY assuming 
text (as you are doing above) on a CSV file will create issues.



  





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Back Slash \ issue

2019-05-03 Thread Michael Nolan
I'm still not clear what the backslash is for. Is it ONLY to separate first
and last name?  Can you change it to some other character?

Others have suggested you're in a Windows environment, which might limit
your options.   How big is the file? Is it possible to copy it to another
server to manipulate it?
--
Mike Nolan


Re: Back Slash \ issue

2019-05-03 Thread Igor Korot
Hi,

On Fri, May 3, 2019 at 11:20 AM Michael Nolan  wrote:
>
>
>
> I'm still not clear what the backslash is for. Is it ONLY to separate first 
> and last name?  Can you change it to some other character?
>
> Others have suggested you're in a Windows environment, which might limit your 
> options.   How big is the file? Is it possible to copy it to another server 
> to manipulate it?

Why not use something like Perl to process the data and then feed the
processed file to PostgreSQL?

Thank you.

> --
> Mike Nolan




Pgadmin III

2019-05-03 Thread Julie Nishimura
Hello, I am trying to connect to PostgreSQL 9.6.2 using pgAdmin III, and I am 
getting this error:
An error has occurred:
Column not found in pgSet: rolcatupdate

Do you know which version of pgAdmin I should use to avoid this? I am on 
Windows 7. Thanks


RE: Back Slash \ issue

2019-05-03 Thread Guntry Vinod
Looping in Nikhil and Shiva, who are from the Mainframe/DB2 side.

Nikhil/Shiva, I am trying to explain the problem to the team, but there are a 
few questions which need your intervention.

Regards,
Vinod







Re: Back Slash \ issue

2019-05-03 Thread Rob Sargent



On 5/3/19 10:05 AM, Guntry Vinod wrote:



So everything works except the backslashes disappear, correct? As many 
have said, all you need to do is double the backslashes, and apparently 
you have done that on small files.  How have you done that? Perhaps with an 
editor?  For the large files you must use a tool (perl, awk, even a 
trivial C program) to double the backslashes.







CREATE EXTENSION to load the language into the database

2019-05-03 Thread Daulat Ram
Hello team,

We are getting the below issue while creating a function in Postgres 11.2


nagios=# create or replace function diskf (filesystem text, warn int, err int) 
returns text as $BODY$
nagios$# use warnings;
nagios$# use strict;
nagios$# my $fs = $_[0];
nagios$# my $w = $_[1];
nagios$# my $e = $_[2];
nagios$# my $r = "WARNING";
nagios$# my $output = `df -kP $fs`;
nagios$# $output =~ /.*\s+(\d+)%.*/;
nagios$# $output = $1;
nagios$# if ($output > $w)
nagios$# { $r = "ERROR" if $output > $e;}
nagios$# else { $r = "OK";}
nagios$# return  "$r $output";
nagios$# $BODY$ language plperlu;

ERROR:  language "plperlu" does not exist
HINT:  Use CREATE EXTENSION to load the language into the database.


nagios=# SELECT * FROM pg_language;
 lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
----------+----------+---------+--------------+---------------+-----------+--------------+--------
 internal |       10 | f       | f            |             0 |         0 |         2246 |
 c        |       10 | f       | f            |             0 |         0 |         2247 |
 sql      |       10 | f       | t            |             0 |         0 |         2248 |
 plpgsql  |       10 | t       | t            |         13075 |     13076 |        13077 |
(4 rows)

To solve this issue I tried creating the extension/language, but I get the below errors:

nagios=# CREATE EXTENSION plperl;
ERROR:  could not open extension control file 
"/usr/local/share/postgresql/extension/plperl.control": No such file or 
directory
nagios=# CREATE LANGUAGE plperlu;
ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": Error 
loading shared library libperl.so.5.20: No such file or directory (needed by 
/usr/local/lib/postgresql/plperl.so)
nagios=# CREATE LANGUAGE plperlu;
ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": Error 
loading shared library libperl.so.5.20: No such file or directory (needed by 
/usr/local/lib/postgresql/plperl.so)
nagios=#


Regards,
Dault


Re: CREATE EXTENSION to load the language into the database

2019-05-03 Thread Adrian Klaver

On 5/3/19 8:56 AM, Daulat Ram wrote:

Hello team,

We are getting below issue while creating a function in Potsgres 11.2

nagios=# create or replace function diskf (filesystem text, warn int, 
err int) returns text as $BODY$


nagios$# use warnings;

nagios$# use strict;

nagios$# my $fs = $_[0];

nagios$# my $w = $_[1];

nagios$# my $e = $_[2];

nagios$# my $r = "WARNING";

nagios$# my $output = `df -kP $fs`;

nagios$# $output =~ /.*\s+(\d+)%.*/;

nagios$# $output = $1;

nagios$# if ($output > $w)

nagios$# { $r = "ERROR" if $output > $e;}

nagios$# else { $r = "OK";}

nagios$# return  "$r $output";

nagios$# $BODY$ language plperlu;

ERROR:  language "plperlu" does not exist

HINT:  Use CREATE EXTENSION to load the language into the database.

nagios=# SELECT * FROM pg_language;

 lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
----------+----------+---------+--------------+---------------+-----------+--------------+--------
 internal |       10 | f       | f            |             0 |         0 |         2246 |
 c        |       10 | f       | f            |             0 |         0 |         2247 |
 sql      |       10 | f       | t            |             0 |         0 |         2248 |
 plpgsql  |       10 | t       | t            |         13075 |     13076 |        13077 |
(4 rows)

While trying to solve this issue, I get the errors below when creating the 
extensions.


nagios=# CREATE EXTENSION plperl;

ERROR:  could not open extension control file 
"/usr/local/share/postgresql/extension/plperl.control": No such file or 
directory


nagios=# CREATE LANGUAGE plperlu;

ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
Error loading shared library libperl.so.5.20: No such file or directory 
(needed by /usr/local/lib/postgresql/plperl.so)


nagios=# CREATE LANGUAGE plperlu;

ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
Error loading shared library libperl.so.5.20: No such file or directory 
(needed by /usr/local/lib/postgresql/plperl.so)


The plperl(u) extension has not been added to the Postgres installation. 
You need to do that. To help you with that we need to know:


What OS (and version) are you using?

How did you install Postgres?



nagios=#

Regards,

Daulat




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Starting Postgres when there is no disk space

2019-05-03 Thread Igal Sapir
If anyone ever needs it, I wrote this 1-liner bash loop to create 16 temp
files of 640MB each with random names (well, 2-liner if you count the "config"
line). The files are zero-filled; on a filesystem that compresses data, read
from /dev/urandom instead, per Jeff's note below:

$ COUNT=16; TMPDIR=/pgdata/tmp/
$ for ((i=1; i<=COUNT; i++)); do dd if=/dev/zero of="${TMPDIR}$(tr -cd 'a-f0-9' < /dev/urandom | head -c 20).tmp" count=81920 bs=8192; done

Which produces about 10GB of unusable space that I can free up in the event
that I run out of disk (10GB might be excessive, but it works for me for
the time being):

$ ls -lh $TMPDIR
total 10G
-rw-r--r-- 1 root root 640M May  3 12:42 0a81845a5de0d926572e.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 1800a815773f34b8be98.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 1b182057d9b764d3b2a8.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 40f7b4cab222699d121a.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 498e9bc0852ed83af04f.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 49e84e5189e424c012be.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 7c984b156d11b5817aa5.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 7d1195b03906e3539495.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 9677ff969c7add0e7f92.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 9ae9d483adddf3317d7c.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 a546f3f363ca733427e7.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 a965856cb1118d98f66a.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 c162da7ecdb8824e3baf.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 d7c97019ce658b90285b.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 e76fc603ffe2c977c826.tmp
-rw-r--r-- 1 root root 640M May  3 12:42 fed72361b202f9492d7f.tmp

Best,

Igal

On Fri, May 3, 2019 at 9:09 AM Igal Sapir  wrote:

> Jeff,
>
> On Fri, May 3, 2019 at 6:56 AM Jeff Janes  wrote:
>
>> On Wed, May 1, 2019 at 10:25 PM Igal Sapir  wrote:
>>
>>>
>>> I have a scheduled process that runs daily to delete old data and do
>>> full vacuum.  Not sure why this happened (again).
>>>
>>
>> If you are doing a regularly scheduled "vacuum full", you are almost
>> certainly doing something wrong.  Are these "vacuum full" completing, or
>> are they failing (probably due to transient out of space errors)?
>>
>> An ordinary non-full vacuum will make the space available for internal
>> reuse. It will not return the space to the filesystem (usually), so it won't
>> get you out of the problem.  But it should prevent you from getting into the
>> problem in the first place.  If it is failing to reuse the space
>> adequately, you should figure out why, rather than just blindly jumping to
>> regularly scheduled "vacuum full".  For example, what is it that is
>> bloating, the tables themselves, their indexes, or their TOAST tables?  Or
>> is there any bloat in the first place? Are you sure your deletions are
>> equal to your insertions, over the long term average?  If you are doing
>> "vacuum full" and you are certain it is completing successfully, but it
>> doesn't free up much space, then that is strong evidence that you don't
>> actually have bloat, you just have more live data than you think you do.
>> (It could also mean you have done something silly with your "fillfactor"
>> settings.)
>>
>> If you don't want the space to be reused, to keep a high correlation
>> between insert time and physical order of the rows for example, then you
>> should look into partitioning, as you have already noted.
>>
>> Now that you have the system up again and some space freed up, I'd create
>> a "ballast" file with a few gig of random (to avoid filesystem-level
>> compression, should you have such a filesystem) data on the same device
>> that holds your main data, that can be deleted in an emergency to give you
>> enough free space to at least start the system.  Of course, monitoring is
>> also nice, but the ballast file is more robust and there is no reason you
>> can't have both.
>>
>
> Thank you for the tips.  I stand corrected.  These are regular VACUUM
> calls after the deletion, not VACUUM FULL.  It's a daily process that
> deletes records from N days ago, and then performs VACUUM, so yes, all of
> the inserted records should be deleted after N days.
>
> The bloat is in a TOAST table.  The primary table has a JSONB column which
> can get quite large.  The fillfactor setting was not modified from its
> default value (does the primary table fillfactor affect the toast table?
> either way they are both default in this case).
>
> Ballast file is a great idea.  I was just thinking about that a couple of
> days ago, but instead of one file I think that I will have a bunch of them
> at 1GB each.  That will give me more flexibility in clearing space as
> needed and keeping more "safety buffers" for when I make space.
>
> Thanks for your help,
>
> Igal
>
>


error fsm relations

2019-05-03 Thread Julie Nishimura
Guys,
Do you know what this message means?
POSTGRES_FSM_RELATIONS=CRITICAL: DB control (host:xxx) fsm relations used: 
76628 of 80000 (96%)


Is this caused by someone deleting a bunch of old data and not vacuuming?


Thanks!




From: Julie Nishimura 
Sent: Friday, May 3, 2019 9:39 AM
Cc: Adrian Klaver; pgsql-gene...@postgresql.org
Subject: Pgadmin III

Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am 
getting this error:
An error has occurred:
Column not found in pgSet: rolcatupdate

Do you know which version of pgAdmin I should use to avoid this? I am on 
Windows 7. Thanks


Connecting to NOTIFY with telnet

2019-05-03 Thread Igal Sapir
Is it possible to connect to Postgres for notifications via telnet?  This
is obviously more for learning/experimenting purposes.

I expected a simple way to connect and consume notifications but cannot
find any example or documentation on how to do that.

Any ideas?

Thanks,

Igal


Re: CREATE EXTENSION to load the language into the database

2019-05-03 Thread Adrian Klaver

On 5/3/19 10:24 AM, Daulat Ram wrote:

Hi Adrian,

Please find the requested details.

What OS(and version) are you using?
Ans:
bash-4.4$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
bash-4.4$


bash-4.4$ uname -a
Linux psql_primary_kbcn 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 
UTC 2017 x86_64 Linux
bash-4.4$


How did you install Postgres?

Ans:

We did the installation via a customized docker image provided by our dev ops team.


Well, in the image they need to make the plperl language available. If 
they are building from source then it needs to be compiled with 
--with-perl ("Build the PL/Perl server-side language"):

https://www.postgresql.org/docs/11/install-procedure.html

If they are using packages then they will need to include the 
appropriate package.
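
For example, a rough sketch of what that could look like when Postgres is
built from source inside an Alpine-based image (package names and steps here
are assumptions; the exact recipe depends on how the dev ops team builds the
image):

  # perl provides libperl.so, the runtime library the error above complains
  # about (its version must match the perl Postgres was built against);
  # perl-dev is needed to compile PL/Perl
  apk add perl perl-dev
  # in the Postgres source tree
  ./configure --with-perl
  make
  make install

After restarting the server, CREATE EXTENSION plperlu; should then succeed.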




Regards,
Daulat


-Original Message-
From: Adrian Klaver 
Sent: Friday, May 3, 2019 10:21 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: CREATE EXTENSION to load the language into the database

On 5/3/19 8:56 AM, Daulat Ram wrote:

Hello team,

We are getting the below issue while creating a function in Postgres 11.2

nagios=# create or replace function diskf (filesystem text, warn int,
err int) returns text as $BODY$

nagios$# use warnings;

nagios$# use strict;

nagios$# my $fs = $_[0];

nagios$# my $w = $_[1];

nagios$# my $e = $_[2];

nagios$# my $r = "WARNING";

nagios$# my $output = `df -kP $fs`;

nagios$# $output =~ /.*\s+(\d+)%.*/;

nagios$# $output = $1;

nagios$# if ($output > $w)

nagios$# { $r = "ERROR" if $output > $e;}

nagios$# else { $r = "OK";}

nagios$# return  "$r $output";

nagios$# $BODY$ language plperlu;

ERROR:  language "plperlu" does not exist

HINT:  Use CREATE EXTENSION to load the language into the database.

nagios=# SELECT * FROM pg_language;

 lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
----------+----------+---------+--------------+---------------+-----------+--------------+--------
 internal |       10 | f       | f            |             0 |         0 |         2246 |
 c        |       10 | f       | f            |             0 |         0 |         2247 |
 sql      |       10 | f       | t            |             0 |         0 |         2248 |
 plpgsql  |       10 | t       | t            |         13075 |     13076 |        13077 |
(4 rows)

While trying to solve this issue, I get the errors below when creating the
extensions.

nagios=# CREATE EXTENSION plperl;

ERROR:  could not open extension control file
"/usr/local/share/postgresql/extension/plperl.control": No such file
or directory

nagios=# CREATE LANGUAGE plperlu;

ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so":
Error loading shared library libperl.so.5.20: No such file or
directory (needed by /usr/local/lib/postgresql/plperl.so)

nagios=# CREATE LANGUAGE plperlu;

ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so":
Error loading shared library libperl.so.5.20: No such file or
directory (needed by /usr/local/lib/postgresql/plperl.so)


The plperl(u) extension has not been added to the Postgres installation.
You need to do that. To help you with that we need to know:

What OS (and version) are you using?

How did you install Postgres?



nagios=#

Regards,

Daulat




--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pgadmin III

2019-05-03 Thread Tom Lane
Julie Nishimura  writes:
> Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am 
> getting this error:
> An error has occurred:
> Column not found in pgSet: rolcatupdate

> Do you know which version of pgAdmin I should use to avoid this? I am on 
> Windows 7. Thanks

Development of pgAdmin 3 stopped some time ago, so I'm not surprised that
it can't cope with recent PG servers.  You should use pgAdmin 4 instead.
Don't know anything about its minor versions, though.

pgAdmin-specific questions are best directed to one of the pgAdmin lists,
eg pgadmin-supp...@lists.postgresql.org

regards, tom lane




Re: Pgadmin III

2019-05-03 Thread Julie Nishimura
Thanks!


From: Tom Lane 
Sent: Friday, May 3, 2019 11:25 AM
To: Julie Nishimura
Cc: Adrian Klaver; pgsql-gene...@postgresql.org
Subject: Re: Pgadmin III

Julie Nishimura  writes:
> Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I am 
> getting this error:
> An error has occurred:
> Column not found in pgSet: rolcatupdate

> Do you know which version of pgAdmin I should use to avoid this? I am on 
> Windows 7. Thanks

Development of pgAdmin 3 stopped some time ago, so I'm not surprised that
it can't cope with recent PG servers.  You should use pgAdmin 4 instead.
Don't know anything about its minor versions, though.

pgAdmin-specific questions are best directed to one of the pgAdmin lists,
eg pgadmin-supp...@lists.postgresql.org

regards, tom lane


RE: CREATE EXTENSION to load the language into the database

2019-05-03 Thread Daulat Ram
Hi Adrian,

Please find the requested details.

What OS (and version) are you using?
Ans:
bash-4.4$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
bash-4.4$


bash-4.4$ uname -a
Linux psql_primary_kbcn 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 
UTC 2017 x86_64 Linux
bash-4.4$


How did you install Postgres?

Ans:

We did the installation via a customized docker image provided by our dev ops team.

Regards,
Daulat


-Original Message-
From: Adrian Klaver  
Sent: Friday, May 3, 2019 10:21 PM
To: Daulat Ram ; pgsql-general@lists.postgresql.org
Subject: Re: CREATE EXTENSION to load the language into the database

On 5/3/19 8:56 AM, Daulat Ram wrote:
> Hello team,
> 
> We are getting the below issue while creating a function in Postgres 11.2
> 
> nagios=# create or replace function diskf (filesystem text, warn int, 
> err int) returns text as $BODY$
> 
> nagios$# use warnings;
> 
> nagios$# use strict;
> 
> nagios$# my $fs = $_[0];
> 
> nagios$# my $w = $_[1];
> 
> nagios$# my $e = $_[2];
> 
> nagios$# my $r = "WARNING";
> 
> nagios$# my $output = `df -kP $fs`;
> 
> nagios$# $output =~ /.*\s+(\d+)%.*/;
> 
> nagios$# $output = $1;
> 
> nagios$# if ($output > $w)
> 
> nagios$# { $r = "ERROR" if $output > $e;}
> 
> nagios$# else { $r = "OK";}
> 
> nagios$# return  "$r $output";
> 
> nagios$# $BODY$ language plperlu;
> 
> ERROR:  language "plperlu" does not exist
> 
> HINT:  Use CREATE EXTENSION to load the language into the database.
> 
> nagios=# SELECT * FROM pg_language;
> 
>  lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
> ----------+----------+---------+--------------+---------------+-----------+--------------+--------
>  internal |       10 | f       | f            |             0 |         0 |         2246 |
>  c        |       10 | f       | f            |             0 |         0 |         2247 |
>  sql      |       10 | f       | t            |             0 |         0 |         2248 |
>  plpgsql  |       10 | t       | t            |         13075 |     13076 |        13077 |
> 
> (4 rows)
> 
> While trying to solve this issue, I get the errors below when creating the 
> extensions.
> 
> nagios=# CREATE EXTENSION plperl;
> 
> ERROR:  could not open extension control file
> "/usr/local/share/postgresql/extension/plperl.control": No such file 
> or directory
> 
> nagios=# CREATE LANGUAGE plperlu;
> 
> ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
> Error loading shared library libperl.so.5.20: No such file or 
> directory (needed by /usr/local/lib/postgresql/plperl.so)
> 
> nagios=# CREATE LANGUAGE plperlu;
> 
> ERROR:  could not load library "/usr/local/lib/postgresql/plperl.so": 
> Error loading shared library libperl.so.5.20: No such file or 
> directory (needed by /usr/local/lib/postgresql/plperl.so)

The plperl(u) extension has not been added to the Postgres installation. 
You need to do that. To help you with that we need to know:

What OS (and version) are you using?

How did you install Postgres?

> 
> nagios=#
> 
> Regards,
> 
> Daulat
> 


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pgadmin III

2019-05-03 Thread basti
There is a port of pgadmin3 from BigSQL that supports PostgreSQL up to 10.
pgadmin3 in Debian buster also supports PostgreSQL 10.

https://metadata.ftp-master.debian.org/changelogs/main/p/pgadmin3/pgadmin3_1.22.2-5_changelog

What version of pgadmin3 did you use? pgadmin3_1.22 should support
PostgreSQL 9.6.

On 03.05.19 20:29, Julie Nishimura wrote:
> Thanks!
> 
> 
> *From:* Tom Lane 
> *Sent:* Friday, May 3, 2019 11:25 AM
> *To:* Julie Nishimura
> *Cc:* Adrian Klaver; pgsql-gene...@postgresql.org
> *Subject:* Re: Pgadmin III
>  
> Julie Nishimura  writes:
>> Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and I 
>> am getting this error:
>> An error has occurred:
>> Column not found in pgSet: rolcatupdate
> 
>> Do you know which version of pgAdmin I should use to avoid this? I am on 
>> Windows 7. Thanks
> 
> Development of pgAdmin 3 stopped some time ago, so I'm not surprised that
> it can't cope with recent PG servers.  You should use pgAdmin 4 instead.
> Don't know anything about its minor versions, though.
> 
> pgAdmin-specific questions are best directed to one of the pgAdmin lists,
> eg pgadmin-supp...@lists.postgresql.org
> 
>     regards, tom lane




Re: error fsm relations

2019-05-03 Thread Adrian Klaver

On 5/3/19 10:14 AM, Julie Nishimura wrote:

Guys,
Do you know what this message means?
POSTGRES_FSM_RELATIONS=CRITICAL: DB control (host:xxx) fsm relations used: 
76628 of 80000 (96%)


What is generating the above?
Postgres version?

FYI, it seems you piggybacked this post (along with the Pgadmin III post) 
on another thread, 'Back Slash \ issue'. That tends to mess with the 
message threading, so it is better to start a new thread.
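
FWIW, a message in that exact shape usually comes from a Nagios wrapper
around check_postgres, most likely its fsm_relations action (an assumption;
do confirm what your monitoring actually runs). That check compares relations
tracked in the free space map against the max_fsm_relations setting, which
only exists on servers older than 8.4:

  -- on the monitored server (8.3 or earlier)
  SHOW max_fsm_relations;

If that is what is firing, the usual response is to raise max_fsm_relations
(and probably max_fsm_pages) and make sure vacuuming keeps up; from 8.4 on
the free space map is managed automatically and the setting is gone.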





Is this caused by someone deleting a bunch of old data and not vacuuming?


Thanks!





*From:* Julie Nishimura 
*Sent:* Friday, May 3, 2019 9:39 AM
*Cc:* Adrian Klaver; pgsql-gene...@postgresql.org
*Subject:* Pgadmin III
Hello, I am trying to connect to PostgreSQL 9.6.2 using PGAdmin III, and 
I am getting this error:

An error has occurred:
Column not found in pgSet: rolcatupdate

Do you know which version of pgAdmin I should use to avoid this? I am on 
Windows 7. Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Connecting to NOTIFY with telnet

2019-05-03 Thread Andres Freund
Hi,

On 2019-05-03 11:06:09 -0700, Igal Sapir wrote:
> Is it possible to connect to Postgres for notifications via telnet?  This
> is obviously more for learning/experimenting purposes.

No. The protocol is too complicated to make that realistically doable /
useful.


> I expected a simple way to connect and consume notifications but cannot
> find any example or documentation on how to do that.

If you really wanted to go down that road, you'd have to read the
protocol specs. It'd not make sense to document how-to steps for
something as low-level as this.
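
If the goal is just to experiment with LISTEN/NOTIFY, psql is a far simpler
client than a raw socket (the channel name "mychan" below is arbitrary):

  -- session 1
  LISTEN mychan;

  -- session 2
  NOTIFY mychan, 'hello';

  -- back in session 1, run any command, e.g.
  SELECT 1;
  -- psql then prints something like:
  -- Asynchronous notification "mychan" with payload "hello" received
  -- from server process with PID 12345.

Note that psql only checks for pending notifications around command
execution, so you may need to issue a trivial query in the listening session
before the notification shows up.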

Greetings,

Andres Freund




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread pabloa98
If you can use a foreign data wrapper to connect
(https://github.com/tds-fdw/tds_fdw), then you can skip the migration back and
forth through CSV.

You could even do partial migrations if needed (it could impact some
queries' speed though).
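
A minimal sketch of the setup, with the host, credentials and table
definition as made-up placeholders (option names follow the tds_fdw README;
tds_version '5.0' is the Sybase ASE protocol):

  CREATE EXTENSION tds_fdw;

  CREATE SERVER sybase_svr
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'sybase-host', port '5000',
             database 'lms', tds_version '5.0');

  CREATE USER MAPPING FOR CURRENT_USER
    SERVER sybase_svr
    OPTIONS (username 'sa', password 'secret');

  -- one foreign table per Sybase table you want to pull from
  CREATE FOREIGN TABLE borrowers (
    id   integer,
    name varchar(100)
  ) SERVER sybase_svr OPTIONS (table_name 'dbo.borrowers');

An INSERT INTO local_table SELECT ... FROM borrowers then moves the data
without any intermediate CSV files.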

Pablo

On Fri, May 3, 2019 at 6:37 AM Adrian Klaver 
wrote:

> On 5/3/19 4:56 AM, Matthias Apitz wrote:
> >
> > Hello,
> >
> > We're investigating the migration of our LMS (Library Management System)
> > from Sybase ASE 15.7 to PostgreSQL 10.6. The databases used in the field have
> > around 400 columns, some of them also containing BLOB (bytea) data.
> > The DB sizes vary up to 20 GByte. The interfaces cover any kind of
> > language one could imagine :-) ESQL/C, JDBC, Perl DBD, ...
> >
> > Re/ the migration of the data itself, are there any use case studies
> > which we could keep in mind? We plan to export the tables with our own
> > tool which produces CSV with delimiter '|' (and \| if the char | is in
> > char columns too) and with hex representation of the BLOB data. This seems
> > to fit nicely with PostgreSQL's COPY command.
>
> You might want to also take a look at:
>
> A PostgreSQL foreign data wrapper to connect to TDS databases (Sybase
> and Microsoft SQL Server)
> https://github.com/tds-fdw/tds_fdw
>
>
> >
> > Any known pitfalls?
> >
> > Btw: We're investigating MySQL too, but this seems to be off the list
> > now for not having an ESQL/C interface.
> >
> > Regards
> >
> >   matthias
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>