Newly Created Source DB Table Not Reflecting into Destination Foreign Tables

2018-12-11 Thread ramsiddu007
Dear Professionals,
   I hope you are doing well. I am exploring *Foreign
Data Wrappers*. I created a foreign data wrapper, server, and user mapping
in the destination database. At that time the source database had only one
table. After creating the wrapper and the rest, I ran a SELECT query in the
destination database that extracts all the data from the table in the
source database.
   That time I got the whole data from the source database with
the help of the FDW. But since then I have created one more table in the source
database, and this table did not appear as a foreign table in the destination
database. Are there any other steps needed for this? Please let me know.

As a kind request to all, please share any notes, PDFs, or links
related to FDW; it will help me gain in-depth knowledge.

I hope you will.



-- 
*Best Regards:*
Ramanna Gunde

*Don't complain about the HEAT,*

*PLANT A TREE.*


Re: pg_restore fails due to foreign key violation

2018-12-11 Thread Olga Vingurt
Tom Lane  wrote:

> Hm.  In theory, that truncation failure in itself shouldn't have caused a
> problem --- autovacuum is just trying to remove some empty pages, and if
> they don't get removed, they'd still be empty.  However, there's a problem
> if the pages are empty because we just deleted some recently-dead tuples,
> because the state of the pages on-disk might be different from what it
> is in-memory.


It indeed looks like that was exactly the issue.
The error we saw in the event log happened only once and mentioned the
specific table we had issues with.
We had rows in the table which should have been deleted due to a foreign key
constraint (ON DELETE CASCADE is configured for the foreign key), and when I
tried to select one of those rows using the foreign key column, the query
returned nothing, so I guess the matching index was missing the rows.

In the short term, what you need to do is figure out what caused the
> permission failure.  The general belief among pgsql-hackers is that
> shoddy antivirus products tend to cause this, but I don't know details.
>

There is no antivirus on the Windows server. As it happened only once (in the
few years since we installed on the server) and we don't have any additional
information on why PostgreSQL got the "Permission denied" error, we will hope
for the best, i.e. that we won't get into this situation again.
Thanks a lot for the help!

Regards,
Olga


Code for getting particular day of week number from month

2018-12-11 Thread Mike Martin
Hi
For a particular sequence I needed to produce (scheduling the 2nd Monday of
each month for the coming year) I created the following query:

select to_char(min(date::date) + interval '1 week', 'DD/MM/YYYY') date
-- gets the first date for the chosen day of week in each month (Monday here),
-- then adds a week and formats it to the desired date string

from generate_series(
  '2018-12-01'::date,   -- start date
  '2020-12-01'::date,   -- end date
  '1 day'::interval
) date

where extract(dow from date) = 1
-- selects the day of week (1 = Monday)
GROUP BY (extract(year from date)*100) + extract(month from date)
-- groups by month and year
ORDER BY cast(min(date) as date)
-- sets order back to date

I couldn't see anything on google so thought I'd share it

Mike


finding out what's generating WALs

2018-12-11 Thread Chris Withers

Hi All,

With a 9.4 cluster, what's the best way to find out what's generating 
the most WAL?


I'm looking after a multi-tenant PG 9.4 cluster, and we've started 
getting alerts for the number of WALs on the server.
It'd be great to understand what's generating all that WAL and what's 
likely to be causing any problems.


More generally, what number of WALs is "too much"? check_postgres.pl, 
when used in Nagios format, only appears to be able to alert on absolute 
thresholds; does this always make sense? What's a good threshold to 
alert on?


cheers,

Chris



Re: Newly Created Source DB Table Not Reflecting into Destination Foreign Tables

2018-12-11 Thread Adrian Klaver

On 12/11/18 1:44 AM, ramsiddu007 wrote:

Dear Professionals,
                            I hope you are doing well. I am exploring 
*Foreign Data Wrappers*. I created a foreign data wrapper, server, and 
user mapping in the destination database. At that time the source database 
had only one table. After creating the wrapper and the rest, I ran a SELECT 
query in the destination database that extracts all the data from the table 
in the source database.
                        That time I got the whole data from the source database 
with the help of the FDW. But since then I have created one more table in the 
source database, and this table did not appear as a foreign table in the 
destination database. Are there any other steps needed for this? Please let me know.


Not enough information to make a guess. Need:

1) Postgres version

2) The FDW you are using and to what destination database e.g. 
www.postgresql.org/docs/11/postgres-fdw.html to Postgres.


3) The configuration settings for the wrapper and the server and user 
mapping.




As a kind request to all, please share any notes, PDFs, or links 
related to FDW; it will help me gain in-depth knowledge.


I hope you will.


--
_*Best Regards:*_
Ramanna Gunde

*Don't complain about the HEAT,*

*PLANT A TREE.*




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: finding out what's generating WALs

2018-12-11 Thread Achilleas Mantzios

On 11/12/18 4:00 μ.μ., Chris Withers wrote:

Hi All,

With a 9.4 cluster, what's the best way to find out what's generating the most 
WAL?

I'm looking after a multi-tenant PG 9.4 cluster, and we've started getting 
alerts for the number of WALs on the server.
It'd be great to understand what's generating all that WAL and what's likely to 
be causing any problems.



One way is to keep snapshots of pg_stat_user_tables and then try to identify 
spikes based on the various _tup fields.
Another way is to take a look in your archive (where you keep your archived WALs), try to identify a period where excessive WALs were generated, and then use 
pg_xlogdump (named pg_waldump from PostgreSQL 10 on, 
https://www.postgresql.org/docs/11/pgwaldump.html) to see what's in there.
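A minimal sketch of the snapshot approach: run something like the query below periodically and diff successive runs to see which tables are taking the most writes (the column choice here is just one option; any of the _tup fields can be compared).

```sql
-- Per-table write counters; the deltas between two runs show where the WAL is going
SELECT relname,
       n_tup_ins, n_tup_upd, n_tup_del,
       n_tup_ins + n_tup_upd + n_tup_del AS total_writes
FROM pg_stat_user_tables
ORDER BY total_writes DESC
LIMIT 10;
```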


More generally, what's number of WALs is "too much"? check_postgres.pl when used in nagios format only appears to be able to alert on absolute thresholds, does this always make sense? What's a good 
threshold to alert on?




Regarding the WALs in pg_wal (pg_xlog in 9.4), a good threshold could be 
anything more than, e.g., a 10% increase over wal_keep_segments with a trend to 
go up. If this number keeps going up, chances are something bad is happening.
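For alerting, the current segment count can be read from inside the database; a sketch (assumes the 9.4 directory name pg_xlog, and note that pg_ls_dir requires superuser):

```sql
-- Count WAL segment files (24-hex-digit names) in pg_xlog
SELECT count(*) AS wal_segments
FROM pg_ls_dir('pg_xlog') AS f
WHERE f ~ '^[0-9A-F]{24}$';
```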


cheers,

Chris




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Newly Created Source DB Table Not Reflecting into Destination Foreign Tables

2018-12-11 Thread Adrian Klaver

On 12/11/18 7:28 AM, ramsiddu007 wrote:

Please reply to list also.
Ccing list.


Thanks for the reply.
1. Postgres Version: 11

2. Below Databases are in Single Server:
  Source Database (TestDB1):
---
create table dept(deptno smallint, dname character varying(50), location 
character varying(50));

insert into dept values
  (10, 'Product - Development', 'Hyderabad'),
  (20, 'Product - Sales', 'Pune'),
  (30, 'Product - Marketing', 'Bangalore');


Destination Database (TestDB2):
-
create extension postgres_fdw;

CREATE SERVER fdw_hr
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (dbname 'TestDB1', host '192.168.52.25', port '5432');

CREATE USER MAPPING for postgres
SERVER fdw_hr
OPTIONS (user 'postgres', password 'Qazwsx@12');

IMPORT FOREIGN SCHEMA "public" FROM SERVER fdw_hr INTO public;

Now the dept table appears in the foreign tables tree view in the destination database.

I ran the query below in the destination database:
select * from dept;

Good, the above query returns the data.

After that I created the employee table in the source database (TestDB1) as 
below:

CREATE TABLE employee (empid int, eame character varying(20), deptno 
smallint);


insert into employee values (101, 'Einstein', 10), (102, 'Saleem Ali', 
20), (103, 'Adison', 30);


After that, the employee table did not appear in the foreign tables tree view 
in the destination database.


https://www.postgresql.org/docs/11/sql-importforeignschema.html

"By default, all tables and views existing in a particular schema on the 
foreign server are imported"


The emp table did not exist when you did the initial IMPORT FOREIGN 
SCHEMA. You will need to import it using either CREATE FOREIGN TABLE or
IMPORT FOREIGN SCHEMA. NOTE: For IMPORT FOREIGN SCHEMA you can exclude 
the existing table using:


EXCEPT ( table_name [, ...] )
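A sketch of the two options above, using the names from this thread (assuming the server fdw_hr and the schemas as created earlier); either one alone is enough:

```sql
-- Option 1: re-run the import restricted to the new table
IMPORT FOREIGN SCHEMA "public" LIMIT TO (employee)
  FROM SERVER fdw_hr INTO public;

-- Option 2: declare the foreign table by hand
CREATE FOREIGN TABLE employee (
  empid  int,
  eame   character varying(20),
  deptno smallint
) SERVER fdw_hr OPTIONS (schema_name 'public', table_name 'employee');
```

The second form is useful when you want a different local name or column list for the foreign table.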



I have done only those things; no other configuration was done.








--
Adrian Klaver
adrian.kla...@aklaver.com



Search path & functions in temporary schemas

2018-12-11 Thread jose luis pillado
Hi all,

I was trying to mock a function, so I followed the instructions in this
thread.

I created a function with the same name as the existing one in a different
schema, and I updated the search path, adding that schema at the beginning.

This solution worked with a real schema, but it did not work with a temporary
one.

Code working with a real schema:

SHOW SEARCH_PATH; -- public

CREATE OR REPLACE FUNCTION public.get_random_string()
RETURNS TEXT LANGUAGE SQL AS $$
SELECT 'real'::text;
$$;

SELECT get_random_string(); -- real

CREATE SCHEMA mock;
CREATE OR REPLACE FUNCTION mock.get_random_string()
RETURNS TEXT LANGUAGE SQL AS $$
SELECT 'mock'::text;
$$;

SELECT get_random_string(); -- real

SET SEARCH_PATH = mock, public;
SELECT get_random_string(); -- mock

Code not working with a temporary schema:

SHOW SEARCH_PATH; -- public

CREATE OR REPLACE FUNCTION public.get_random_string()
RETURNS TEXT LANGUAGE SQL AS $$
SELECT 'real'::text;
$$;
SELECT get_random_string(); -- real

SELECT nspname FROM pg_namespace WHERE oid = pg_my_temp_schema(); -- pg_temp_12

CREATE OR REPLACE FUNCTION pg_temp_12.get_random_string()
RETURNS TEXT LANGUAGE SQL AS $$
SELECT 'mock'::text;
$$;
SELECT get_random_string(); -- real

SET SEARCH_PATH = pg_temp_12, public;
SELECT get_random_string(); -- real


Is there any way to make this work?

Thanks,
Jose


Fwd: Code for getting particular day of week number from month

2018-12-11 Thread Francisco Olarte
On Tue, Dec 11, 2018 at 2:10 PM Mike Martin  wrote:
> For a particular sequence I needed to do (schedule 2nd monday in month for 
> coming year) I created the following query

nice, but a little brute force.

Is this what you are trying to do:

$ select d::date as month_starts, to_char(date_trunc('week', d - '1
day'::interval)::date + 14, 'YYYY-MM-DD Day') as "2nd_monday" from
generate_series('2018-12-01'::date, '2020-12-01'::date, '1
month'::interval) months(d);

Explanation:
generate_series yields the 1st day of each month.
1.- Subtract a day to get the LAST day of the previous month.
2.- Truncate to week, which happily for us sends it to Monday in my
locale (YMMV).
3.- Now you have the LAST Monday of the PREVIOUS month; just go forward as
many weeks as needed.

If another day of week is needed, say Wednesday, adjust the subtraction in the
previous phase (i.e., the last Wednesday of November is 2 days AFTER the last
Monday before November 28, which is two days BEFORE the end of November). If
I'm doing the math right, you would use something like:

date_trunc('week', -- this truncates to Mondays, so
 d  -- current month start
  - '1 day'::interval  -- last month end
  - '2 day'::interval  -- diff between the wanted day and the one date_trunc returns
)::date    -- back to dates so we can use integers, for lazy typers
+ 2   -- restore the 2 days we took off before,
+ 14  -- and add a couple of weeks.

This is the tricky part: as date_trunc rounds down, you have to play a
bit with where it rounds.

And then, the 2nd Monday of December is 14 days AFTER the last Monday of November.

You count from the end of the previous month because date_trunc rounds
down; if you had a function "rounding dates up" it would be much
easier.

Results:

 month_starts |  2nd_monday
--+--
 2018-12-01   | 2018-12-10 Monday
 2019-01-01   | 2019-01-14 Monday
 2019-02-01   | 2019-02-11 Monday
 2019-03-01   | 2019-03-11 Monday
 2019-04-01   | 2019-04-08 Monday
 2019-05-01   | 2019-05-13 Monday
 2019-06-01   | 2019-06-10 Monday
 2019-07-01   | 2019-07-08 Monday
 2019-08-01   | 2019-08-12 Monday
 2019-09-01   | 2019-09-09 Monday
 2019-10-01   | 2019-10-14 Monday
 2019-11-01   | 2019-11-11 Monday
 2019-12-01   | 2019-12-09 Monday
 2020-01-01   | 2020-01-13 Monday
 2020-02-01   | 2020-02-10 Monday
 2020-03-01   | 2020-03-09 Monday
 2020-04-01   | 2020-04-13 Monday
 2020-05-01   | 2020-05-11 Monday
 2020-06-01   | 2020-06-08 Monday
 2020-07-01   | 2020-07-13 Monday
 2020-08-01   | 2020-08-10 Monday
 2020-09-01   | 2020-09-14 Monday
 2020-10-01   | 2020-10-12 Monday
 2020-11-01   | 2020-11-09 Monday
 2020-12-01   | 2020-12-14 Monday
(25 rows)

Francisco Olarte.



Re: Newly Created Source DB Table Not Reflecting into Destination Foreign Tables

2018-12-11 Thread ramsiddu007
Ok, I will check. Thanks a lot.

On Tue, 11 Dec 2018 at 21:16, Adrian Klaver 
wrote:

> On 12/11/18 7:28 AM, ramsiddu007 wrote:
>
> > [snip]
>
> https://www.postgresql.org/docs/11/sql-importforeignschema.html
>
> "By default, all tables and views existing in a particular schema on the
> foreign server are imported"
>
> The emp table did not exist when you did the initial IMPORT FOREIGN
> SCHEMA. You will need to import it using either CREATE FOREIGN TABLE or
> IMPORT FOREIGN SCHEMA. NOTE: For IMPORT FOREIGN SCHEMA you can exclude
> the existing table using:
>
> EXCEPT ( table_name [, ...] )
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
*Best Regards:*
Ramanna Gunde

*Don't complain about the HEAT,*

*PLANT A TREE.*


Re: Search path & functions in temporary schemas

2018-12-11 Thread Tom Lane
jose luis pillado  writes:
> This solution worked with a real schema, but it did not with a temporary
> one. ...
> Is there any way to make this work?

The temp schema is intentionally excluded from the search path for
functions and operators, because otherwise it's just too easy to
trojan-horse things.  If you really want to create and call a
temp function, you have to schema-qualify its name when you call it.

To make that a bit less messy, you can use "pg_temp" as an alias
for your session's temp schema, rather than having to find out which
numbered temp schema you're really using.
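A minimal sketch of the pg_temp alias approach (the function name follows the example earlier in this thread):

```sql
-- Create the mock in the session's temp schema via the pg_temp alias...
CREATE OR REPLACE FUNCTION pg_temp.get_random_string()
RETURNS TEXT LANGUAGE SQL AS $$
SELECT 'mock'::text;
$$;

-- ...and schema-qualify the call, since the temp schema is never searched
-- for functions via the search path
SELECT pg_temp.get_random_string(); -- mock
```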

regards, tom lane



Re: Errors with schema migration and logical replication — expected?

2018-12-11 Thread Mike Lissner
Reupping this, since it was posted over the weekend and it looks like a bug in
logical replication. My problems are solved, but some very weird things happened
when doing a schema migration.

On Sun, Dec 9, 2018 at 5:48 PM Mike Lissner 
wrote:

> On Sun, Dec 9, 2018 at 12:42 PM Adrian Klaver 
> wrote:
>
>>
>> 1) Using psql have you verified that NOT NULL is set on that column on
>> the publisher?
>>
>
> Yes, on the publisher and the subscriber. That was my first step when I
> saw the log lines about this.
>
> 2) And that the row that failed in the subscriber is in the publisher
>> table.
>>
>
> Yep, it's there (though it doesn't show a null for that column, and I
> don't know how it ever could have).
>
>
>> 3) That there are no NULL values in the publisher column?
>>
>
> This on the publisher:
>
> select * from search_docketentry where recap_sequence_number is null;
>
> returns zero rows, so yeah, no nulls in there (which makes sense since
> they're not allowed).
>
> Whatever the answers to 1), 2) and 3) are the next question is:
>>
>> 4) Do you want/need recap_sequence_number to be NOT NULL.
>>
>
> Yes, and indeed that's how it always has been.
>
> a) If not then you could leave things as they are.
>>
>
> Well, I was able to fix this by briefly allowing nulls on the subscriber,
> letting it catch up with the publisher, setting all nulls to empty strings
> (a Django convention), and then disallowing nulls again. After letting it
> catch up, there were 118 nulls on the subscriber in this column:
>
>
> https://github.com/freelawproject/courtlistener/issues/919#issuecomment-445520185
>
> That shouldn't be possible since nulls were never allowed in this column
> on the publisher.
>
>
>> b) If so then you:
>>
>> 1) Have to figure out what is sending NULL values to the column.
>>
>> Maybe a model that has null=True set when it shouldn't be?
>>
>
> Nope, never had that. I'm 100% certain.
>
>
>> A Form/ModelForm that is allowing None/Null?
>>
>
> Even if that was the case, the error wouldn't have shown up on the
> subscriber since that null would have never been allowed in the publisher.
> But anyway, I don't use any forms with this column.
>
>
>> Some code that is operating outside the ORM e.g. doing a
>>direct query using from django.db import connection.
>>
>
> That's an idea, but like I said, nothing sends SQL to the subscriber (not
> even read requests), and this shouldn't have been possible in the publisher
> due to the NOT NULL constraint that has *always* been on that column.
>
>  2) Clean up the NULL values in the column in the subscriber
>> and/or publisher.
>>
>
> There were only NULL values in the subscriber, never in the publisher.
> Something is amiss here.
>
> I appreciate all the responses. I'm scared to say so, but I think this is
> a bug in logical replication. Somehow a null value appeared at the
> subscriber that was never in the publisher.
>
> I also still have this question/suggestion from my first email:
>
> > Is the process for schema migrations documented somewhere beyond the
> above?
>
> Thank you again,
>
> Mike
>
>


Re: Errors with schema migration and logical replication — expected?

2018-12-11 Thread Adrian Klaver

On 12/11/18 2:21 PM, Mike Lissner wrote:
Reupping this since it was over the weekend and looks like a bug in 
logical replication. My problems are solved, but some very weird things 
happened when doing a schema migration.


On Sun, Dec 9, 2018 at 5:48 PM Mike Lissner 
<mliss...@michaeljaylissner.com> wrote:


On Sun, Dec 9, 2018 at 12:42 PM Adrian Klaver
<adrian.kla...@aklaver.com> wrote:


1) Using psql have you verified that NOT NULL is set on that
column on
the publisher?


Yes, on the publisher and the subscriber. That was my first step
when I saw the log lines about this.

2) And that the row that failed in the subscriber is in the
publisher table.


Yep, it's there (though it doesn't show a null for that column, and
I don't know how it ever could have).

3) That there are no NULL values in the publisher column?


This on the publisher:

select * from search_docketentry where recap_sequence_number is null;

returns zero rows, so yeah, no nulls in there (which makes sense
since they're not allowed).

Whatever the answers to 1), 2) and 3) are the next question is:

4) Do you want/need recap_sequence_number to be NOT NULL.


Yes, and indeed that's how it always has been.

a) If not then you could leave things as they are.


Well, I was able to fix this by briefly allowing nulls on the
subscriber, letting it catch up with the publisher, setting all
nulls to empty strings (a Django convention), and then disallowing
nulls again. After letting it catch up, there were 118 nulls on the
subscriber in this column:


So recap_sequence_number is not actually a number, it is a code?



I appreciate all the responses. I'm scared to say so, but I think
this is a bug in logical replication. Somehow a null value appeared
at the subscriber that was never in the publisher.

I also still have this question/suggestion from my first email:

 > Is the process for schema migrations documented somewhere beyond
the above?


Not that I know of. It might help, if possible, to detail the steps in 
the migration, and also what program you used to do it. Given that it is 
Django, I am assuming some combination of migrate, makemigrations and/or sqlmigrate.




Thank you again,

Mike




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-11 Thread s400t
To Adrian:

Your question: "The original encoding was Win-10 (Japanese) correct?"

Let me answer this way: Yes, I created the file using Win 10 (J)'s Excel
(2016). When I saved the file as tab-delimited text, it seems it was saved as
ANSI, because when I opened it using Notepad I could see it was ANSI. I then
changed the encoding to UTF-8 using Notepad.

But I gave up on importing using phpPgAdmin. Over the weekend, I found a way
using PHP. Here is a snippet:

(1) Reading the file and creating a 2D array:

$fileRead = fopen($file, 'r');
$row = 1;
$twoDarray = array();
// note: fgetcsv takes a length argument before the delimiter (0 = no limit)
while (($line = fgetcsv($fileRead, 0, "\t")) !== FALSE) {
    if ($row == 1) { $row++; continue; } // skip header
    // join fields with tabs, the default delimiter pg_copy_from expects
    $twoDarray[] = implode("\t", $line) . "\n";
}
fclose($fileRead);

(2) Copying the rows into the table:

$con = pg_connect("host=$host...");
if (!$con) { die("..."); }
if (pg_copy_from($con, $tableName, $twoDarray) !== FALSE) {
    print "Success!";
} else {
    print "Failed!";
}
pg_close($con);

Oh yes, I had to convert the tab-delimited text file to UTF-8 encoding. For
this purpose Notepad was enough. Some versions of Excel seem to offer the
option to save a file with UTF-8 encoding, but the one I am using does not
have that option.

Time to move ahead.
Thanks!
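As an aside, the same load could be done without PHP by telling COPY the file's encoding directly; a sketch from psql, assuming the table and file are named spec and spec.txt as in this thread, and that the Excel "ANSI" output is Shift-JIS:

```sql
-- psql: load a tab-delimited file with a header row, declaring its encoding
\copy spec FROM 'spec.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER true, ENCODING 'SJIS')
```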



 - Original Message -
 From: Adrian Klaver 
 To: s4...@yahoo.co.jp; rob stone ; 
"pgsql-general@lists.postgresql.org"  
 Date: 2018/12/8, Sat 06:35
 Subject: Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI
   
On 12/7/18 9:04 AM, s4...@yahoo.co.jp wrote:
> I didn't specify any schema, so it was created in public schema.
> The error message also says "public"...
> //--
> ERROR: column "rec_id" of relation "spec" does not exist
> LINE 1: INSERT INTO "public"."spec" ("rec_id","title_c...
> //--
> 
> Output of the \d spec:
> 
> 
>                   Table "public.spec"
>             Column           |          Type           | Modifiers
> +-+---
>   rec_id                     | character varying(32)   | not null
>   title_category             | character varying(255)  |
>   doctype                    | character varying(255)  |
>   ... goes on like this for other columns.
> 
> What are you trying to see in the output of \d spec?

My basic procedure in troubleshooting is starting from the known and 
working out to the unknown, so my questions about the schema(s) and the 
table definition were to establish a known starting point. Also, a common 
issue that hits this list is multiple versions (across schemas) of an 
object in a database, with code hitting the wrong version; one of the 
signs of that is error messages of the form you got.


> 
> I don't understand what you mean by the import code is trying to insert 
> in to wrong version of the table.
> I visually checked the left side "menu like" structure of the 
> phpPgAdmin- there is no other table of that name.

See above.

> 
> You mentioned that quoted identifiers are not the issue.
> This prompted me to test the process in a table with a few columns and 
> ascii characters.
> Immediately it was clear that quoted identifiers were not to blame.
> 
> I found that I got that error when I change encoding of the tab 
> delimited file to UTF-8.
> Because my data contains non-ascii characters, if I don't use UTF-8, I 
> get this error.
> 
> ERROR:  invalid byte sequence for encoding "UTF8": 0x82
> 
> 
> ... and I read somewhere that if I open the text file in Notepad and save 
> it with UTF-8 encoding, I can get rid of the error. (When inserting 
> using pyDev (psycopg2)/Eclipse, that does get rid of the error...)

Notepad is not a text editor to use in general and in particular for 
data transformation work. It has limited knowledge of the text format. 
If you need to do that on Windows use Wordpad or better yet Notepad++:

https://notepad-plus-plus.org/ 

> 
> That's why I changed encoding.
> 
> And now I am stuck with this error.
> 
> But at least, now I am not blaming phpPgAdmin :)
> Thanks for the lead.
> 
> BTW, both server and client encoding of my pg db are UTF8.

The original encoding was Win-10 (Japanese) correct?

> 
> testdb=# SHOW SERVER_ENCODING;
>   server_encoding
> -
>   UTF8
> (1 row)
> 
> testdb=# SHOW CLIENT_ENCODING;
>   client_encoding
> -
>   UTF8
> (1 row)
> 
> testdb=#
> 
> 
>    - Original Message -
>    *From:* Adrian Klaver 
>    *To:* s4...@yahoo.co.jp; rob stone ;
>    "pgsql-general@lists.postgresql.org"
>    
>    *Date:* 2018/12/7, Fri 23:47
>    *Subject:* Re: Importing tab delimited text file using phpPgAdmin
>    5.1 GUI
> 
>    On 12/7/18 12:28 AM, s4...@yahoo.co.jp  wrote:
>      > Hello Adrian, Rob!
>      >
>      > Thank you for the comments.
>      >
>      > Oh, yes, I forgot to mention that I am using Pos