Problem creating a database

2018-10-15 Thread Joshua White
Hi all,

I'm hoping someone can point me in the right direction. I've got a
PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
have full admin rights on this machine, so I can access logs, etc.

Recently I attempted to create a new database in this cluster. The command
succeeds, but when I try to connect to the new database, I get a "could not
open file" error:

psql: FATAL:  could not open file "base/618720/2610": No such file or
directory

It has been some time since I set up the database, so I don't know how long
ago this became an issue. I can't seem to find any other instances of this
problem online either. The logs are not helpful - even on the highest debug
setting, I only see the "connection authorized" then the fatal "could not
open file" error.

The data directory is on a separate disk array from the OS. I recently
checked it and there are no disk errors.
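
For reference, my understanding is that the path breaks down as
base/<database OID>/<relation filenode>, so the failing file can be decoded
with something like the following (a sketch, run from any database in the
cluster that still accepts connections):

-- base/618720/2610 -> database OID 618720, relation filenode 2610.
-- Identify which database the failing path belongs to:
SELECT datname FROM pg_database WHERE oid = 618720;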

Any thoughts or ideas would be much appreciated.

Kind Regards,
Joshua


Re: Problem creating a database

2018-10-15 Thread Joshua White
Thanks for the suggestion - plenty of disk space left (several hundred
gigabytes free).

Kind Regards,
Joshua White

On Tue, 16 Oct 2018 at 15:03, Ben Madin  wrote:

> Do you have adequate disk space left on your array?
>
> cheers
>
> Ben
>
>
> On 15 October 2018 at 17:46, Joshua White  wrote:
>
>> Hi all,
>>
>> I'm hoping someone can point me in the right direction. I've got a
>> PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
>> have full admin rights on this machine, so I can access logs, etc.
>>
>> Recently I attempted to create a new database in this cluster. The
>> command succeeds, but when I try to connect to the new database, I get a
>> "could not open file" error:
>>
>> psql: FATAL:  could not open file "base/618720/2610": No such file or
>> directory
>>
>> It has been some time since I set up the database, so I don't know how
>> long ago this became an issue. I can't seem to find any other instances of
>> this problem online either. The logs are not helpful - even on the highest
>> debug setting, I only see the "connection authorized" then the fatal "could
>> not open file" error.
>>
>> The data directory is on a separate disk array to the OS. Recently
>> checked it and there are no disk errors.
>>
>> Any thoughts or ideas would be much appreciated.
>>
>> Kind Regards,
>> Joshua
>>
>>
>
>
> --
>
> Dr Ben Madin
>
> BVMS MVPHMgmt PhD MANZCVS GAICD
> Managing Director
> Mobile:
> +61 448 887 220 <+61448887220>
> E-mail:
> b...@ausvet.com.au
> Website:
> www.ausvet.com.au
> Skype: benmadin
> Address:
> 5 Shuffrey Street
> Fremantle, WA 6160
> Australia
>


Re: Problem creating a database

2018-10-16 Thread Joshua White
Thanks for the tip. I've checked and the on-disk file behind "pg_index"
still exists.
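
(Roughly how I checked, from one of the databases that still accepts
connections - the exact path differs per database, and pg_stat_file needs
superuser:)

-- Locate pg_index's file relative to the data directory:
SELECT pg_relation_filepath('pg_index');
-- Then confirm it exists on disk without leaving psql; substitute the path
-- returned above - the one here is only a hypothetical example:
SELECT size, modification FROM pg_stat_file('base/16384/2610');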

I have existing databases in this cluster that I'd prefer not to drop and
recreate if possible.

I've tried dropping and recreating the new database I want to use, but each
time I get the same type of error.

Kind Regards,
Joshua White

On Mon, 15 Oct 2018 at 21:13, Laurenz Albe  wrote:

> Joshua White wrote:
> > I'm hoping someone can point me in the right direction. I've got a
> PostgreSQL 10 server
> > instance on CentOS 6, which I set up and manage. I have full admin
> rights on this machine,
> > so I can access logs, etc.
> >
> > Recently I attempted to create a new database in this cluster. The
> command succeeds,
> > but when I try to connect to the new database, I get a "could not open
> file" error:
> >
> > psql: FATAL:  could not open file "base/618720/2610": No such file or
> directory
> >
> > It has been some time since I set up the database, so I don't know how
> long ago this
> > became an issue. I can't seem to find any other instances of this
> problem online either.
> > The logs are not helpful - even on the highest debug setting, I only see
> the
> > "connection authorized" then the fatal "could not open file" error.
> >
> > The data directory is on a separate disk array to the OS. Recently
> checked it and
> > there are no disk errors.
> >
> > Any thoughts or ideas would be much appreciated.
>
> Looks like the file backing the "pg_index" table is gone.
>
> Can you check if the file exists in the data directory or not?
>
> It's hard to determine what happened, but something has been
> eating your data.  As it is, your best option would be to
> drop the database and recreate it from a backup.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: Problem creating a database

2018-10-16 Thread Joshua White
On Tue, 16 Oct 2018 at 18:58, Laurenz Albe  wrote:

> Please don't top post.
>
> Joshua White wrote:
> > On Mon, 15 Oct 2018 at 21:13, Laurenz Albe 
> wrote:
> > > Joshua White wrote:
> > > > I'm hoping someone can point me in the right direction. I've got a
> PostgreSQL 10 server
> > > > instance on CentOS 6, which I set up and manage. I have full admin
> rights on this machine,
> > > > so I can access logs, etc.
> > > >
> > > > Recently I attempted to create a new database in this cluster. The
> command succeeds,
> > > > but when I try to connect to the new database, I get a "could not
> open file" error:
> > > >
> > > > psql: FATAL:  could not open file "base/618720/2610": No such file
> or directory
> > > >
> > > > It has been some time since I set up the database, so I don't know
> how long ago this
> > > > became an issue. I can't seem to find any other instances of this
> problem online either.
> > > > The logs are not helpful - even on the highest debug setting, I only
> see the
> > > > "connection authorized" then the fatal "could not open file" error.
> > > >
> > > > The data directory is on a separate disk array to the OS. Recently
> checked it and
> > > > there are no disk errors.
> > > >
> > > > Any thoughts or ideas would be much appreciated.
> > >
> > > Looks like the file backing the "pg_index" table is gone.
> > >
> > > Can you check if the file exists in the data directory or not?
> >
> > Thanks for the tip. I've checked and the on-disk file behind "pg_index"
> still exists.
> >
> > I have existing databases in this cluster that I'd prefer not to drop
> and recreate if possible.
> >
> > I've tried dropping and recreating the new database I want to use, but
> each time get the same type of error.
>
> "pg_index" initially uses file 2610.
> That may of course change if you rewrite the table.
>
> Try the following as OS user "postgres":
>    oid2name -d <database> -f 2610
> Then you can see which table is associated with that file.
>
> Anyway, your database seems to be quite wrecked, and you'd probably need
> an expert to save what can be saved.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
Sorry about the top-posting - that's the default behaviour of my email client.

./oid2name -d postgres -f 2610 returns no tables.

Four of my six custom databases within the cluster return "pg_index" as the
table. The other two return nothing.
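
(For what it's worth, I believe the same lookup can be done from SQL inside
each database, roughly like this:)

-- Rough SQL equivalent of oid2name -f 2610; returns NULL if no relation in
-- the current database uses filenode 2610 (0 = the default tablespace):
SELECT pg_filenode_relation(0, 2610);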

I guess this is fast becoming a situation where I'd be better off starting
over and restoring the entire cluster from backups. Not ideal, but it might
have to be done. At least I can take the opportunity to replace CentOS 6 with
7 if I do that.

Kind Regards,
Joshua.


Re: Problem creating a database

2018-10-16 Thread Joshua White
On Tue, 16 Oct 2018 at 19:13, Colin Coles 
wrote:

> On 15/10/2018 10:46, Joshua White wrote:
> > Hi all,
> >
> > I'm hoping someone can point me in the right direction. I've got a
> > PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
> > have full admin rights on this machine, so I can access logs, etc.
> >
> > Recently I attempted to create a new database in this cluster. The
> command
> > succeeds, but when I try to connect to the new database, I get a "could
> not
> > open file" error:
> >
> > psql: FATAL:  could not open file "base/618720/2610": No such file or
> > directory
> >
> > It has been some time since I set up the database, so I don't know how
> long
> > ago this became an issue. I can't seem to find any other instances of
> this
> > problem online either. The logs are not helpful - even on the highest
> debug
> > setting, I only see the "connection authorized" then the fatal "could not
> > open file" error.
> >
> > The data directory is on a separate disk array to the OS. Recently
> checked
> > it and there are no disk errors.
> >
> > Any thoughts or ideas would be much appreciated.
> >
> > Kind Regards,
> > Joshua
>
> Hi Joshua,
>   I have a distant recollection of a similar problem on CentOS, solved
> by raising the mount point further up the fs hierarchy. i.e. I think I
> ended up having to have the disk partition mounted on /var/lib before it
> would work properly, after initially trying it on /var/lib/pgsql/data.
> If you're using 10 your path will probably be different, but I think it
> boiled down to the ownership of the underlying mount point.
>
> Cheers,
> Colin.
>
>
Hi Colin,

Thanks for the tip - I'll have a look into the mount point ownership as
well.

Kind Regards,
Joshua.


Re: Problem creating a database

2018-10-16 Thread Joshua White
>
> > > > Recently I attempted to create a new database in this cluster. The
>> command succeeds,
>> > > > but when I try to connect to the new database, I get a "could not
>> open file" error:
>> > > >
>> > > > psql: FATAL:  could not open file "base/618720/2610": No such file
>> or directory
>> > > >
>> > >
>> > > Looks like the file backing the "pg_index" table is gone.
>> > >
>> > > Can you check if the file exists in the data directory or not?
>> >
>> > Thanks for the tip. I've checked and the on-disk file behind "pg_index"
>> still exists.
>> >
>> > I have existing databases in this cluster that I'd prefer not to drop
>> and recreate if possible.
>> >
>> > I've tried dropping and recreating the new database I want to use, but
>> each time get the same type of error.
>>
>> "pg_index" initially uses file 2610.
>>
>> Anyway, your database seems to be quite wrecked, and you'd probably need
>> an expert to save what can be saved.
>>
>
>
It looks like you're right about it being seriously corrupted somehow,
although all my custom databases seem to work fine. I have data checksums and
amcheck enabled, and haven't found any issues using the query below.

SELECT bt_index_check(index => c.oid), c.relname, c.relpages
FROM pg_index i
JOIN pg_opclass op ON i.indclass[0] = op.oid
JOIN pg_am am ON op.opcmethod = am.oid
JOIN pg_class c ON i.indexrelid = c.oid
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE am.amname = 'btree'
-- Don't check temp tables, which may be from another session:
AND c.relpersistence != 't'
AND i.indisready AND i.indisvalid
ORDER BY c.relpages DESC;
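
(I believe amcheck's bt_index_parent_check could be swapped in for a stricter
check of individual indexes, at the cost of heavier locking - a sketch, with
a placeholder index name:)

-- Heavier check of a single index; the index name here is just an example:
SELECT bt_index_parent_check('pg_class_oid_index'::regclass);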

However, when I went to connect to template1, it failed:

# \connect template1
FATAL:  could not open file "base/1/2610": No such file or directory
Previous connection kept

If template1 is corrupt, that would explain why I am unable to create new
databases.
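
If template0 itself is still intact, I suppose a stop-gap could be to create
new databases from template0 instead of template1 (untested here):

-- Workaround sketch: bypass the damaged template1 when creating a database.
-- "newdb" is just a placeholder name.
CREATE DATABASE newdb TEMPLATE template0;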

Guess it's time to rebuild.


Re: Does idle sessions will consume more cpu and ram? If yes,how to control them

2018-12-19 Thread Joshua White
>
> In my application, the idle sessions are consuming CPU and RAM. Refer to
> the ps command output.
>

If you connect to the database, does select * from pg_stat_activity show a
lot of idle connections?
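
Something like this gives a rough breakdown by state:

SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;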


Re: Does idle sessions will consume more cpu and ram? If yes,how to control them

2018-12-19 Thread Joshua White
On Thu, 20 Dec 2018 at 14:35, Pavel Stehule  wrote:

> On Thu, 20 Dec 2018 at 02:41, Ron wrote:
>
>> On 12/19/18 7:27 PM, Michael Paquier wrote:
>> [snip]
>> > Each backend stores its own copy of the relation cache, so if you have
>> > idle connections which have been used for other work in the past then
>> > the memory of those caches is still around.  Idle connections also have
>> > a CPU cost in Postgres when building snapshots for example, and their
>> > entries need to be scanned from a wider array, but usually the relation
>> > cache bloat is a wider problem.
>>
>> So it's best to kill connections that have been idle for a while?
>>
>
> sure - one hour idle connection is too old.
>

I'd also consider closing the connection from the client end once its task is
done - that would reduce the number of idle connections in the first place.
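
If the clients can't be changed, one blunt server-side option (a sketch -
adjust the interval, and make sure the sessions really are safe to kill) is
to periodically terminate long-idle backends:

-- Terminate backends that have sat idle for more than an hour:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '1 hour';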