If anyone ever needs it, I wrote this bash one-liner to create 16 temp
files of 640MB each, zero-filled with random names (well, a two-liner if
you count the "config" line):
$ COUNT=16; TMPDIR=/pgdata/tmp/
$ for ((i=1; i<=COUNT; i++)); do dd if=/dev/zero of="${TMPDIR}$(tr -cd 'a-f0-9' < /dev/urandom | head -c 20).tmp" bs=1M count=640; done
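A scaled-down sketch of the same ballast-file idea, cheap enough to try
anywhere (64KB files in a scratch directory instead of 640MB files in
/pgdata/tmp/ -- the sizes and paths here are purely illustrative). The
point of pre-allocating such files is that deleting one frees disk space
instantly when the volume fills up:

```shell
# Scaled-down ballast-file sketch; sizes and paths are illustrative.
BALLAST_DIR="$(mktemp -d)"
COUNT=2
for i in $(seq 1 "$COUNT"); do
  # random 20-char hex name, zero-filled contents
  dd if=/dev/zero of="$BALLAST_DIR/$(tr -cd 'a-f0-9' < /dev/urandom | head -c 20).tmp" \
    bs=1024 count=64 2>/dev/null
done
ls "$BALLAST_DIR" | wc -l          # files created
# When the disk fills up, deleting one ballast file frees space at once:
rm "$BALLAST_DIR/$(ls "$BALLAST_DIR" | head -n 1)"
ls "$BALLAST_DIR" | wc -l          # one fewer ballast file
```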
Jeff,
On Fri, May 3, 2019 at 6:56 AM Jeff Janes wrote:
> On Wed, May 1, 2019 at 10:25 PM Igal Sapir wrote:
>
>>
>> I have a scheduled process that runs daily to delete old data and do full
>> vacuum. Not sure why this happened (again).
>>
>
> If you are doing a regularly scheduled "vacuum full", you are almost
> certainly doing something wrong.
On Wed, May 1, 2019 at 10:25 PM Igal Sapir wrote:
>
> I have a scheduled process that runs daily to delete old data and do full
> vacuum. Not sure why this happened (again).
>
If you are doing a regularly scheduled "vacuum full", you are almost
certainly doing something wrong. Are these "vacuum full"
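For what it's worth, a plain VACUUM (which autovacuum also runs) reclaims
dead-row space for reuse without the exclusive lock and full rewrite that
VACUUM FULL needs. A quick way to see whether dead tuples are actually
piling up, before reaching for VACUUM FULL (run in psql; standard
pg_stat_user_tables view):

```sql
-- Steadily growing n_dead_tup with a stale last_autovacuum suggests
-- tuning autovacuum rather than scheduling VACUUM FULL:
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```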
Right. I managed to start up Postgres by symlinking the following
directories to a new mount: pg_logical, pg_subtrans, pg_wal, pg_xact.
I then created a new tablespace on the new mount, set it to be the default
tablespace, and moved some of the smaller (about 30GB) tables to it. That
allowed me
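In SQL terms, the tablespace part of that recovery looks roughly like this
(the path, database name, and table name below are placeholders, not taken
from the thread):

```sql
-- Assumes the new 100GB partition is mounted and writable by the
-- postgres OS user; all names here are placeholders.
CREATE TABLESPACE recovery_space LOCATION '/mnt/newdisk/pg_tblspc';

-- Make it the default for newly created objects in this database:
ALTER DATABASE mydb SET default_tablespace = recovery_space;

-- Moving a table rewrites it in the new tablespace, freeing space on
-- the full volume (takes an ACCESS EXCLUSIVE lock while it runs):
ALTER TABLE smaller_table SET TABLESPACE recovery_space;
```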
Assuming you get the database back online, I would suggest you put a
procedure in place to monitor disk space and alert you when it starts to
get low.
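A minimal sketch of such a check (the threshold and mount point are
assumptions -- point MOUNT at the PGDATA volume and swap the echo for your
alerting channel):

```shell
#!/usr/bin/env bash
# Minimal disk-usage alert sketch; threshold and mount point are
# assumptions. Point MOUNT at the PGDATA volume in real use.
MOUNT="/"            # e.g. /pgdata in the setup discussed here
THRESHOLD=90         # warn when usage crosses 90%
USED=$(df -P "$MOUNT" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: $MOUNT is ${USED}% full"
else
  echo "OK: $MOUNT is ${USED}% full"
fi
```

Run from cron (e.g. every 15 minutes) this is usually enough to catch the
problem well before the disk actually fills.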
--
Mike Nolan
To get the cluster up and running, you only need to move a GB or two.
On 5/1/19 9:24 PM, Igal Sapir wrote:
Thank you both. The symlink sounds like a very good idea. My other disk
is 100GB and the database is already 130GB so moving the whole thing will
require provisioning that will take more time. I will try the symlinks
first. Possibly moving some tables to a tablespace on the other partition
to m
Best option: Copy/move the entire pgdata to a larger space. It may also
be enough to just move the WAL (leaving a symlink), freeing up the 623M,
but I doubt it, since VACUUM FULL occurs in the same tablespace and can
need an equal amount of space (130G) depending on how much it can
actually free up.
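Since VACUUM FULL rewrites each table in full, it is worth checking where
the 130GB actually lives first; a query along these lines (standard
catalog views, run in psql) lists the biggest tables:

```sql
-- Largest tables by total size (heap + indexes + TOAST):
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```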
On Thu, 2 May 2019 at 12:07, Igal Sapir wrote:
> I mounted an additional partition with 100GB, hoping to fix the bloat with a
> TABLESPACE in the new mount, but how can I do anything if Postgres will not
> start in the first place?
You could move the pg_wal directory over to the new partition and leave a
symlink in its place.
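The move-and-symlink step looks like this; the block below simulates it
with scratch directories so it is safe to run as-is (in a real recovery
you would stop Postgres first and operate on the actual PGDATA and the
new mount, preserving ownership and permissions):

```shell
# Simulated move-and-symlink of pg_wal; scratch dirs stand in for the
# real PGDATA and the new, roomier mount point.
PGDATA="$(mktemp -d)"; NEWMOUNT="$(mktemp -d)"
mkdir "$PGDATA/pg_wal"
echo "segment" > "$PGDATA/pg_wal/000000010000000000000001"

mv "$PGDATA/pg_wal" "$NEWMOUNT/pg_wal"       # move WAL to the new partition
ln -s "$NEWMOUNT/pg_wal" "$PGDATA/pg_wal"    # leave a symlink behind

ls -l "$PGDATA/pg_wal/"                      # Postgres still sees its WAL path
```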
I have Postgres running in a Docker container with PGDATA mounted from the
host. Postgres consumed all of the disk space, 130GB [1], and cannot be
started [2]. The database has a lot of bloat due to many deletions. The
problem is that now I cannot start Postgres at all.
I mounted an additional partition with 100GB, hoping to fix the bloat with
a TABLESPACE in the new mount.