On 2/11/19 4:44 PM, Rich Shepard wrote:
Running postgresql-10.5 on Slackware-14.2.
A table has a column defined as
Column | Type | Collation | Nullable | Default
next_contact | date | | |
In a .sql file to insert rows in this table psql has a problem when there's
no
On 2/11/19 5:30 PM, Rich Shepard wrote:
On Mon, 11 Feb 2019, Tom Lane wrote:
An empty string is not a null.
Tom,
I understand this, yet thought that empty strings in date and numeric fields
were accepted. Guess I need to review this.
You've got ADOS (All Databases are Oracle Syndrome).
--
Angular momentum makes the world go 'round.
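The failure Tom is pointing at can be seen with a minimal sketch (the table name is hypothetical; the column matches the one quoted above):

```sql
CREATE TABLE contacts (next_contact date);

-- NULL is accepted in a nullable date column:
INSERT INTO contacts (next_contact) VALUES (NULL);

-- The empty string is not a NULL, and is not a valid date either:
INSERT INTO contacts (next_contact) VALUES ('');
-- ERROR:  invalid input syntax for type date: ""
```

In the .sql file, write NULL (or omit the column entirely) where there is no date.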
On 2/11/19 5:44 PM, Rich Shepard wrote:
On Mon, 11 Feb 2019, Ron wrote:
You've got ADOS (All Databases are Oracle Syndrome).
Interesting, as I've never bought, used, or seen anything from Oracle. Guess
it's transmitted by errant bits.
It's easily transmitted via toile
On 2/15/19 4:04 PM, Bruce Klein wrote:
[snip]
I'm glad Microsoft is trying though
If Steve "Linux is a cancer" Ballmer were dead, he'd be spinning in his grave...
--
Angular momentum makes the world go 'round.
On 2/19/19 5:02 PM, Julie Nishimura wrote:
Hello, we are almost out of space on our main data directory, and are about
to introduce a new directory to our cluster. We cannot use multiple physical
disks as a single volume, so we are thinking about creating a new tablespace.
Our current data_directory sh
On 2/20/19 6:16 PM, Adrian Klaver wrote:
On 2/20/19 2:22 PM, Tiffany Thang wrote:
Hi Adrian,
I managed to backup my table in parallel using -Fd but I'm back to my
original issue where I could not restore the table to a different schema.
For example,
I would like to backup testuser1.mytable an
m clear enough!
Feel free to ask for clarification or add new elements to go further on!
Hope this helps,
Have a nice day,
Thomas
On Wed, Feb 20, 2019 at 21:37, Ron <mailto:ronljohnso...@gmail.com> wrote:
On 2/19/19 5:02 PM, Julie Nishimura wrote:
Hello, we are almost out of sp
On 2/26/19 7:51 PM, Martin Mueller wrote:
I run Postgres 10.5. I understand that there is something called
tablefunc and it includes a crosstab function. On Stack Overflow I learn
that you import this function. But from where and how? The Postgres
documentation is quite clear and intelligib
amounts of data, and
analyzing it in a myriad of ways. It requires "administration from the
command line", as you're discovering.
Sincerely,
Ron
On 2/27/19 12:43 PM, Joe Conway wrote:
On 2/27/19 11:49 AM, Peter J. Holzer wrote:
On 2019-02-27 10:42:12 -0500, Tom Lane wrote:
Luca Ferrari writes:
On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
You can see most obvious reasons at
https://bugzilla.redhat.com/show_bug.cgi?id=124747
On 2/27/19 3:15 PM, Julie Nishimura wrote:
Hello everybody, I am new to postgresql environment, but trying to get up
to speed.
Can you please share your experience on how you automate refreshing a dev
environment on a regular basis (ideally weekly), taking into consideration
some of prod d
On 2/28/19 5:47 AM, Thomas Güttler wrote:
Hi experts,
where would you suggest someone to **not** use PostgreSQL?
1. Small embedded systems. SQLite is great for that.
2. Easy-to-implement Master-Master replication. (The Percona fork of MySQL
does that really well, if you can handle MySQL's
On 2/28/19 4:53 PM, Michael Lewis wrote:
[snip]
Would a sixth option be to re-create the column as array type
Codd is spinning in his grave...
On 2/28/19 7:53 PM, Michael Lewis wrote:
On Thu, Feb 28, 2019 at 4:57 PM Ron <mailto:ronljohnso...@gmail.com> wrote:
On 2/28/19 4:53 PM, Michael Lewis wrote:
[snip]
Would a sixth option be to re-create the column as array type
Codd is spinning in his grave...
On 2/28/19 8:00 PM, Julie Nishimura wrote:
Hello,
We set up our hot standby by putting master into archive mode and issuing
pg_basebackup command, and we are about to start our hot standby instance.
What are the things you suggest for validation (assuming it will come up
cleanly) and replicati
On 2/28/19 10:26 PM, Michael Lewis wrote:
Arrays are -- by definition -- not atomic, and so they fundamentally
break the model that relational databases are founded upon. If you
want to be a good database designer, don't use arrays.
Thanks. I was reading about Codd after your last
On 3/1/19 2:17 AM, Nicola Contu wrote:
Hello,
we are trying to move our postgres 10.5 instance to 11.2
We are trying to restore a DB in a pre-production env but we get an error
on the pg_restore command
[root@STAGING]# /usr/local/pgsql11.2/bin/pg_restore -v -U postgres -L
/data/cofinder_res
On 3/6/19 1:45 AM, Rob Sargent wrote:
[snip]
This construct had been working until recent changes but I cannot relate
the message to any deformity in the current schema or code.
Any pointers appreciated.
What were the recent changes?
On 3/7/19 1:54 AM, Julie Nishimura wrote:
Hello psql friends,
We need to migrate only 6 months worth of data from one instance to
another. What would be the easiest way to do it? In Oracle, I would set up
dblink. What about postgresql?
postgres_fdw
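A minimal postgres_fdw setup could look like the sketch below (server name, schema, table, column, and connection details are all hypothetical):

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER src_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'old-host', port '5432', dbname 'olddb');

CREATE USER MAPPING FOR CURRENT_USER SERVER src_srv
    OPTIONS (user 'migrator', password 'secret');

-- Expose the remote tables locally, then copy the six months of data:
CREATE SCHEMA staging;
IMPORT FOREIGN SCHEMA public FROM SERVER src_srv INTO staging;

INSERT INTO events
SELECT * FROM staging.events
WHERE event_ts >= now() - interval '6 months';
```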
On 3/7/19 2:58 PM, Bill Haught wrote:
[snip]
Like why would anyone choose Winbloz *if* you don't *have* to, I have no
idea.
1999 wants its insult back.
named pg_bulkload.
https://www.postgresql.org/docs/9.6/sql-copy.html
https://www.postgresql.org/docs/8.3/sql-copy.html
On 3/7/19 7:53 PM, Julie Nishimura wrote:
Thank you Ron! What if the dev environment is on 9.6, but prod is on version
8.3? Will postgres_fdw still be the right option?
Yes, for whole tables (even sets of tables) "pg_dump --table=" is good at
that. Even better, you can run the 9.6 pg_dump against the 8.3 database and
get parallelism with "--jobs".
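As a sketch (host, database, table, and path names are hypothetical), the parallel directory-format dump and restore might look like:

```shell
# Run the newer pg_dump against the old server; --jobs requires -Fd.
# Against servers older than 9.2, --no-synchronized-snapshots may also be needed.
pg_dump --host=old-host --dbname=olddb --table=big_table \
        --format=directory --jobs=4 --file=/backups/big_table.dir

pg_restore --host=new-host --dbname=newdb --jobs=4 /backups/big_table.dir
```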
On 3/7/19 8:11 PM, Julie Nishimura wrote:
Ron, thanks again. In case I need to migrate
On 3/8/19 9:20 PM, Tom Lane wrote:
Bruce Momjian writes:
On Sat, Mar 9, 2019 at 02:13:57AM +, Julie Nishimura wrote:
Is there a way to reconstruct DDL for creation of existing databases? For
example, I need to create dev environment for some (not all) of our existing
databases, and I woul
The Percona fork of MySQL makes active-active clustering very simple to set up.
On 3/12/19 9:10 AM, Benedict Holland wrote:
MySQL isn't ACID. Postgresql is a full-featured database that doesn't
allow injection. It is very safe and secure. Also, the way that PostgreSQL
has a much better user man
On 3/12/19 3:19 PM, Christopher Browne wrote:
On Tue, 12 Mar 2019 at 12:53, Benedict Holland
wrote:
I am not saying it is not well documented. I am saying that it isn't ACID
compliant, which it isn't, as they document.
I *love* the notion of being able to roll back DDL, but it has long
been c
On 3/13/19 9:28 AM, Thomas Güttler wrote:
Some days ago I asked "Where **not** to use PostgreSQL?" on this mailing
list.
Now I realized: Nobody talked about Blobs.
I guess most people do not store Blobs in PostgreSQL.
Where do you store Blobs?
(In my case Blobs are PDF/image files with size
On 3/15/19 6:44 PM, Adrian Klaver wrote:
On 3/15/19 4:37 PM, Sathish Kumar wrote:
Hi All,
I have created a read only user to perform select statements on our
database but whenever we create new tables on the database this user is
unable to view it unless I grant select again for this table. I
On 3/23/19 8:58 AM, Adrian Klaver wrote:
On 3/22/19 11:59 PM, Igor Korot wrote:
Hi, ALL,
I tried to follow the instructions at
https://www.postgresql.org/ftp/binary/v9.6.1/solaris/solaris11/i386/
in the README but I received following:
Off hand I would say the user you are running as does not h
On 3/22/19 9:50 PM, Igor Korot wrote:
Hi
On Sat, Mar 23, 2019 at 9:25 AM Ron wrote:
On 3/23/19 8:58 AM, Adrian Klaver wrote:
On 3/22/19 11:59 PM, Igor Korot wrote:
Hi, ALL,
I tried to follow the instructions at
https://www.postgresql.org/ftp/binary/v9.6.1/solaris/solaris11/i386/
in the
On 3/24/19 1:42 AM, Frank wrote:
Hi all
As I understand it, a general rule of thumb is that you should never
create a physical column if the data could be derived from existing
columns. A possible justification for breaking this rule is performance.
I have a situation where I am consid
On 3/24/19 3:05 AM, Frank wrote:
On 2019-03-24 9:25 AM, Ron wrote:
On 3/24/19 1:42 AM, Frank wrote:
Hi all
As I understand it, a general rule of thumb is that you should never
create a physical column if the data could be derived from existing
columns. A possible reason for breaking this
On 3/24/19 3:45 AM, Ron wrote:
[snip]
In every DBMS that I've used, the lside (left side) needs to be static
(not "a" static) instead of variable (like a function).
Thanks to Chris Travers for reminding me that the word is "immutable", not
"static".
On 3/25/19 8:15 AM, Frank wrote:
On 2019-03-24 2:41 PM, Peter J. Holzer wrote:
On 2019-03-24 10:05:02 +0200, Frank wrote:
Many thanks to Peter et al for their valuable insights. I have learned a lot.
> So the important part here is not whether data is added, but whether
> data is changed. Sur
On 3/28/19 3:23 AM, Sameer Kumar wrote:
[snip]
You can write a query with upper function:
select * from emp where upper(ename)=upper('aaa');
That's a guaranteed table scan.
On 3/28/19 3:33 AM, Steve Atkins wrote:
On Mar 28, 2019, at 8:29 AM, Ron wrote:
On 3/28/19 3:23 AM, Sameer Kumar wrote:
[snip]
You can write a query with upper function:
select * from emp where upper(ename)=upper('aaa');
That's a guaranteed table scan.
Unless you have an
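The exception being alluded to is an expression index; a sketch using the table from the quoted query:

```sql
-- With an index on the expression itself, the planner can match
-- upper(ename) = ... against the index instead of scanning the table:
CREATE INDEX emp_upper_ename_idx ON emp (upper(ename));

SELECT * FROM emp WHERE upper(ename) = upper('aaa');
```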
Go up a level from
https://zypp.postgresql.org/10/suse/sles-12-x86_64/repoview/postgresql10-server.html
to
https://zypp.postgresql.org/10/suse/sles-12-x86_64/repoview/letter_p.group.html
then get all the packages you need.
On 4/1/19 2:42 AM, Ankit Trivedi wrote:
Hello,
We have tried to inst
"so we need to stop our application until the export/import is completed."
Why?
On 4/1/19 9:47 AM, Sathish Kumar wrote:
Hi Adrian,
We are exporting live table data to a new database, so we need to stop our
application until the export/import is completed. We would like to
minimise this down
On 4/1/19 12:37 PM, Glenn Schultz wrote:
All,
The query below is designed to insert into a table. This works when I
have a single loan which I insert. However, if remove the part of the
where clause of a single loan the insert does not work. The table fnmloan
is a large table with 500mm + r
On 4/3/19 12:50 AM, Perumal Raj wrote:
Hi ALL
We are planning to reclaim unused space from 9.2 Version postgres Cluster,
Method : VACUUM FULL
Does *every* table have *so much* free space that it's impractical to just
let the files get refilled by normal usage?
DB Size : 500 GB
Expec
On 4/3/19 8:18 AM, Steven Lembark wrote:
Trying to find a way of moving a large table between databases
in the same cluster. There is not sufficient space to copy the
contents -- the dedicated tablespace that fits the beastie is
on an 80% full disk.
Given that the two databases live in the same
On 4/3/19 8:39 AM, Steven Lembark wrote:
On Wed, 3 Apr 2019 08:33:54 -0500
Ron wrote:
On 4/3/19 8:18 AM, Steven Lembark wrote:
Trying to find a way of moving a large table between databases
in the same cluster. There is not sufficient space to copy the
contents -- the dedicated tablespace
On 4/3/19 3:45 PM, Perumal Raj wrote:
Hi Stephen
Thanks for the response ,
Version : 9.2
We never ran VACUUM FULL in the past, All we are doing just manual vacuum
( Weekly ) .
Based on the Observation ( test run ) , we were able to reclaim 150 GB out
of 500 GB .
We are heading to a planned
On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
hi
Respected international pgsql team
pershing=# grant INSERT on public.hyd to ravi;
GRANT
i have granted insert command access to non superuser(ravi)
pershing=> insert into hyd (id,name) values('2','delhi');
INSERT 0 1
here data inserted
pershing=#
You'd think the SELECT permission implicitly required by the use of UPDATE
would be covered by GRANT UPDATE.
On 4/4/19 7:25 AM, Patrick FICHE wrote:
Hi,
If I’m not wrong, UPDATE requires SELECT permission as the UPDATE
statement needs to read the data to be updated.
So, you should p
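In practice that means granting both privileges; a sketch using the names from the thread:

```sql
GRANT SELECT, UPDATE ON public.hyd TO ravi;

-- UPDATE alone suffices only for statements that read no existing column
-- values (no WHERE clause or SET expression referencing the table's columns).
```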
On 4/5/19 3:43 AM, Rob Northcott wrote:
I’ve had a couple of customers complaining of slow searches and doing some
testing last night it seems to be much slower on the live server than on
my test setup.
It’s quite a messy query built up by the search code, with lots of joins
and subqueries.
Rob,
pg_dump/restore gets rid of all the dead space, and you should *always* run
an ANALYZE after pg_restore, since pg_restore doesn't populate the
statistics tables.
On 4/5/19 4:35 AM, Rob Northcott wrote:
Hi Ron,
Thanks for that. I did just run analyse and vacuum on the live dat
Hi,
In 9.6, does such a thing exist? (We just restored a VM from snapshot and I
want to verify the cluster sanity.)
Thanks
On 4/5/19 2:10 PM, Nicholas Magann wrote:
I'm a student at the University of Arizona. My current course is having us
pick a product and do a Security Assessment on it. Part of the assessment
requires us to obtain any authorization necessary before doing the
assessment. Does PostgreSQL require o
On 4/5/19 5:45 PM, Kevin Wilkinson wrote:
on 10.2, we're seeing very high cpu usage when doing an update statement
on a relatively small table (1GB). one of the updated columns is text,
about 1k bytes. there are four threads doing similar updates concurrently
to the same table (but different ro
On 4/6/19 6:50 PM, Tom Lane wrote:
senor writes:
[snip]
The --link option to pg_upgrade would be so much more useful if it
weren't still bound to serially dumping the schemas of half a million
tables.
To be perfectly blunt, if you've got a database with half a million
tables, You're Doing It
Look also at pg_stat_all_tables.n_dead_tup for tables which are candidates
for vacuuming.
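For example, to list the biggest vacuum candidates:

```sql
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_all_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC
LIMIT 20;
```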
On 4/10/19 11:49 PM, Perumal Raj wrote:
Thanks Kevin for the inputs,
In my Case there are 500+ Tables and biggest chunk 30GB ( Table only) +
its indexes.
So i have created 6 batches and executed in par
On 4/11/19 12:24 PM, Tom Lane wrote:
Alvaro Herrera writes:
On 2019-Apr-11, rihad wrote:
2019-04-11 19:39:44.450844500 tuples: 19150 removed, 2725811 remain, 465 are
dead but not yet removable
What Jeff said. This vacuum spent a lot of time, only to remove miserly
19k tuples, but 2.7M dea
Hi,
PCI auditors have mandated that our databases (all running v9.6) send an
email when certain users log in. Thus, I've been searching for how to do
this, but without much luck.
https://www.postgresql.org/message-id/flat/20170720204733.40f2b7eb.nagata%40sraoss.co.jp
This long thread from a
On 4/11/19 9:12 PM, Tom Lane wrote:
Ron writes:
PCI auditors have mandated that our databases (all running v9.6) send an
email when certain users log in. Thus, I've been searching for how to do
this, but without much luck.
PAM is the usual suggestion
Can you be more specific? (All
On 4/11/19 9:52 PM, Tom Lane wrote:
Ron writes:
On 4/11/19 9:12 PM, Tom Lane wrote:
PAM is the usual suggestion
Can you be more specific?
I'm suggesting that you use PAM auth
https://www.postgresql.org/docs/current/auth-pam.html
and then configure the email behavior on the PAM
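The pg_hba.conf side of that suggestion could be sketched as follows (the PAM service name and user are hypothetical; the email itself would be wired up in the PAM stack, not in Postgres):

```
# TYPE  DATABASE  USER       ADDRESS      METHOD
host    all       audituser  0.0.0.0/0    pam pamservice=postgresql-login
```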
On 4/12/19 3:11 PM, Paul van der Linden wrote:
Hi,
For my process, I needed to drop all the tables in a tablespace except one
which I truncated.
After that I would have expected to have a couple of KB max in that
folder, but there was about 200GB in it.
Did you vacuum afterwards?
There we
On 4/13/19 7:28 PM, Schneider, Jeremy wrote:
On Apr 11, 2019, at 19:52, Tom Lane wrote:
Ron writes:
I bet requests like this will start to make it onto the beaten path.
Meh. I'm not that excited about inventing our own versions of wheels
that already exist, especially when there'
On 4/14/19 4:05 AM, Peter J. Holzer wrote:
On 2019-04-13 22:22:16 -0500, Ron wrote:
In our case, another looming Auditor requirement is to be able to instantly
kick off -- or at least send a warning email -- when certain roles log in
from unapproved IP addresses or programs. For example
On 4/16/19 4:22 AM, Tim Kane wrote:
So I have a situation where I would like to modify a field that is
currently a domain type over a varchar(9)
Specifically:
CREATE DOMAIN old_type AS varchar(9)
This isn't ideal, let's just say.. legacy.
I wish to modify this type.. ideally to a text type w
On 4/16/19 9:28 AM, Adrian Klaver wrote:
On 4/16/19 7:19 AM, Ron wrote:
On 4/16/19 4:22 AM, Tim Kane wrote:
So I have a situation where I would like to modify a field that is
currently a domain type over a varchar(9)
Specifically:
CREATE DOMAIN old_type AS varchar(9)
This isn't ideal,
On 4/16/19 9:42 AM, Tom Lane wrote:
Adrian Klaver writes:
I suspect the OP wants the type to text with a CHECK constraint to allow
for increasing the length of field values in the future by just changing
the CHECK setting. If that is the case would changing the type to text
and then adding a CH
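That suggestion could be sketched as follows (table, column, and constraint names are hypothetical):

```sql
ALTER TABLE legacy_tbl ALTER COLUMN code TYPE text;
ALTER TABLE legacy_tbl
    ADD CONSTRAINT code_len_chk CHECK (length(code) <= 9);

-- Widening later is then a constraint swap rather than a type change:
-- ALTER TABLE legacy_tbl DROP CONSTRAINT code_len_chk;
-- ALTER TABLE legacy_tbl ADD CONSTRAINT code_len_chk CHECK (length(code) <= 20);
```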
On 4/18/19 2:14 AM, Andreas Kretschmer wrote:
Am 18.04.19 um 08:52 schrieb rihad:
Hi. Say there are 2 indexes:
"foo_index" btree (foo_id)
"multi_index" btree (foo_id, approved, expires_at)
foo_id is an integer. Some queries involve all three columns in their
WHERE clauses, some in
On 4/18/19 8:45 AM, Gavin Flower wrote:
On 19/04/2019 01:24, Ron wrote:
On 4/18/19 2:14 AM, Andreas Kretschmer wrote:
[snip]
(Prefix compression would obviate the need for this question. Then your
multi-column index would be *much* smaller.)
True, but a multi column index will still be
On 4/18/19 11:43 AM, Vikas Sharma wrote:
Hi,
I have come across a query that a developer wrote to update a few rows in
table, the query did update the two desired rows but also updated the
rest of the table with the column value as 'false'.
Update tableA set col1 = null and col2 in (1,2);
T
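What that statement actually does is assign the value of the boolean expression `null and col2 in (1,2)` to col1 in every row, since there is no WHERE clause. The intended statement separates SET from WHERE:

```sql
-- What ran (sets col1 in ALL rows to NULL or false):
-- UPDATE tableA SET col1 = (NULL AND col2 IN (1, 2));

-- What was intended (touches only the two desired rows):
UPDATE tableA SET col1 = NULL WHERE col2 IN (1, 2);
```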
On 4/21/19 1:46 PM, Adrian Klaver wrote:
On 4/21/19 9:35 AM, Daulat Ram wrote:
Hello Team,
We are getting below error while migrating pg_dump from Postgresql 9.6 to
Postgresql 11.2 via pg_restore in docker environment.
90d4c9f363c8:~$ pg_restore -d kbcn "/var/lib/kbcn_backup19"
pg_restore:
On 4/21/19 3:58 PM, Adrian Klaver wrote:
On 4/21/19 1:42 PM, Ron wrote:
On 4/21/19 1:46 PM, Adrian Klaver wrote:
On 4/21/19 9:35 AM, Daulat Ram wrote:
Hello Team,
We are getting below error while migrating pg_dump from Postgresql 9.6
to Postgresql 11.2 via pg_restore in docker environment
On 4/24/19 3:17 PM, pabloa98 wrote:
Hello
Sadly today we hit the 1600 columns limit of Postgresql 11.
How could we add more columns?
Note: Tables are OK. We truly have 2400 columns now. Each column
represents a value in a matrix.
We have millions of rows so I would prefer not to transpose e
On 4/24/19 3:22 PM, Adrian Klaver wrote:
On 4/24/19 1:17 PM, pabloa98 wrote:
Hello
Sadly today we hit the 1600 columns limit of Postgresql 11.
How could we add more columns?
Note: Tables are OK. We truly have 2400 columns now. Each column
represents a value in a matrix.
Not sure how hit 16
On 4/24/19 5:55 PM, Alvaro Herrera wrote:
On 2019-Apr-24, pabloa98 wrote:
Regarding to (2), We are good by adding a patch and recompile a patched
version for our server databases.
But we are open on helping to add thousands of columns support as a
compile-time parameter if there are other peop
On 4/28/19 3:21 PM, senor wrote:
Hi All,
I'm looking for advice for optimizing the pg_dump portion of "pg_upgrade
--link". Since this schema-only dump can't take advantage of parallel
processing with jobs, I'm looking for any preparation or configuration
settings that can improve speed.
9.2 to 9
On 5/1/19 11:39 AM, Julien Rouhaud wrote:
On Wed, May 1, 2019 at 6:27 PM Chuck Martin wrote:
I need help figuring out why a query is not returning the records I expect it
to. I'm searching on a DateTime column (timestamp without time zone - not
nullable). The query includes:
AND event.Prim
To get the cluster up and running, you only need to move a GB or two.
On 5/1/19 9:24 PM, Igal Sapir wrote:
Thank you both. The symlink sounds like a very good idea. My other disk
is 100GB and the database is already 130GB so moving the whole thing will
require provisioning that will take more
On 5/3/19 6:56 AM, Matthias Apitz wrote:
Hello,
We're investigating the migration of our LMS (Library Managment System)
from Sybase ASE 15.7 to PostgreSQL 10.6. The used database in field have
around 400 columns, some of them are also containing BLOB (bytea) data.
The DB size vary upto 20 GByte.
On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
Am 05.05.19 um 18:47 schrieb Sathish Kumar:
Is there a way to speed up the importing process by tweaking Postgresql
config like maintenance_work_mem, work_mem, shared_buffers etc.,
sure, take the dump in custom format and use pg_restore with -j.
On 5/6/19 2:47 PM, Igal Sapir wrote:
but I want to instill confidence in them that anything they do with SQL
Server can be done with Postgres.
Right off the top of my head, here are some things you can't (easily and
trivially) do in Postgres:
- Transparent Data Encryption
- Block level full,
On 5/7/19 1:52 PM, David Gauthier wrote:
Hi:
I'm going to be requesting a PG instance supported by an IT team in a
large corp. They will be creating the server as a VM. We will be loading
the DB using scripts (perl/dbi) on linux, possibly using bulk loading
techniques if that's required. Q
No, I'm asking about the application.
On 5/7/19 3:51 PM, David Gauthier wrote:
>>Home-rolled application, or third party?
Are you asking about how they do VMs ?
They already provide PG v9.6.7, so I gather they're not averse to
supporting PG DBs.
On Tue, May 7, 201
On 5/8/19 5:42 AM, Prashant Hunnure wrote:
Dear Team,
I am working on a postgres database, version 9.3, which is part of the
OpenGeo Suite, and now my running database has been hacked by someone. In
the current situation I'm able to view my database under pgAdmin III but
unable to view the tables, functio
On 5/8/19 3:30 PM, Sandeep Saxena wrote:
[snip]
@Ravi
My company is trying to avoid another vendor lock-in too; that's why we are
a bit skeptical about going with EDB, as once we start using their Oracle
compatibility feature it will be very difficult to move back to the
community edition.
What ab
https://github.com/pgaudit/pgaudit might help. It's packaged for install by
the Postgres team. (Of course, it's too late for existing accounts.)
On 5/9/19 4:11 PM, Kumar, Virendra wrote:
Thanks Chris!
Since PostgreSQL still have to have those accounts even if we authenticate
it externally we
On 5/14/19 3:59 PM, Rich Shepard wrote:
On Tue, 14 May 2019, Adrian Klaver wrote:
A file level backup or database dump?
Adrian,
File level. April 20th is my most recent database dump because I forgot to
run it last Friday afternoon.
Note that referring to file level copies as dumps can be
On 5/16/19 4:36 AM, nigel.ander...@gmx.com wrote:
Hi,
I've just inherited an ancient install of 9.1.17 after our tech guy left,
on what turns out to be a rapidly dying server and being a total newb to
PostgreSQL (and not much more advanced on Linux) I'm a little stuck on the
way ahead.
I've ma
On 5/18/19 2:27 PM, Peter J. Holzer wrote:
On 2019-05-18 10:49:53 -0700, David G. Johnston wrote:
On Saturday, May 18, 2019, Peter J. Holzer wrote:
On 2019-05-16 08:48:51 -0700, David G. Johnston wrote:
> On Thu, May 16, 2019 at 8:31 AM Daulat Ram
wrote:
On 5/18/19 3:49 PM, Peter J. Holzer wrote:
On 2019-05-18 15:19:22 -0500, Ron wrote:
On 5/18/19 2:27 PM, Peter J. Holzer wrote:
On 2019-05-18 10:49:53 -0700, David G. Johnston wrote:
You don’t perform math on a hash
That's not generally true. Hashes are used for further computatio
On 5/18/19 5:39 PM, Peter J. Holzer wrote:
On 2019-05-18 17:14:59 -0500, Ron wrote:
On 5/18/19 3:49 PM, Peter J. Holzer wrote:
On 2019-05-18 15:19:22 -0500, Ron wrote:
On 5/18/19 2:27 PM, Peter J. Holzer wrote:
On 2019-05-18 10:49:53 -0700, David G. Johnston wrote
On 5/19/19 5:43 AM, Peter J. Holzer wrote:
[snip]
But please be aware that I answered your question 'How is it "using
math" to use a hash key?', not 'How are hash indexes in PostgreSQL
implemented?'. So my answer covered the most simple and generic
implementation.
I understand.
On 5/20/19 4:14 PM, Will Hartung wrote:
[snip]
Also to note, I tried just loading the table with no indexes, and I was
getting a solid 22MB/s via iostat of just raw data load (just to prove that
the I/O system, while certainly not extraordinary, was functional).
I think you answered this earlier,
On 5/20/19 4:48 PM, Will Hartung wrote:
On May 20, 2019, at 2:36 PM, Ron wrote:
I think you answered this earlier, but does the same stalling happen when
indexes are dropped?
No, the data loads fine.
The way I originally stumbled upon this was that I had off loaded the data for
some other
On 5/20/19 5:43 PM, Will Hartung wrote:
On May 20, 2019, at 2:55 PM, Ron wrote:
And it pathologically loads even when there's just a PK on the numeric field?
Yea, that works fine.
Drop all indexes, load data, recreate indexes?
No, I use the incremental load as it gives a much b
On 5/20/19 6:51 PM, Will Hartung wrote:
On May 20, 2019, at 4:27 PM, Ron <mailto:ronljohnso...@gmail.com> wrote:
I'm confused. You wrote above that loading without indexes and with just
the PK works just fine; if you *really* need it loaded in Aurora or
production, just dr
On 5/20/19 7:21 PM, Will Hartung wrote:
On May 20, 2019, at 5:15 PM, Ron wrote:
Are there a sufficiently small number of elements in each traits object that
you can do something like this, on the UNINDEXED table?
SELECT traits->element1, traits->element2, count(*)
from eis_entry
gr
On 5/21/19 12:27 PM, Rich Shepard wrote:
On Tue, 21 May 2019, Michael Lewis wrote:
For each row-
Insert into organizations table if the record does not exist, returning ID.
Insert into people using that ID.
Michael,
The org_id will not exist until I run the insert script.
Else, load all th
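The generated ID can be made available in the same statement with a data-modifying CTE; a sketch with hypothetical table and column names, assuming a unique constraint on org_name:

```sql
WITH org AS (
    INSERT INTO organizations (org_name)
    VALUES ('New Org')
    ON CONFLICT (org_name) DO UPDATE SET org_name = EXCLUDED.org_name
    RETURNING org_id
)
INSERT INTO people (org_id, person_name)
SELECT org_id, 'New Person' FROM org;
```

The no-op DO UPDATE is what makes RETURNING yield the existing org_id when the row is already there; a plain DO NOTHING returns no row on conflict.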
On 5/22/19 4:18 PM, Perumal Raj wrote:
Hi All,
We have recently migrated our postgres DB to a new server (CentOS 6.9).
Both source and target versions of the OS/DB are the same, and the
configuration is apples-to-apples.
But we started seeing a lot of processes named 'migration' at the OS level
on the new server, whi
On 5/24/19 1:15 PM, Ken Lacrosse wrote:
Is there any way in postgresql to have a "snippet" of SQL code which you
could apply to all tables. Something you could add which would ensure
that every table always has a Created, Changed and Deleted column for
example. Sort of like a C include I su
On 5/24/19 1:27 PM, Adrian Klaver wrote:
On 5/24/19 11:15 AM, Ken Lacrosse wrote:
Is there any way in postgresql to have a "snippet" of SQL code which you
could apply to all tables. Something you could add which would ensure
that every table always has a Created, Changed and Deleted column for
On 5/25/19 11:49 AM, Jānis Pūris wrote:
Hello,
I'm working with a 9.4-BDR cluster and want to move away from BDR tech
altogether. My idea was to follow the instructions at
http://bdr-project.org/docs/stable/ to first strip the node of BDR, making
it into a "regular" node, and then moving the d
On 5/29/19 8:26 AM, Tom Lane wrote:
PegoraroF10 writes:
We like to use With to insert, update and return some value to user. But some
informations of those related tables are not available on that time, is that
a bug ?
No, see the "WITH Clause" section of the SELECT reference page:
The p
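The behavior in question: all sub-statements in a WITH run against the same snapshot, so one sub-statement's changes are visible to the others only through RETURNING. A sketch with a hypothetical table:

```sql
WITH upd AS (
    UPDATE accounts SET balance = balance - 100
    WHERE id = 1
    RETURNING balance
)
SELECT balance FROM upd;   -- sees the new value, via RETURNING only
-- A plain "SELECT balance FROM accounts WHERE id = 1" elsewhere in the
-- same query would still see the old value.
```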
On 5/31/19 11:17 AM, Sathish Kumar wrote:
Hi Team,
We have a database and keep creating new tables as requirements come in.
Every time, we have to grant readonly permission on the newly created
tables to the db user. Instead, is there a way to inherit privileges?
Basically, we have a reado
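The usual answer is default privileges; a sketch for a hypothetical readonly role:

```sql
-- Existing tables:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

-- Future tables created by the role running this statement:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO readonly;
```

Note that ALTER DEFAULT PRIVILEGES only affects objects later created by the role that ran it (or by the role named with FOR ROLE).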
On 6/4/19 7:19 PM, Adrian Klaver wrote:
On 6/4/19 3:29 PM, Lou wrote:
Hi everyone,
Is it possible to convert a boolean yes or no field to hold a one letter
string? For example, the strings: 's' 'f' 'p' 'e'
To start off, I just need to convert true to 's'. false will have to be
manually c
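One way to sketch the in-place conversion (table and column names are hypothetical; false is left NULL here for the manual pass mentioned above):

```sql
ALTER TABLE mytable
    ALTER COLUMN flag TYPE varchar(1)
    USING CASE WHEN flag THEN 's' END;  -- true -> 's'; false/NULL -> NULL
```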
On 6/5/19 9:48 AM, Pavan Kumar wrote:
Hello Experts,
Is it possible to configure multiple backup repositories in pgbackrest
tool in one server?
I am getting a few issues with that.
https://pgbackrest.org/command.html#command-backup
3.4.9
Repository Path Option (--repo-path)
Path where backu