On 5/14/20 12:37 AM, Santhosh Kumar wrote:
Can you please help me understand why the following news was published
on "postgresql" with an encouraging message acknowledging BDR as open
source?
In my opinion it is not a bright idea to offer a product without any
support. Support is an inde
IMO a database of this size should only be backed up to S3. pgBackRest
has support for backing up to S3.
Why should the backup land in S3, and not somewhere local?
Any good reason why one should pay for the additional storage and
transfer costs?
Good question. The key point in my statement was "db of this size".
The problem with local backup is that space is not infinite. If your
business requ
>
> The database/schema per tenant solution can be tedious when you want to
> modify something on the structure and you have numerous tenants.
> Therefore I used the "tables with tenant_id" version in a similar situation
> but with a slight twist. One of the biggest issues of this solution is tha
Oracle has been losing market share consistently and irreversibly for the last 4-5
years. It is not only due to migration to open source
RDBMS, but also due to the fact that there are now many alternatives to RDBMS
for data storage. Until about 10-15 years back,
if an application had to store data, then RDBM
>
> Generally speaking, I discourage having lots of databases under one PG
> cluster for exactly these kinds of reasons. PG's individual clusters
> are relatively lightweight, after all.
>
Plus PG does not directly support cross-database queries using three-part names,
something
SQL Server excels at.
>
> Eh, that's something that I think we should be looking at supporting, by
> using FDWs, but I haven't tried to figure out how hard it'd be.
>
How good will that be performance-wise?
In DB2 you can do it using dblinks, and that kills performance. Isn't FDW
something like dblink?
The cool part
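For what it's worth, the FDW route mentioned above would look roughly like
this with postgres_fdw (server, database, schema, and table names below are
all made up for illustration):

CREATE EXTENSION postgres_fdw;
CREATE SERVER other_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'salesdb');
CREATE USER MAPPING FOR CURRENT_USER SERVER other_db
    OPTIONS (user 'app', password 'secret');
CREATE SCHEMA sales_remote;
IMPORT FOREIGN SCHEMA public FROM SERVER other_db INTO sales_remote;
-- a "three-part-name-like" query now works via the imported schema
SELECT count(*) FROM sales_remote.orders;

Unlike a classic dblink call, the planner can push conditions down to the
remote side, so performance is usually better than the old dblink pattern,
though a join across servers still has to ship rows.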
>
> If the data size is more than 6TB, which approach is better?
Do you require cross-tenant queries? If yes, then schemas are a better
solution.
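As a quick illustration of the schema-per-tenant case (tenant and table
names are made up), a cross-tenant query is just schema-qualified SQL:

SELECT 'acme'   AS tenant, count(*) FROM acme.orders
UNION ALL
SELECT 'globex' AS tenant, count(*) FROM globex.orders;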
>> Plus PG does not directly support cross database queries using 3 part name,
>> something
>> sqlserver excels at.
>Maybe because SQL server does not have real databases but schemas instead ?
>This sucks security wise.
SQLServer has real databases, each with its own transaction log files. You can
r
> That's what I understood as well, but I'm creating those concurrently
> WITHIN one and the same session and transaction. :-)
Did I interpret this correctly as "two different sessions via application threads within
the same PG transaction"?
Does the application create its own PG session for each thread, or
>
> But if you want to log in with encrypted password and someone can grab
> it from the file not sure what the difference is from grabbing the plain
> text one if they both end up logging the user in?
Exactly. saved me the trouble of typing this.
On 7/13/2020 4:52 AM, Rama Krishnan wrote:
For example, two transactions are going to increase the amount on the
same account by $100. The first transaction reads the current value
($1000) and then the second transaction reads the same value. The first
transaction increases the amount (thi
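To make that scenario concrete (table and column names are made up), the
read-then-write pattern is what loses one of the increments; letting the
database do the arithmetic, or locking the row first, avoids it:

-- lost update: both sessions read 1000, both write 1100, one +100 disappears
-- session A: SELECT amount FROM account WHERE id = 1;        -- sees 1000
-- session B: SELECT amount FROM account WHERE id = 1;        -- also sees 1000
-- session A: UPDATE account SET amount = 1100 WHERE id = 1;
-- session B: UPDATE account SET amount = 1100 WHERE id = 1;  -- overwrites A's change
-- safer alternatives:
UPDATE account SET amount = amount + 100 WHERE id = 1;        -- atomic increment
SELECT amount FROM account WHERE id = 1 FOR UPDATE;           -- or lock before re-reading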
>
> The main problem here is that "Amazon Aurora" is not PostgreSQL.
> If I understand Amazon's documentation, what you are using is
> officially named "Amazon Aurora with PostgreSQL Compatibility",
> and that sums is up quite nicely: Aurora is a database engine
> developed at Amazon - and it's in
Both can handle concurrent writes. Auto-increment is nothing but serial or
sequence cols, and they can handle unique concurrent requests. That is why
sometimes you may have gaps. UUID is not only unique, but is also unique across
space. You can have two different databases generate UUIDs at the sa
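A minimal sketch of the two options (table names are made up;
gen_random_uuid() is built in from PG 13, older versions need pgcrypto):

CREATE TABLE t_seq  (id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY, payload text);
CREATE TABLE t_uuid (id uuid DEFAULT gen_random_uuid() PRIMARY KEY, payload text);
-- concurrent inserts are fine in both; sequences may leave gaps on rollback,
-- UUIDs stay unique even across independent databases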
This is the problem. A and B were developed for Oracle where SELECT does not
open a transaction. We moved them to PG
and now we have to very accurately add COMMITs without breaking the flow. It is
quite a complex thing. I hoped we could
avoid that.
Interesting. Are you telling the Oracle vers
>>Interesting. Are you telling the Oracle version of the code had no
>>intermittent COMMIT and relied on one final COMMIT at the end. Even
>>in Oracle developers must have planned for commit since a long running
>>open transaction can lead to “snapshot too old” error.
>Yes, I am saying just
This is assuming other sessions change the same block your session is trying to
read.
===
It's been a while since I worked with Oracle as a developer. But my
understanding
is that even a read-only transaction, like the one you described above, requires
a point in time consistent image of th
I can’t for the life of me imagine how you arrived at this. SQLite is very
capable indeed.
>Its dialect of SQL is (deliberately) very similar to Postgres, featuring such
>niceties as
>recursive CTEs and window functions, and it can handle heavy use and
>multi-terabyte
> databases if you need (cf
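As an illustration of that point, a recursive CTE like this (made up for the
example) runs unchanged in both SQLite and Postgres:

WITH RECURSIVE nums(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM nums WHERE n < 5
)
SELECT n FROM nums;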
>I haven't replaced the broken View2 yet. Hope someone can point me to some
>further investigation.
Did you look at the actual definition of view2? Like all RDBMSs, PG materializes
the DDL as it existed at the time of creation and converts it into inline
SQL.
There is a possibility it ma
> My understanding is that when CONCURRENTLY is specified, Postgres implements
> the refresh as a series of INSERT, UPDATE,
> and DELETE statements on the existing view. So the answer to your question is
> no, Postgres doesn’t create another table and
> then swap it.
The INSERTS/UPDATE/DELETE ha
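For reference, CONCURRENTLY also requires a unique index on the materialized
view; a small sketch with made-up names:

CREATE UNIQUE INDEX ON mv_sales (sale_id);        -- prerequisite for CONCURRENTLY
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_sales;  -- applies row-level changes in place
REFRESH MATERIALIZED VIEW mv_sales;               -- plain refresh rewrites the contents
                                                  -- under an exclusive lock instead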
>
> Experience shows that global indexes in Oracle lead to problems when dropping a
> partition, rebuilding an index, or other such nice administrative stuff,
> often leading to unnecessarily long downtimes.
>
>
I think Oracle fixed it later by allowing asynchronous update of global index
afte
>
>
> I have a table in an analytics database (Postgres 12.3), that gathers data
> continuously. It is at 5B rows, with an average row size of 250 bytes. The
> table has five indexes, on bigint and varchar columns, all with keys of one
> or two columns.
>
> There are currently frequent update
In SQLServer each db has its own data file and transaction log file and hence
can be copied the way you described.
> Limitation of Backup utility. Transfer to another server was done
> successfully. When we want to re-copy to original server,
>we have to delete the original db in the original se
I am planning to switch to a web-based tool to read this mailing list. While
reading is easy via the web, how do I post a reply from the web? I recollect there
used to be a website from which one could reply.
Thanks
>Is there a central place where i can get postgres blogs as they are
>written by different blog sites, e.g. can google news app be tuned
>to just bring postgres blogs only?
I use https://planet.postgresql.org/
> Everyone is free to use whatever he/she wants. For me a web-based MUA
> would be the worst thing ever.
Oh well. I have created a separate email account for this to keep the clutter
out.
Thanks to all who took the time to reply.
>The CPU and RAM are normal even with 300 jobs (only 1-4% consumption),
>but I don't understand one thing here: if max_connections is set to 700,
>then
>why am I not able to connect to the db? The running jobs (300) are less than
>half of max_connections.
Please paste the error message
>There is no error message, when I try to connect the database while
>running vacuumdb with 300 jobs, it gets stuck.
But you mentioned max_connections, which now seems to be a red herring.
Based on your description, the impression I got is that you are getting
"sorry, too many clients already" e
select count(1) from snapshotlist where id not in (select id from q);
count
---
0
(1 row)
Doesn't this usually happen if q.id contains NULL? That is as per the ANSI
standard.
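Right, and the usual workaround is NOT EXISTS, which is not tripped up by
NULLs in q.id:

-- NOT IN yields no rows if q.id contains a NULL; NOT EXISTS behaves as intended
SELECT count(*)
FROM snapshotlist s
WHERE NOT EXISTS (SELECT 1 FROM q WHERE q.id = s.id);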
I've really only ever worked in web development. 90+% of web
developers regard doing anything at all clever in the database with
suspicion.
One common argument they use is that if you write your business logic in
stored procedures, you are locked into that database, since stored procedure
languag
No. PostgreSQL may remove a dead row, but a dead row is by definition
no longer visible, so it wouldn't be found by a query.
I am wondering whether it is good practice to use CTID in a WHERE
clause. Years ago when I used to code in Informix, using ROWID as a
generic substitute for primar
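A small illustration of why ctid is only safe within a very narrow window
(table name is made up):

SELECT ctid, * FROM orders WHERE order_no = 42;   -- say it returns ctid (0,7)
DELETE FROM orders WHERE ctid = '(0,7)';          -- fine right away, but any UPDATE,
                                                  -- VACUUM FULL or CLUSTER can move the
                                                  -- row, so never keep ctid as a key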
LANGUAGE 'edbspl'
This is the root cause of your issue.
You are not using PGSQL, but the EDB version of it, which is compatible with Oracle
PL/SQL.
Any idea why a select on this table does not yield any output for a user who
otherwise can get output for other information_schema tables? Does this table
require any special privilege compared to other tables of
information_schema? Thanks
https://www.postgresql.org/docs/current/infoschema-table-constraints.html
"The view table_constraints contains all constraints belonging to tables
that the current user owns or has some privilege other than SELECT on."
Thank you, that explains it.
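So, to see the rows, it is enough to grant any privilege other than SELECT
(table and role names are made up):

GRANT INSERT ON some_table TO app_user;   -- any privilege beyond SELECT qualifies
-- as app_user:
SELECT constraint_name, constraint_type
FROM information_schema.table_constraints
WHERE table_name = 'some_table';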
AWS Aurora based on PG 13.
I am writing a sproc to copy one schema into another. Here is the relevant
portion of the code.
Basically I want to commit after every table is created. In big schemas with
hundreds of tables I do not want to run the entire operation in one transaction.
I am getting error a
> You can commit in a loop, but not in a BEGIN / END block that has an exception
> handler: that creates a subtransaction for the duration of the BEGIN / END.
The reason I have to deal with the error exception is that I want to ignore failure
on a table and move on to the next table.
I thought I could tric
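A sketch of what should work (schema names and the catalog query are made up
for illustration): keep the exception handler in an inner block and COMMIT
outside it, which a procedure allows:

CREATE PROCEDURE copy_schema_tables()
LANGUAGE plpgsql
AS $$
DECLARE
    t text;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'src' LOOP
        BEGIN   -- inner block = subtransaction, so one bad table is just skipped
            EXECUTE format('CREATE TABLE dst.%I (LIKE src.%I INCLUDING ALL)', t, t);
            EXECUTE format('INSERT INTO dst.%I SELECT * FROM src.%I', t, t);
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'skipping table %: %', t, SQLERRM;
        END;
        COMMIT;  -- legal here because this outer block has no EXCEPTION clause
    END LOOP;
END;
$$;
CALL copy_schema_tables();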
AWS Aurora based on PG 13.
Large partitioned table of 5+ billion rows and 7TB in size.
ALTER TABLE abc ADD CONSTRAINT fk_123 FOREIGN KEY (a,b,c) REFERENCES xyz(1,2,3);
It seems this is not parallelized. Is there a way? Or is going directly into each
partition the only way (not even sure it is po
Our issue is that it takes 20 hrs to index the full table. Hopefully we can add the
FK on multiple child partitions concurrently; otherwise doing it per partition
offers no advantage from a performance POV.
Need to test. Hopefully PG will not lock the referenced table during the first
build, stopping c
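If it comes down to per partition, one thing worth testing (partition and
referenced column names below are made up) is adding each constraint as
NOT VALID and validating it separately; VALIDATE takes much weaker locks,
so validations of different partitions should be able to run side by side:

-- repeat per partition, each in its own session
ALTER TABLE abc_2021 ADD CONSTRAINT fk_123_2021
    FOREIGN KEY (a, b, c) REFERENCES xyz (a, b, c) NOT VALID;  -- quick, no full scan yet
ALTER TABLE abc_2021 VALIDATE CONSTRAINT fk_123_2021;
    -- the long part; per the ALTER TABLE docs it takes SHARE UPDATE EXCLUSIVE
    -- on the partition and only ROW SHARE on xyz, so it should not block
    -- validations running against other partitions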
On a different note, is the word memoize inspired by the Perl module Memoize, which
is used to do the same thing?
Aurora PG based on PG 13.4.
Our CREATE INDEX CONCURRENTLY on a very large partitioned table (5 billion rows)
waits on Lock: Speculative token. Never seen this in PG. A Google search
also shows nothing.
Is this an Aurora thing? If it is PG related I can provide full details.
Well, it is Aurora.
https://www.cnbc.com/2018/10/23/amazon-move-off-oracle-caused-prime-day-outage-in-warehouse.html
>
> Since the article was almost content-free I not would use it on either side
> of the argument. The only thing I pulled from it was Amazon changed databases
> and hit the learning curve. That will happen in either direction.
I agree but this is the key:
"Savepoints are an important database
>
> Is it so hard to accept commercial databases have advantages?
> I find that not one bit surprising.
>
> I've used PG since 90's and it's no secret the "big guys" beat PG on certain
> workloads.
>
In my previous workplace where they tested EDB to replace PG, they found all
PL/SQL based co
e to handle the pressure, slowing down the overall database performance, the
report said."
>
> Again, pretty much content-free. For all you know some application was
> creating savepoints, needlessly:
>
> https://www.postgresql.org/docs/10/static/sql-savepoint.html
>
> and not cleaning up afte
> Again, pretty much content-free. For all you know some application was
> creating savepoints, needlessly:
> https://www.postgresql.org/docs/10/static/sql-savepoint.html
I have hardly used savepoints in any application, but if I understand it
correctly, isn't it something which is typically
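For reference, the pattern from the linked SAVEPOINT page looks roughly like
this (table names here are made up): recover from one failed statement without
throwing away the whole transaction:

BEGIN;
INSERT INTO orders VALUES (1, 'widget');
SAVEPOINT before_detail;
INSERT INTO order_lines VALUES (1, 999);    -- suppose this one fails
ROLLBACK TO SAVEPOINT before_detail;        -- only the failed part is undone
COMMIT;                                     -- the first INSERT still commits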
>
> Amazon's web store may be a (mostly) stateless application, that doesn't mean
> their back end applications are.
>
Oh yes. There is nothing in that article which suggests that the root cause of
the outage was in the web based apps.
As you indicated, their back end may be the source of the
>>Does it mean that GIN is a very good choice for low-cardinality columns?
>Not necessarily. There are other indexes which also don't keep the column value in
>every leaf. Hash, for instance.
Well, I asked about GIN's usefulness for low cardinality. Good to know that
Hash can also be considered.
BT
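For a plain scalar column both are easy to try (table and column names are
made up; GIN on a scalar type needs the btree_gin extension):

CREATE EXTENSION IF NOT EXISTS btree_gin;           -- lets GIN index plain scalar types
CREATE INDEX events_status_gin  ON events USING gin  (status);
CREATE INDEX events_status_hash ON events USING hash (status);
-- GIN stores each distinct value once with a compressed list of row pointers,
-- which is what makes it interesting for low-cardinality columns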
Your best bet on a Mac is to use Docker.
On Thursday, October 25, 2018 Pratik Parikh wrote:
Thanks, I'll check them out. But what I am trying to produce is a zip
distribution for Mac OS X similar to the one available on the postgresql download
site. Homebrew works but it ties libpq to the prefix if
I have a project to develop a script/tool to copy data from DB2 to PG. The
approach I am thinking of is:
1. Export data from DB2 to a text file, with, say, pipe as the delimiter.
2. Load the data from the text file into PG using the COPY command.
In order to make it faster I can parallelize export and load wi
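Step 2 would look something like this (file path and table name are made up):

COPY customer FROM '/data/db2_export/customer.del'
    WITH (FORMAT csv, DELIMITER '|', NULL '');
-- several such COPY commands against different tables can run in parallel sessions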
> I've never used it, but there is this in case it's helpful:
> https://github.com/dalibo/db2topg/
I looked into it. I thought it was a schema converter plus data load. In other
words,
it is one of those one-time migration scripts. What I need is a constant
refresh.
We plan to use it daily
Per the documentation, unlogged tables are not crash safe and PG will truncate them
when it restarts after a crash. Does this apply even to read-only unlogged
tables?
For example:
On Monday I load data into unlogged tables.
Then from Tue onwards the table is only read by the application.
On Fri morning
> There is no such thing as a "read only" table in PostgreSQL. All tables are
> read/write no matter the frequency of either event. There is nothing
> inherently special about "no writes for 4 days" vs "no writes for 10
> seconds" that would allow for a distinction to be made. There cou
PG 10.5
I loaded 133 million rows to a wide table (more than 100 cols) via COPY. The
table has
no index at this time. Since I am the only user I don't see any other activity.
Now when I run select count(*) on the table where I just loaded data, it runs
forever,
more than 10 min and is still runnin
Must be something to do with Vacuum, as the second time I ran the SQL it did
not consume WAL logs.
>That represents setting the yes-this-row-is-committed hint bits on the
>newly loaded rows. The first access to any such row will set that bit,
>whether it's a select or a VACUUM or whatever.
Yes, now I recollect reading this in a blog. Thanks Tom.
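For anyone finding this later: running a vacuum right after the bulk load sets
the hint bits (and the visibility map) in one pass, instead of the first unlucky
SELECT paying for it (table name is made up):

VACUUM (ANALYZE) wide_table;   -- sets hint bits and the visibility map,
                               -- and refreshes planner statistics after the COPY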
> As long as you don’t have page checksums turned on,
> you can prevent this by turning off wal_log_hints.
I did not run initdb. How do I find out which parameters were used with initdb?
For page checksums
to be on, it must have been run with the -k option.
Our wal_log_hints is left at default
> select data_page_checksum_version from pg_control_init()
returned 1. So we have page_checksum turned on, and wal_log_hints off.
>Haven't tried it myself, but you may be able to connect the DB2 database
>to your PostgreSQL cluster using this FDW module:
>https://github.com/wolfgangbrandl/db2_fdw
>Looks like db2_fdw is DB2 LUW only though, so you might be out of luck
>if your DB2 is on IBM i (or z ;-)
As the thread indica
>
> I apologize for top posting, Google hid all of the other stuff.
>
Is it only me who thinks that when it comes to destroying email as a
communication tool, no one did a better job than effing gmail?
Well, your information needs some updating.
- On AIX, IBM had no issues selling Oracle, a rival to DB2.
- IBM Global Services, a consulting unit, was the single biggest sales force for
Oracle installations outside
Oracle. In other words, they ended up using Oracle for projects done by
IBM-GS more
>
> Andrew Smith schrieb am 16.11.2018 um 11:01:
>> Are there any core features at the
>> moment that are Linux only?
>
> JIT, introduced in Postgres 11, comes to mind
>
A better question would be: are there any production users of PG on Windows? :-)
> Yes, it is becoming increasingly difficult to persuade gmail etc. that you
> are not a spammer if you run your own mail server. If you have any
> interesting headers suggesting exactly what they disliked about my message,
> could you please forward them off-list? Thanks.
>
>
It is for t
I am running explain analyze cost on an SQL statement which reads from two large
tables (122 mil and 37 mil rows). The query is an UPDATE where we use a
derived table in the FROM clause and then join it back to the table
being updated.
The explain analyze cost itself is taking forever to run. It is running
for t
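Worth remembering that EXPLAIN ANALYZE really executes the UPDATE, so the run
includes doing all of the work. If you only want the measurement, wrap it in a
transaction you roll back (the statement below is a made-up stand-in for the
real one):

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE big_table b
SET    status = d.max_status
FROM  (SELECT id, max(status) AS max_status FROM detail GROUP BY id) d
WHERE  b.id = d.id;     -- the update really runs here
ROLLBACK;               -- ...but the changes are thrown away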
> Please do not hijack other threads by replying to a message and changing
> the subject. Just send a new mail to
> pgsql-general@lists.postgresql.org, or whatever list you want
> to send an email to.
>
I am truly sorry and this will not be repeated. I was just lazy.
I guess this would break
Version: PG 10.6 on AWS Linux.
I am trying to create an index on the function date_trunc('month', timestamp).
PG is complaining that the function must be marked as IMMUTABLE. So I
assume that date_trunc is not marked as immutable.
Definition of immutable from PG documentation
==
Thanks all. I forgot the TZ part.
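For the archives: the IMMUTABLE complaint only appears with timestamptz,
because the result then depends on the session TimeZone; pinning the zone (or
using a plain timestamp column) makes the expression indexable. Sketch with
made-up table and column names:

-- timestamp without time zone: this date_trunc variant is immutable, works directly
CREATE INDEX ON events (date_trunc('month', created_ts));
-- timestamptz: convert to a fixed zone first so the expression becomes immutable
CREATE INDEX ON events (date_trunc('month', created_at AT TIME ZONE 'UTC'));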
Glen,
I think your question can be posted here for a better response:
https://forums.aws.amazon.com/forum.jspa?forumID=227
Original Message
On Thu, Dec 20, 2018, at 3:57 PM, Glenn Schultz wrote:
>
> I have a Postgres database of about 1.5 terabytes on amazon aurora.
On Thu, Dec 27, 2018, at 5:23 PM, Bhavin Gandhi wrote:
> Hello,
> I'm trying to understand for a given unlogged table of a specific size
> and # of rows, if I do "alter table" on it to convert it to logged
> table, is there a performance difference between 9.5, 9.6 and 10? in
> other words are the
>
> pgadmin 4 is nothing to do with the PostgreSQL project itself, it's just a
> third party client.
>
> There are many other third-party clients listed here -
> https://wiki.postgresql.org/wiki/PostgreSQL_Clients -
> most of them probably better than pgadmin4.
Agreed. I use dbeaver and it is
> I'm not sure other clients are able to read from the WAL buffer, therefore
> I'm not sure the data is available to other clients at that specific
> point in time.
No. On the standby the buffer cache has to be populated with the
updates before other client sessions can read it. AFAIK other client
s
Sorry I misunderstood. The term "read consistency" is generally used
either in the context of isolation level or in the context of slaves.
> We don't have standby instance, as I have mentioned we are using just
> one instance of postgres serving local clients running on the same
> machine, do you
> The table is huge and it takes a lot of time to add the INDEX and the FOREIGN
> KEY although all values are NULL.
> Considering that the new DepartmentId column is NULL for all rows at this
> point, is there a way to make the INDEX and FOREIGN KEY creation run faster?
In your script to creat
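One way to split the work so each step is cheaper and less blocking (table,
column, and constraint names are made up): build the index without blocking
writers, add the FK as NOT VALID, then validate it separately:

CREATE INDEX CONCURRENTLY idx_emp_department ON employee (department_id);
ALTER TABLE employee
    ADD CONSTRAINT fk_emp_department FOREIGN KEY (department_id)
    REFERENCES department (id) NOT VALID;                    -- skips the full-table check
ALTER TABLE employee VALIDATE CONSTRAINT fk_emp_department;  -- checks existing rows later,
                                                              -- without an exclusive lock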
If this one appears in the list, then it means the problem is with AOL.
Are there any plans to support PG on WSL ? Just curious.
Not able to reproduce this with PG 11.1.
If col1 is any type other than boolean, the update statement fails with an error.
If col1 is boolean, then it updated correctly. In other words
update col1 = NULL
and col2 in (1,2)
is treated the same as
update col1 = NULL
where col2 in (1,2)
Also
Oh wait. I see that in both cases it did update the correct target rows, but
the value of col1 for the non-matching rows
is different. In the first case (AND col2), the non-matching rows also got
updated.
So yes, same behavior as yours.
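Right, and the reason is how the first form is parsed:

UPDATE tablea SET col1 = NULL AND col2 IN (1,2);
-- parsed as SET col1 = (NULL AND col2 IN (1,2)): there is no WHERE clause,
-- so every row is updated (NULL for matching rows, false for the rest),
-- and the expression is boolean, which is why a non-boolean col1 errors out
UPDATE tablea SET col1 = NULL WHERE col2 IN (1,2);
-- only the rows with col2 in (1,2) are touched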
> Sent: Thursday, April 18, 2019 at 2:36 PM
> From:
> The above is not the same format as OP's query:
>
> Update tableA set col1 = null and col2 in (1,2);
I did include SET in the SQL. I typed it wrong here.
>
> In what format are you dumping the DB2 data and with what specifications e.g.
> quoting?
>
DB2's export command quotes the data with "". So while loading, shouldn't that
take care of the delimiter-in-the-data issue?
>
> I don't think we've seen enough representative data to know exactly what the
> backslash is doing. It doesn't appear to be an escape, based on the sole
> example I've seen it appears to be a data separator between first name and
> last name.
>
> It seems increasingly likely to me that you
>
> Hope Iam detail this time :-)
>
Unfortunately still not enough. Can you post a sample of the data here, and
the command you used in DB2? Please post the SQL used in DB2 to dump the data.
IMO you are using the slowest tool to import.
Just one quick question: why can't you take a cluster backup using any of the
tools available and then drop all
unwanted databases after you import the cluster?
pg_basebackup will do a good job.
> I was wondering if anyone has any tips that are specific for SQL Server
> users? Best features? Known issues? Common rebuttals?
Are you talking about an SS to PG migration?
Generally SQLServer shops use SS-specific functions and T-SQL heavily, since
they provide very good functionality.
For e
More:
1. No db-level backup/restore in PG, at least no easy way.
2. No cross-db query.
> I think the main "gotcha" when I moved from SQL Server to Postgres was
> I didn't even realize the amount of in-line t-sql I would use to just get
> stuff done
> for ad-hoc analysis.
T-SQL is an exceptionally powerful SQL-based language. Add to it the many
functions
SS has. I recently had
> EDB or Aurora to Open source Postgres assuming we dont use AWS services OR
> would you suggest to move to Community version from the start by taking
> support/consultancy
> from other companies like 2nd quadrant and etc?
EDB is mainly attractive to Oracle shops who want Oracle compatibility.
>@Ravi
>My company is trying to avoid another vendor lock-in too, that's why we are a bit
>skeptical
>about going to EDB, as once we start using their Oracle compatibility
>feature it will be very difficult to move to the community edition again.
As far as I know, vendor lock-in applies only if
What does ls /usr/lib64/perl5/vendor_perl/auto/DBD/Pg/Pg.so return?
If there is no such file, then it means that on the dev server Perl DBD was
installed differently
than on prod. I am assuming LD_LIBRARY_PATH is also correctly set.
I apologize if my advice is wrong. It has been eons since I worked w
> and perl DBD is also installed but it can't load; that is the issue. Please advise
> me.
What is the value of LD_LIBRARY_PATH on both dev and prod?
>
> ==> ldd /usr/lib64/perl5/vendor_perl/auto/DBD/Pg/Pg.so
> linux-vdso.so.1 => (0x7fffddd8f000)
> libpq.so.5 => /usr/lib64/perl5/CORE/libpq.so.5 (0x7f5ecdbd6000)
I may be wrong, but is the above path in the
>
> Note - if am taking same prod single database backup and restore in new
> cluster no use for us and it will take more time.
> so business and team they need every 3 weeks for restore in dev server one
> single database and cant we do it in pg_dump and restore .
> They want using pgbackrest t
>
>
> Not , am saying we have the daily backup and full backup in prod server only
> and there is one database like a4 the db size is 1.5TB.
> so am not restore again in prod .
> Am taking directly single backup restore in dev its means in dev server only
> restore the database in new cluster.
--- Original Message
--
> On May 14, 2019, at 9:06 AM, Rob Sargent wrote:
>
>
> Which part confused you Ravi?
Same as you, this one
"Am taking directly single backup restore in dev its means in dev server only
restore the databa
>
> I was thinking of asynchonously cleaning it up rather than blocking
> DROP/DETACH ... which means you need to keep state somewhere. I don't
> think blocking DROP/DETACH is valuable -- a global index that blocks
> DROP/DETACH until the index is clean serves no useful purpose. (You
> could thi
On 6/14/19 10:01 AM, Tiemen Ruiten wrote:
LOG: checkpoint starting: immediate force wait
Does it mean that the DB is blocked until the completion of the checkpoint? Years ago
Informix used to have this issue until they fixed it around 2006.
> More generally: I find this complaint a little confusing. We did not
> consider reporting the "show row contents" DETAIL to the client to be a
> security hazard when it was added, because one would think that that's
> just data that the client already knows anyway. I'd be interested to see
> a
PG does not have a concept of differential backup since it does not track block-level
changes. pgBackRest has implemented differential backup using the timestamp of the
last update in data files. Not sure whether it works on Windows.
From: Rajmohan Masa, Sent: Thursday, April 27, 2023, 7:25
This looks like an early vs. late binding problem, also seen in other products. When you prepare the SQL, the optimizer
has no way of knowing the values which are going to be supplied in the future. So it is possible that at the time of preparing,
PG settles on a plan and uses it for all values, regardles
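If the cached generic plan is the culprit, newer versions let you force
re-planning with the actual values each time (plan_cache_mode exists from
PG 12; the prepared statement below is made up for illustration):

SET plan_cache_mode = force_custom_plan;   -- re-plan with the real parameter values
PREPARE by_customer(int) AS
    SELECT * FROM orders WHERE customer_id = $1;
EXECUTE by_customer(42);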
Truncate is not delete + vacuum.
It creates a new empty table, followed by a rename of the existing table to the
new empty table, and finally dropping of the old table.
On May 28, 2021 at 7:05 AM, Vijaykumar Jain
wrote:
Yes,
I too see growth when text type is used, but not when int or even
I am not sure about that
"It creates a new empty table , followed by rename of the existing table to the new
empty table and finally dropping of the old table."
You mean the table is re-created with a new oid?
I don't think the oid changes, but the relfilenode on disk changes. So let me
rephrase i
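Easy to check from pg_class (table name is made up): the oid stays the same
and the relfilenode changes after TRUNCATE:

SELECT oid, relfilenode FROM pg_class WHERE relname = 't1';
TRUNCATE t1;
SELECT oid, relfilenode FROM pg_class WHERE relname = 't1';  -- same oid, new relfilenode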
This is a very interesting case. Atul, keep us posted.