Hi,
Is there a SQL query or a database parameter setting that I can use from an external application to determine whether the PostgreSQL database is running in the cloud (e.g. Amazon RDS or IBM Cloud) or in an on-prem environment?
Thanks,
Rashmi
Igal Sapir wrote on 07.05.2019 at 07:58:
> GIS is a good feature but it's a niche feature, so while I'll mention
> it with extensions I am looking for more general-purpose comparisons
> and areas where Postgres is as-good or better than SQL Server.
I have a comparison of various DBMS products on
For me, another very useful featureset in Postgres is the extensive set of
datatypes and functions, including the strong JSONB support.
Also, I would focus on the widespread support of PostgreSQL by services
such as Amazon, Google, Heroku,
Another place to focus on would be the really extensive l
Tony,
On Mon, May 6, 2019 at 10:35 PM Tony Shelver wrote:
> I have to agree on the geospatial (GIS) features.
> I converted from SQL Server to Postgresql for our extended tracking
> database. The SS geospatial feature set doesn't seem nearly as robust or
> complete or performant as that suppli
Ron,
On Mon, May 6, 2019 at 12:54 PM Ron wrote:
> On 5/6/19 2:47 PM, Igal Sapir wrote:
> > but I want to instill confidence in them that anything they do with SQL
> > Server can be done with Postgres.
>
> Right off the top of my head, here are some things you can't (easily and
> trivially) do in
Brent,
On Mon, May 6, 2019 at 1:44 PM Brent Wood wrote:
> Hi Igal,
>
> One relevant comment I found interesting a couple of years ago...
>
> A New Zealand Govt agency was installing an institutional GIS system
> (several thousand potential users). It supported different back-end spatial
> databa
I have to agree on the geospatial (GIS) features.
I converted from SQL Server to Postgresql for our extended tracking
database. The SS geospatial feature set doesn't seem nearly as robust or
complete or performant as that supplied by PostGIS.
The PostGIS ecosystem of open source / 3rd party tools
> On Apr 27, 2019, at 12:55 AM, Andrew Gierth
> wrote:
>
> Obvious solution:
>
> create function uuid_keys(mapData jsonb) returns uuid[]
>   language plpgsql immutable strict
> as $$
> begin
>   return array(select jsonb_object_keys(mapData)::uuid);
> end;
> $$;
>
> create index on
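A sketch of how such an immutable function can back an expression index; the table and column names here are hypothetical, not from the original message:

```sql
-- hypothetical table; uuid_keys() must be immutable for this to work
create table maps (id serial primary key, map_data jsonb);

-- GIN index over the extracted uuid[] keys
create index maps_uuid_keys_idx on maps using gin (uuid_keys(map_data));

-- the index can then serve array-containment searches, e.g.:
-- select * from maps where uuid_keys(map_data) @> array['...'::uuid];
```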
Hi All,
Is there any other option to restore faster using psql, since I have to
export it as plain text dump.
--format=plain
Only plain SQL format is supported by Cloud SQL.
I cannot use the pg_restore option for a plain-text SQL dump restore.
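For what it's worth, a common way to speed up replaying a plain-text dump is to relax durability for the loading session; a sketch (the memory value is an assumed example, not a recommendation):

```sql
-- session settings prepended before replaying the dump with psql
set synchronous_commit = off;       -- skip waiting for WAL flush per commit
set maintenance_work_mem = '1GB';   -- faster index builds; assumed value
```

Running the dump with `psql --single-transaction -v ON_ERROR_STOP=1 -f dump.sql` also avoids a commit per statement.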
On Mon, May 6, 2019, 6:35 PM Andreas Kretschmer
wrote:
On Mon, May 6, 2019 at 2:49 PM Adam Brusselback
wrote:
> I think the main "gotcha" when I moved from SQL Server to Postgres was I
> didn't even realize the amount of in-line t-sql I would use to just get
> stuff done for ad-hoc analysis. Postgres doesn't have a good way to emulate
> this. DO bloc
Ravi Krishna wrote on 06.05.2019 at 23:56:
I recently had to write an equivalent of UNPIVOT.
UNPIVOT is actually quite easy with Postgres:
https://blog.sql-workbench.eu/post/unpivot-with-postgres/
Thomas
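For reference, one common Postgres idiom for unpivoting is a lateral VALUES list; the table and columns below are a hypothetical example:

```sql
-- hypothetical wide table: one column per quarter
create table sales (customer text, q1 numeric, q2 numeric, q3 numeric);

-- unpivot: one output row per (customer, quarter) pair
select s.customer, v.quarter, v.amount
from sales s
cross join lateral (
    values ('q1', s.q1), ('q2', s.q2), ('q3', s.q3)
) as v(quarter, amount);
```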
> I think the main "gotcha" when I moved from SQL Server to Postgres was
> I didn't even realize the amount of in-line t-sql I would use to just get
> stuff done
> for ad-hoc analysis.
T-SQL is an exceptionally powerful SQL-based language. Add to that the many
functions SS has. I recently had
I think the main "gotcha" when I moved from SQL Server to Postgres was I
didn't even realize the amount of in-line T-SQL I would use to just get
stuff done for ad-hoc analysis. Postgres doesn't have a good way to emulate
this. DO blocks cannot return resultsets, so short of creating a function
and
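A sketch of the function-based workaround mentioned above, using the pg_temp schema so the function vanishes with the session (the function name and body are illustrative only):

```sql
-- a DO block cannot return rows, but a temporary function can;
-- it lives in pg_temp and disappears when the session ends
create function pg_temp.adhoc_report() returns table (n int, sq int)
language plpgsql as $$
begin
    return query select g, g * g from generate_series(1, 3) as g;
end;
$$;

-- pg_temp functions must be called schema-qualified
select * from pg_temp.adhoc_report();
```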
neeraj kumar writes:
> We are using PG 10.6. We have one cron job that queries pg_stat_activity
> table to find out how many queries are running longer than X minutes and
> generate metrics.
> Query look like this :
> SELECT * FROM pg_stat_activity WHERE state='active'
> After some days, this qu
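For context, such a monitoring query typically adds a duration filter on query_start; a sketch, with the X-minute threshold assumed to be 5 here:

```sql
-- long-running active queries, longest first
select pid, now() - query_start as runtime, state, query
from pg_stat_activity
where state = 'active'
  and now() - query_start > interval '5 minutes'  -- assumed threshold
order by runtime desc;
```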
more:
1. No db level backup/restore in PG, at least no easy way.
2. No cross db query.
Hi Igal,
One relevant comment I found interesting a couple of years ago...
A New Zealand Govt agency was installing an institutional GIS system (several
thousand potential users). It supported different back-end spatial databases.
Previous installs of this system for other clients had used MS S
Mitar writes:
> When migrating from MongoDB to PostgreSQL one thing which just
> surprised me now is that I cannot store NaN/Infinity in JSON fields. I
> know that standard JSON restricts those values, but they are a very
> common (and welcome) relaxation. What are prospects of this
> restriction
On 5/6/19 2:47 PM, Igal Sapir wrote:
but I want to instill confidence in them that anything they do with SQL
Server can be done with Postgres.
Right off the top of my head, here are some things you can't (easily and
trivially) do in Postgres:
- Transparent Data Encryption
- Block level full,
Hi!
When migrating from MongoDB to PostgreSQL one thing which just
surprised me now is that I cannot store NaN/Infinity in JSON fields. I
know that standard JSON restricts those values, but they are a very
common (and welcome) relaxation. What are prospects of this
restriction being lifted? It is
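The behaviour can be seen directly: Postgres float types accept NaN, but jsonb rejects it per the JSON spec. A small sketch, including the common store-as-string workaround:

```sql
-- float8 accepts NaN:
select 'NaN'::float8;

-- but jsonb follows strict JSON and rejects it:
-- select '{"x": NaN}'::jsonb;   -- fails: invalid input syntax for type json

-- common workaround: store the value as a string and cast on read
select ('{"x": "NaN"}'::jsonb ->> 'x')::float8;
```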
Ravi,
On Mon, May 6, 2019 at 12:28 PM Ravi Krishna wrote:
> > I was wondering if anyone has any tips that are specific for SQL Server
> users? Best features? Known issues? Common rebuttals?
>
> Are you talking about SS to PG migration?
>
> Generally SQLServer shops use SS specific functions a
> I was wondering if anyone has any tips that are specific for SQL Server
> users? Best features? Known issues? Common rebuttals?
Are you talking about SS to PG migration?
Generally SQLServer shops use SS specific functions and T-SQL heavily since
they provide very good functionality.
For e
Next month I'll be making a presentation about Postgres to a SQL Server
crowd in L.A. at their SQL Saturday event.
I was wondering if anyone has any tips that are specific for SQL Server
users? Best features? Known issues? Common rebuttals?
Thanks,
Igal
On Wed, May 1, 2019 at 3:09 PM Thomas Munro wrote:
> As discussed over on -hackers[1], I think it's worth pursuing that
> though. FWIW I've proposed locale versioning for FreeBSD's libc[2].
> The reason I haven't gone further with that yet even though the code
> change has been accepted in princi
On Sat, May 4, 2019 at 9:32 AM Bernard Quatermass <
toolsm...@quatermass.co.uk> wrote:
> Apologies if this isn’t the right place for this.
>
> I have created a helper daemon “jpigd”, FastCGI JSON Postgresql Gateway
>
> A tool to aid the elimination of CGI scripts on web servers and moving all
> th
On Mon, May 6, 2019 at 6:05 AM Arup Rakshit wrote:
SELECT MAX(id) FROM chinese_price_infos;
  max
--------
 128520
(1 row)

SELECT nextval('chinese_price_infos_id_seq');
 nextval
---------
   71164
(1 row)
Not sure how it is out of sync. How can I fix this permanently? I ran
vacuum analyze verbose;
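The usual fix for a sequence that has fallen behind the table is to realign it with setval; a sketch using the names from the message:

```sql
-- set the sequence to the table's current maximum id
select setval('chinese_price_infos_id_seq',
              (select coalesce(max(id), 1) from chinese_price_infos));
```

Note this cures the symptom; if rows keep being inserted with explicit ids (e.g. a manual load), the sequence will drift again.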
On 06/05/2019 12:10, Arup Rakshit wrote:
Hi,
Thanks for your reply. It is automatic; my app doesn't create IDs, it
delegates that to the DB. I am using a Ruby on Rails app, where we use
PostgreSQL.
Well, I'm only throwing out wild guesses, but another possibility is
that rows were loaded manually i
Hi,
Thanks for your reply. It is automatic; my app doesn't create IDs, it delegates
that to the DB. I am using a Ruby on Rails app, where we use PostgreSQL.
docking_dev=# \d chinese_price_infos;
Table "public.chinese_price_infos"
Column|Type
On 06/05/2019 12:05, Arup Rakshit wrote:
Every time I try to insert I get the error:
docking_dev=# INSERT INTO "chinese_price_infos" ("item_code",
"price_cents", "unit", "description", "company_id", "created_at",
"updated_at") VALUES ('01GS10001', 6000, 'Lift', 'Shore Crane
Rental', '9ae3f8b8-8f
Every time I try to insert I get the error:
docking_dev=# INSERT INTO "chinese_price_infos" ("item_code", "price_cents",
"unit", "description", "company_id", "created_at", "updated_at") VALUES
('01GS10001', 6000, 'Lift', 'Shore Crane Rental',
'9ae3f8b8-8f3f-491c-918a-efd8f5100a5e', '2019-05-06
On 05.05.19 at 19:26, Ron wrote:
On 5/5/19 12:20 PM, Andreas Kretschmer wrote:
On 05.05.19 at 18:47, Sathish Kumar wrote:
Is there a way to speed up the importing process by tweaking
PostgreSQL config like maintenance_work_mem, work_mem, shared_buffers
etc.,
sure, take the dump in cust