Re: existence of a savepoint?

2018-05-29 Thread Brian Dunavant
On Sun, May 27, 2018 at 6:04 PM, Stuart McGraw  wrote:

> Is there some way to to test if a savepoint of a given name
> exists?  Or better yet, the number of stacked savepoints of
> that name?
>
>
This is kind of backwards, but you can determine whether a savepoint with a
given name exists by trying to release it.  You can do so without damaging
the transaction by creating a throwaway savepoint immediately beforehand,
so you can roll back to it if the RELEASE fails, but a successful RELEASE
destroys the named savepoint's state.  Examples below.

=# begin;
BEGIN
=# insert into brian.test values (1);
INSERT 0 1
=# savepoint target_point;
SAVEPOINT
=# insert into brian.test values (2);
INSERT 0 1
=# savepoint buffer;
SAVEPOINT
=# release target_point;
RELEASE <-- savepoint existed
=# commit;
COMMIT


=# begin;
BEGIN
=# insert into brian.test values (3);
INSERT 0 1
=# savepoint buffer;
SAVEPOINT
=# release target_point;
ERROR:  no such savepoint   <-- savepoint did not exist
=# rollback to buffer;
ROLLBACK
=# select * from brian.test;
 x
---
 3
(1 row)

In theory you could create two savepoints, "target_point" and
"target_point_test", and release the _test version to check for the real
one.  That preserves your transaction state, though it only lets you test
once, since the RELEASE consumes the test savepoint.

=# begin;
BEGIN
=# insert into brian.test values (1);
INSERT 0 1
=# savepoint target_point;
SAVEPOINT
=# savepoint target_point_test;
SAVEPOINT
=# insert into brian.test values (2);
INSERT 0 1
=# savepoint buffer;
SAVEPOINT
=# release target_point_test; <-- check if target_point exists
RELEASE
=# rollback to target_point;  <-- roll back to it
ROLLBACK



Cheers,

-Brian Dunavant


Re: sql questions

2018-07-20 Thread Brian Dunavant
On Fri, Jul 20, 2018 at 4:27 AM  wrote:

>
> b) can a sql function return the count of affected rows of some query?
> create function merge_names(int, int) returns void as
> $_$
> update namelinks set nid = $2 where nid = $1;
> -- want the affected rows of the above query
> delete from names where nid = $1
> -- return result here
> $_$
> language sql;
>
>
Yes.  You can do this in pure SQL by using CTEs like the following
example.

with myupdate as (
   update test set a = 4 where a = 1
   returning a
),
mydelete as (
   delete from testnames where nid = 1
)
select count(1) from myupdate;

You can then just wrap a function around this.  Full test case below.

-- Create test tables
create table test ( a integer );
insert into test values (1),(1),(3);
create table testnames ( nid integer );
insert into testnames values (1);

-- Update, delete, and return the number of updates in a single statement
create function test_names(integer, integer) returns bigint as
$_$
 with myupdate as (
   update test set a = $2 where a = $1
   returning a
 ),
 mydelete as (
   delete from testnames where nid = $1
 )
 select count(1) from myupdate
$_$
language sql;

-- Run it
=# select test_names(1,4);
 test_names
------------
          2
(1 row)

-- Verify results
=# select * from test;
 a
---
 3
 4
 4
(3 rows)

=# select * from testnames;
 nid
-----
(0 rows)


Re: Making "invisible" characters visible ? (psql)

2019-11-20 Thread Brian Dunavant
On Wed, Nov 20, 2019 at 9:16 AM stan  wrote:

>
> How can i make these "invisible" characters visible?
>
>
>
In psql, nulls are displayed as nothing by default.  You can specify what
they should display as with, e.g.:

\pset null '(null)'

Chances are those are all nulls, and they will now display as whatever you
set them to.  If they aren't, let us know.
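
For instance, a quick sanity check (output formatting approximate; the
'(null)' marker is arbitrary):

=# select null::text as col;
 col
-----

(1 row)

=# \pset null '(null)'
Null display is "(null)".
=# select null::text as col;
  col
--------
 (null)
(1 row)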


Re: performance of loading CSV data with COPY is 50 times faster than Perl::DBI

2020-01-31 Thread Brian Dunavant
You can use COPY over DBI.

https://metacpan.org/pod/DBD::Pg#COPY-support

On Fri, Jan 31, 2020 at 2:03 PM Pavel Stehule 
wrote:

>
>
> pá 31. 1. 2020 v 19:25 odesílatel Matthias Apitz 
> napsal:
>
>>
>> Hello,
>>
>> Since ages, we transfer data between different DBS (Informix, Sybase,
>> Oracle, and now PostgreSQL) with our own written tool, based on
>> Perl::DBI which produces a CSV like export in a common way, i.e. an
>> export of Oracle can be loaded into Sybase and vice versa. Export and
>> Import is done row by row, for some tables millions of rows.
>>
>> We produced a special version of the tool to export the rows into a
>> format which understands the PostgreSQL's COPY command and got to know
>> that the import into PostgreSQL of the same data with COPY is 50 times
>> faster than with Perl::DBI, 2.5 minutes ./. 140 minutes for around 6
>> million rows into an empty table without indexes.
>>
>> How can COPY do this so fast?
>>
>
> There are probably several reasons:
>
> 1. The DBI implementation is probably not very efficient (for Postgres),
> and COPY gains some speed from not being fault tolerant.
> 2. Postgres has no implicit plan cache, so every INSERT is planned again
> and again.
> 3. COPY bypasses the planner and executor, and it has very efficient
> network communication.
> 4. With COPY you can be sure that autocommit is disabled.
>
> Regards
>
> Pavel
>
>
>
>>
>> matthias
>>
>> --
>> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
>> +49-176-38902045
>> Public GnuPG key: http://www.unixarea.de/key.pub
>>
>>
>>


Re: Listen/Notify feedback

2020-07-11 Thread Brian Dunavant
One aspect is that if no one is listening when a notify happens, the
message is lost (i.e. no durability).  If this is important to you, it can
be addressed by also writing the messages to a table when you NOTIFY, with
the listener deleting messages after they are processed.  On connection the
listener can query the table to catch up on any missed messages, or
messages that were mid-process during a crash.  This is trickier with more
than one listener.  This isn't a whole lot more efficient than just using
the table alone, but it saves you from having to poll, so you get better
response times.
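
A minimal sketch of the table-backed approach (the "events" table and
"my_channel" names here are hypothetical; FOR UPDATE SKIP LOCKED, which
makes the drain safe with multiple listeners, requires 9.5+):

create table events ( id bigserial primary key, payload text not null );

-- Producer: persist the message, then notify, in the same transaction.
begin;
insert into events (payload) values ('hello');
notify my_channel, 'hello';
commit;

-- Listener, on connect and on each notification: drain anything pending,
-- skipping rows another listener has already locked.
delete from events
 where id in ( select id from events
                order by id
                  for update skip locked )
returning payload;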

On Sat, Jul 11, 2020 at 8:58 AM Rita  wrote:

> I am investigating various pub/sub tools such as ActiveMQ, Rabbit, Redis,
> etc.I came across Postgresql Listen/Notify and was easily able to write
> code to listen to messages. For the people who have been using this for a
> while: what are its downsides, things to consider when writing good code
> that use pub/sub, how do you deal with large messages, can I have
> subscribers listen to replica nodes?
>
> Thanks
> --
> --- Get your facts first, then you can distort them as you please.--
>


Re: Inserting many rows using "with"

2020-09-11 Thread Brian Dunavant
On Fri, Sep 11, 2020 at 1:31 PM Harmen  wrote:

> Hi all,
>
> Are there any known problems with this strategy? Are they any other
> methods of
> inserting lots of records in a nicer way?
>

I do this all the time with insert and it's wonderful.  It can get tricky
if you need to do UPDATEs.

You can avoid the unnest and just use value rows directly if you want
(which can be easier to read):

with ids(id) as (
  values (1),
         (2),
         (3)
)
select id from ids;

Further, you can use this technique with multiple CTEs to insert into
completely different tables all in the same query, saving lots of round
trips.
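
For example, a sketch using hypothetical "orders" and "order_items" tables,
where the second insert reuses the id generated by the first:

with new_order as (
    insert into orders (customer_id)
    values (42)
    returning id
)
insert into order_items (order_id, sku)
select id, 'ABC-1'
  from new_order;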


> Alternatives I know of, and the only options I could find documented:
> - create a huge custom insert statement
> - use COPY, but that doesn't work with 'ON CONFLICT' as far as I can see
>

COPY is great for large amounts of data.  If it's a huge amount of data,
you can load it into a temp table with COPY, and then use your ON CONFLICT
insert to implement your logic.
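
Something along these lines (a sketch with a hypothetical items(id, val)
table that has a unique index on id, which ON CONFLICT needs; the staging
table just mirrors it):

create temp table items_stage ( like items );

\copy items_stage from 'items.csv' with (format csv)

insert into items
select * from items_stage
    on conflict (id) do update
   set val = excluded.val;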


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Brian Dunavant
On Thu, Apr 1, 2021 at 10:49 AM Glen Huang  wrote:

> If I decide to replace all my transaction code with CTE, will I shoot
> myself in the foot down the road?
>

I do this all the time and it makes code way cleaner.  It's very
straightforward with insert queries.  When you deal with updates/deletes,
things can be trickier.  I usually leave those in a transaction if there is
any concern.

They can also be hard for future programmers who may not understand SQL.
Make sure you comment your queries for long-term maintainability.

I have yet to regret replacing a transaction with a CTE over the past
decade.  YMMV


Re: Is replacing transactions with CTE a good idea?

2021-04-01 Thread Brian Dunavant
On Thu, Apr 1, 2021 at 11:06 AM Glen Huang  wrote:

> Care to expand why they are tricker? I presume they run the risk of being
> referenced more than once?
>

There are lots of gotchas. It's also been a few years since I dug deep into
this, so some of this may have changed in more recent versions.

* Changes made in a CTE aren't visible to later CTEs, since they haven't
happened yet.  Oftentimes people update a table and then do further work,
and hit situations they weren't expecting.

db=> create table foo ( a integer primary key );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey"
for table "foo"
CREATE TABLE
db=> insert into foo values ( 1 );
INSERT 0 1
db=> with upt as ( update foo set a = 2 ) insert into foo values (1);
ERROR:  duplicate key value violates unique constraint "foo_pkey"
DETAIL:  Key (a)=(1) already exists.

* Unless you reference one CTE from another to force ordering, CTEs can run
in any order, which can make things happen out of the order people
expected.
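
For example, consuming one CTE's RETURNING rows from the next creates a
dependency that guarantees the delete runs first (hypothetical
"queue_old"/"queue_new" tables):

with moved as (
    delete from queue_old
    returning id, payload
)
insert into queue_new (id, payload)
select id, payload
  from moved;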

* Just like you can cause deadlocks between threads in a transaction, you
can do the same thing by shoving all those statements into a single CTE
query.


Re: The tragedy of SQL

2021-09-14 Thread Brian Dunavant
On Tue, Sep 14, 2021 at 1:54 PM Raymond Brinzer 
wrote:

>
> So, the affection I have for SQL is due to it being a gateway to a
> great idea; my frustration is that it's a bottleneck in getting to
> that same idea.
>
>
I have the opposite perspective.  As a dev/manager, SQL is much more
powerful at getting data storage from abstract concept into a usable
structure than any programming language I've had the (mis)fortune of
using.  I've long since lost count of the massive volume of other people's
code (especially ORMs) I've removed and replaced by updating SQL statements
to do all the logic and return exactly what I want.  And typically this
also comes with a (sometimes massive) performance gain.

I've managed many a programmer who would complain that SQL is too hard and
they don't want to learn it, but who had no problem spending days learning
the ORM of the month that "saves them time" and writing complex,
inscrutable monstrosities with it.

Could SQL be better?  Absolutely.  But in terms of bang for my buck, I feel
learning SQL has saved me more clock-time and improved my
productivity/value more than any other individual language in my career.


Re: "two time periods with only an endpoint in common do not overlap" ???

2021-10-15 Thread Brian Dunavant
Think of it this way: when someone says they have a meeting from 1-2 and
another from 2-3, do those meetings overlap?  They do not, because we're
really saying the first meeting runs from 1:00 up to, but not including,
2:00.  Postgres treats these time periods the same way: the starting point
is inclusive, but the ending point is exclusive.  So [1:00,2:00) and
[2:00,3:00) do not overlap.
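
You can see it directly with timestamps (alias added for clarity):

=# select (timestamp '2021-10-15 13:00', timestamp '2021-10-15 14:00') overlaps
          (timestamp '2021-10-15 14:00', timestamp '2021-10-15 15:00') as result;
 result
--------
 f
(1 row)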

On Fri, Oct 15, 2021 at 2:16 PM Ron  wrote:

> On 10/15/21 8:59 AM, Adrian Klaver wrote:
> > On 10/15/21 06:52, Ron wrote:
> >> On 10/14/21 7:02 PM, Adrian Klaver wrote:
> >> [snip]
> >>> or the third example in the docs:
> >>>
> >>> SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
> >>>(DATE '2001-10-30', DATE '2002-10-30');
> >>> Result: true
> >>> SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
> >>>(DATE '2001-10-30', DATE '2002-10-30');
> >>> Result: false
> >>> SELECT (DATE '2001-10-29', DATE '2001-10-30') OVERLAPS
> >>>(DATE '2001-10-30', DATE '2001-10-31');
> >>> Result: false
> >>
> >> Why /don't/ they overlap, given that they share a common date?
> >
> > Per the docs:
> >
> > https://www.postgresql.org/docs/current/functions-datetime.html
> >
> > " Each time period is considered to represent the half-open interval
> start
> > <= time < end, unless start and end are equal in which case it
> represents
> > that single time instant."
> >
> > Which I read as
> >
> > (DATE '2001-10-29', DATE '2001-10-30') ends at '2001-10-29'
> >
> > and
> >
> > (DATE '2001-10-30', DATE '2001-10-31') starts at DATE '2001-10-30'
> >
> > so no overlap.
>
> I was afraid you were going to say that.  It's completely bizarre, but
> seems
> to be a "thing" in computer science.
>
> --
> Angular momentum makes the world go 'round.
>
>
>