>I've been looking at the code for normal object deletion in an attempt
>to get the same behaviour for bulk delete. It seems like there is a
>lot of logic dedicated to maintaining referential integrity that the
>database could be doing (and would probably do more efficiently).

It's not just the referential integrity - even though that's
complicated enough, with ISPs not always installing the newest versions
of databases - but the .delete() overloading, too. If your class
overloads .delete() to do specific stuff, that code will not be called
when you do bulk deletes.
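To illustrate the clash, here's a minimal sketch in plain Python (not
actual Django code - the Entry/Manager names are made up): the
per-object .delete() hook runs for single deletes, but a bulk delete
issues one SQL statement and never touches the Python-level method.

```python
class Entry:
    """Hypothetical model with an overloaded delete()."""
    log = []  # records which per-object hooks actually ran

    def __init__(self, name):
        self.name = name

    def delete(self):
        # Custom cleanup code: only runs for single-object deletes.
        Entry.log.append("cleaned up %s" % self.name)


class Manager:
    """Stand-in for a model manager."""
    def __init__(self, objects):
        self.objects = list(objects)

    def delete_one(self, obj):
        obj.delete()            # the overloaded hook fires
        self.objects.remove(obj)

    def bulk_delete(self, predicate):
        # In real life this is a single SQL DELETE - so no
        # Python-level .delete() hook ever fires.
        self.objects = [o for o in self.objects if not predicate(o)]


m = Manager([Entry("foo1"), Entry("bar"), Entry("foo2")])
m.delete_one(m.objects[1])                 # hook runs for "bar"
m.bulk_delete(lambda o: "foo" in o.name)   # hooks silently skipped
print(Entry.log)                           # -> ['cleaned up bar']
```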

Actually I am not really sure we can make bulk deletes fully symmetric
(in the sense that they do exactly the same things as single object
deletes).

BTW: the bulk delete stuff can be made more complete database-wise by
using the delete sql clause to construct deletes against related tables
(and doing updates against related tables).

If you have a master and a slave table and have a bulk delete like:

Master.objects.delete(name__contains='foo')

this will become the primary delete statement:

delete from master where name like '%foo%'

and this would be turned into the following statement for the related
table:

delete from slave where slave.master_id in (select id from master where
name like '%foo%')

Same with updates (turning the master/slave relation around in that we
now don't delete related objects, but do a set null):

update slave set master_id = NULL where master_id in (select id from
master where name like '%foo%')

Something along these lines (first collecting all related tables, then
building the needed update and delete statements and last doing the
actual table data delete) might be a way to at least get bulk-delete up
to par with multiple object deletes, while not losing too much
performance. It's still far from perfect, as you actually run the inner
query for the master table multiple times, though. And we still have
the clash in semantics with regard to the overloaded .delete() method.
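A rough sketch of that statement-building order, in Python (this is
just an illustration, not Django code - the function name and the
(child_table, fk_column) pair convention are my own): collect the
related tables, emit the dependent DELETE and UPDATE statements using
the primary query as an IN-subselect, and run the primary delete last.

```python
def build_bulk_delete_sql(table, where, cascades=(), set_nulls=()):
    """Return the SQL statements for a bulk delete, in execution order.

    cascades and set_nulls are sequences of (child_table, fk_column)
    pairs: rows in cascades get deleted, rows in set_nulls get their
    foreign key nulled out instead.
    """
    # The primary query, reused as a subselect for every related table.
    inner = "select id from %s where %s" % (table, where)
    stmts = []
    for child, fk in cascades:
        stmts.append("delete from %s where %s in (%s)"
                     % (child, fk, inner))
    for child, fk in set_nulls:
        stmts.append("update %s set %s = NULL where %s in (%s)"
                     % (child, fk, fk, inner))
    # The primary delete comes last, after related rows are handled.
    stmts.append("delete from %s where %s" % (table, where))
    return stmts


for sql in build_bulk_delete_sql("master", "name like '%foo%'",
                                 cascades=[("slave", "master_id")]):
    print(sql)
```

Note how the inner select gets repeated once per related table - which
is exactly the performance wart mentioned above.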

bye, Georg
