I also dislike bulk_save() for the same reasons.
I feel like bulk_update makes the most sense, given it has a signature
similar to bulk_create, where an iterable of model instances must be passed,
and we're really just performing an update.
To me, bulk_update is the natural analogue to update.
Bikeshed time.
I'm also against bulk_save, for the same reason: it implies save().
bulk_update sounds okay to me; update() is indeed already a 'bulk'
operation, but it could be claimed that this is doing a 'bulk' number of
update operations.
bulk_update_fields also sounds good, the longer method na
My original reasoning was that QuerySet.update() already bulk-updates rows,
so the bulk prefix seems a bit redundant there (how do you bulk something
that already operates in bulk?). .save(), however, operates on a single
object, so the bulk prefix seems more appropriate and easier to understand.
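To make the naming contrast concrete, here is a minimal in-memory sketch; no Django is involved, and FakeQuerySet with its dict "instances" is invented purely for illustration. update() applies one change to every matched row, while the proposed bulk_update() takes an iterable of instances, each carrying its own new values, mirroring bulk_create's signature.

```python
class FakeQuerySet:
    """Toy stand-in for a QuerySet; objs is a list of dicts acting as model instances."""

    def __init__(self, objs):
        self.objs = objs

    def update(self, **kwargs):
        # QuerySet.update(): one SET clause applied to all matched rows,
        # i.e. a single UPDATE statement with the same values everywhere.
        for obj in self.objs:
            obj.update(kwargs)
        return len(self.objs)

    def bulk_update(self, objs, fields):
        # Proposed bulk_update(): each passed instance carries its own
        # per-row values for the named fields.
        by_pk = {o["pk"]: o for o in self.objs}
        for incoming in objs:
            for f in fields:
                by_pk[incoming["pk"]][f] = incoming[f]

qs = FakeQuerySet([{"pk": 1, "score": 0}, {"pk": 2, "score": 0}])
qs.update(score=10)                        # every row gets score=10
qs.bulk_update([{"pk": 1, "score": 1},     # per-row values instead
                {"pk": 2, "score": 2}], fields=["score"])
```

The point of the sketch is only the API shape: one kwargs dict for all rows versus an iterable of instances, which is what makes the bulk_create-style name feel natural.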
Hi Curtis,
very good remarks. It would make sense to provide the best possible preset
for such a middleware; that could even be content-type sensitive.
I would, however, not add any settings to override the preset. I believe
that if someone wants to tinker with those things, they should inherit and
override.
Good point, Aymeric,
it will probably not always be a good idea. I would require some kind of
configuration to define which response content-type should be encoded by
which algorithm.
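A content-type-to-algorithm configuration like the one described could look roughly like this sketch. All names here (COMPRESSION_PRESET, compress_response) are hypothetical, not an actual Django setting or API, and only stdlib gzip is shown.

```python
import gzip

# Hypothetical preset mapping response content types to a compression
# algorithm; None means "never compress" (e.g. already-compressed formats).
COMPRESSION_PRESET = {
    "text/html": "gzip",
    "application/json": "gzip",
    "image/png": None,  # PNG is already compressed; recompressing wastes CPU
}

def compress_response(content_type, body, accept_encoding):
    """Return (body, content_encoding) for a response, possibly compressed."""
    algorithm = COMPRESSION_PRESET.get(content_type)
    # Only compress when the preset names an algorithm and the client
    # advertised it in Accept-Encoding.
    if algorithm == "gzip" and "gzip" in accept_encoding:
        return gzip.compress(body), "gzip"
    return body, None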
It might also be interesting if you could use pre-compressed full-page
caching, which I think currently does not
Hi Adam,
no, I would not take any of his code. I would certainly contact him as a
reviewer or co-author of a possible patch.
In general I would like to consolidate the two middlewares into a single
response compression middleware that supports multiple algorithms. All
except gzip would require u
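A single middleware supporting several algorithms would need to negotiate against the client's Accept-Encoding header. Here is a rough sketch of that selection step; the function name and the server-side preference order are my own, and real header parsing per RFC 7231 is more involved than this.

```python
def choose_encoding(accept_encoding, supported=("br", "gzip")):
    """Pick the first server-preferred algorithm the client allows.

    q=0 means the client explicitly refuses that encoding; '*' matches
    any encoding not listed by name.
    """
    offered = {}
    for token in accept_encoding.split(","):
        parts = token.strip().split(";")
        name = parts[0].strip().lower()
        q = 1.0
        for p in parts[1:]:
            if p.strip().startswith("q="):
                q = float(p.strip()[2:])
        offered[name] = q
    for algo in supported:
        if offered.get(algo, offered.get("*", 0)) > 0:
            return algo
    return None
```

With server preference ("br", "gzip"), a client sending "gzip, br;q=0.8" would still get brotli, since the server ranks it first and the client allows it; a client sending only "identity" would get no compression at all.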