On 03/26/2012 06:34 AM, Byron Ruth wrote:
> Sure, I guess you _could_ take a generic signals approach to handling all modify operations, but for each operation context is important. The state of the object(s) matters when performing operations. The `pre_save` and `post_save` signals, for example, have a `created` flag which allows for filtering out the new objects versus existing ones. Likewise, having a distinct pair of delete signals ensures you don't make the mistake of performing operations on objects that are now deleted.
The pre/post_modify signal would have the context available (parameters action='save/delete/bulk_create/update' and action_args). It would essentially be all the current signals compressed into one. Whether that is a good design is a good question. I think it would be useful, but I would not be surprised if other core developers have a different opinion of such a signal.
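To make the idea concrete, here is a minimal pure-Python sketch of how such a combined signal might dispatch. Note that `pre_modify`/`post_modify` and the `action`/`action_args` parameters are the hypothetical names from this proposal, not existing Django API, and the tiny `Signal` class is only a stand-in for Django's real dispatcher:

```python
# Minimal stand-in for Django's signal machinery, just to illustrate
# the proposed API shape; not actual Django code.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Call every receiver with the sender and the action context.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]

# Hypothetical generic signals replacing the pre/post_save,
# pre/post_delete, update and bulk_create variants.
pre_modify = Signal()
post_modify = Signal()

seen = []

def log_action(sender, action, action_args, **kwargs):
    # One receiver sees every data-modifying operation on the model.
    seen.append((sender, action))

post_modify.connect(log_action)

post_modify.send(sender="Book", action="save", action_args={"created": True})
post_modify.send(sender="Book", action="update", action_args={"rows": 3})
post_modify.send(sender="Book", action="delete", action_args={})
```

The `action_args` dict would carry the per-operation context (such as the `created` flag for saves) that the distinct signals provide today.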
The reason the generic signal would be useful is that more often than not you are interested in _all_ the data-modifying operations done to a model, not just the save, delete, or update subset. Hence, one mount point for all the operations. What is done in the signal handler could be very different for the .update() case than for the .save() case, but on the other hand there are cases where the generic handler would reduce the amount of work. For search engine reindexing (Haystack, for example), you would just do reindex(modified_objects) for all cases except delete. The same goes for cache invalidation. So, such a signal could simplify some common operations.
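A reindexing receiver along those lines only needs to branch on the delete case. A sketch, where the receiver signature and the `action` values are assumptions following the hypothetical proposal above:

```python
# Sketch of a generic receiver for search index maintenance; the
# signal it would be connected to does not exist, so it is called
# directly here for illustration.
reindexed, removed = [], []

def update_search_index(sender, action, instances, **kwargs):
    if action == "delete":
        # Deleted objects must leave the index, not be reindexed.
        removed.extend(instances)
    else:
        # save, update and bulk_create all just need a reindex.
        reindexed.extend(instances)

update_search_index(sender="Book", action="save", instances=["book1"])
update_search_index(sender="Book", action="bulk_create",
                    instances=["book2", "book3"])
update_search_index(sender="Book", action="delete", instances=["book1"])
```

With today's API the same logic has to be spread over post_save and post_delete receivers, and .update() and .bulk_create() are not covered at all.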
Having signals for all data-modifying operations is in my opinion important. It is of course possible to achieve this without a generic signal by just adding pre/post_update and pre/post_bulk_create signals. The bulk_create signal is problematic because you do not have the auto-generated primary key values available even in post_bulk_create. For PostgreSQL and Oracle, having the PKs would be possible using the "RETURNING id" syntax. SQLite could be hacked to support returning the IDs (there are no concurrent transactions, hence you know what the IDs will be, or at least I think this is the case). But for MySQL I do not know of any way of returning the IDs. Except for locking the table for the duration of the insert...
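The SQLite hack would look roughly like this: with a single writer, rowids are assigned sequentially, so the IDs of a whole batch can be reconstructed from the last inserted rowid. A stdlib-only sketch (assuming a default integer primary key and no other writer touching the table mid-batch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")

rows = [("A",), ("B",), ("C",)]
cur = conn.cursor()
cur.executemany("INSERT INTO book (title) VALUES (?)", rows)

# last_insert_rowid() gives the rowid of the final insert on this
# connection; with sequential assignment the batch IDs follow from it.
last_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
ids = list(range(last_id - len(rows) + 1, last_id + 1))
```

This is exactly the kind of fragile backend-specific trick a post_bulk_create signal would have to rely on, which is why the PK problem makes that signal awkward.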
- Anssi

--
You received this message because you are subscribed to the Google Groups "Django developers" group.
To post to this group, send email to django-developers@googlegroups.com.
To unsubscribe from this group, send email to django-developers+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/django-developers?hl=en.