Re: Problem with randrange in django/middleware/csrf.py
In the end it was a misconfiguration of my system and not a problem with Python or Django. Somehow the special files /dev/random and /dev/urandom got screwed up; I suppose it was the result of a bad update of the udev package on my Arch Linux system. When I recreated the /dev/urandom device node manually, random.SystemRandom().randrange() returned all the numbers instantly.

Thank you for pointing me in the right direction.

Daniel
Re: LOGIN_URL, LOGOUT_URL, LOGIN_REDIRECT_URL and SCRIPT_NAME.
On Thu, Jul 8, 2010 at 3:40 PM, Russell Keith-Magee wrote:
> Personally, I see this as a case of explicit vs implicit.
>
> As currently defined, LOGIN_URL points to the login URL. Period.
>
> Under the proposed patch, the onus is on every possible script to
> ensure that the script prefix has been set correctly. WSGI will do
> this by default, but WSGI scripts aren't the only consumers of Django
> code. Personally, I spend almost as much time on background processing
> scripts for sites I support as I do on pages served via HTTP.
>
> So - is it more confusing to require that a settings file explicitly
> define the full URL, or to expect every script to configure itself to
> populate the magic SCRIPT_NAME variable? Jacob's position (and I find
> myself agreeing with this position) is that it's less confusing to
> require the settings file to be explicit.

(Apologies if I've trimmed some context, I hate untrimmed replies)

I can understand this position, but this could also be considered a case of 'implicit/explicit' vs DRY. I have one way of specifying URLs: they go into my urlpatterns. Having to repeat the same URLs in my settings.py is tedious, and requires me to change multiple places if I modify the urlpatterns - the canonical definition of the URL. My login/logout urlpatterns are also named 'login' and 'logout'.

I'm also glad you brought up background processing. One of the common things we do in the background is to process new data and generate emails for our users. These emails should link directly to the website, and to do this requires three additional bits of information that we can't get from the urlconf: the protocol to access via, the host name and port, and the local path on the host. These are deployment specific and (effectively) mean that all scripts must populate some magic variables regardless. Again, it would be nice if I didn't have to repeat the local path in four places - LOGIN_URL, LOGOUT_URL, LOGIN_REDIRECT_URL, HTTP_LOCAL_PATH.

The LOCAL_PATH is taken care of by $MAGIC (get_script_prefix()) when using wsgi/fcgi - so reverse('login') returns a different value when used within the context of a request than it does from a background script.

Ideally, we can solve both these issues in several backward-compatible steps.

Define new settings:
HTTP_HOST
HTTP_PROTOCOL
HTTP_LOCAL_PATH
These define the local part of the deployment location - they could be combined into one parameter.
AUTH_URLS_USE_NAMED_PATTERNS=False

Based upon the setting of AUTH_URLS_USE_NAMED_PATTERNS, the various places that use the LOG*_URL settings would instead look for named views called 'login', 'logout' and 'post_login'.

The reverse function would be changed to create a complete absolute URI, regardless of the context it is called from. The logic here would be something along the lines of prepending get_script_prefix() if set, or HTTP_LOCAL_PATH otherwise. The reverse function would also gain an additional parameter, specifying whether to generate a fully qualified URL, defaulting to the old behaviour. This could then be used in situations where we know we want a fully qualified URL, e.g. generating links to be emailed.

In release+1, we can make setting LOG*_URL raise a PendingDeprecationWarning, and turn AUTH_URLS_USE_NAMED_PATTERNS to True by default.
In release+2, we drop support for LOG*_URL.

If you're interested in seeing what a solution like this would look like, I could make a patch.
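A minimal sketch of what the LOG*_URL lookup might become under the proposed setting (both AUTH_URLS_USE_NAMED_PATTERNS and the helper below are hypothetical, not existing Django API):

    from django.conf import settings
    from django.core.urlresolvers import reverse

    def get_login_url():
        # Hypothetical: prefer a urlpattern named 'login' over the setting,
        # e.g. url(r'^accounts/login/$', login_view, name='login')
        if getattr(settings, 'AUTH_URLS_USE_NAMED_PATTERNS', False):
            return reverse('login')
        return settings.LOGIN_URL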
Cheers

Tom
Comet support in Django?
Hi guys. I failed to find any serious documentation on how to do Comet (a.k.a. HTTP push) with Django. Obviously Django does not (and will not?) support this out of the box due to the way Django is deployed, and naturally you will need to use an external server (Orbited and Twisted come to mind).

But what is the easiest way to do this? Shouldn't Django supply some wrapper for Comet functionality?

- Yuval
Re: Proposal: {% include_partial %} template tag
Something related that we could really use is passing not just variables to the include, but also blocks. I tried to implement a template tag for this, but it doesn't work together with how Django replaces blocks in the extended template at compile time instead of during rendering.

I would like to do:

## in the main template:
...
{% include "decorator.html" %}
{% block "content" %}
...
{% endblock %}
{% endinclude %}
...

## In decorator.html
{% block content %}{% endblock %}

So, the main template includes decorator.html, but replaces the inner block "content" with the block it passes to the include. The decorator pattern wraps the input in some nodes. There are use cases where this is required to keep the templates DRY. I think only a few templating languages are able to do this. The .NET framework supports the design pattern pretty well, as far as I remember.

The alternative in this particular example is to use two include tags, "before.html" and "after.html", but this is ugly because the opening and closing html tags are separated over different files.

-- Jonathan

On 8 juin, 19:47, Gregor Müllegger wrote: > Also +1 from me for extending the include tag instead of having a new one. > > By default it should keep its behaviour and use the current context > for the included template. Marco's use of a new, clean context > (demonstrated with the snippet below) is also possible to support. > > {% if label %} > {{ label }} > {% else %} > > You can just pass in an empty string, like one of the following three > examples: > > {% include "part.html" with label= title=obj.title %} > {% include "part.html" with label="" title=obj.title %} > {% include "part.html" with "" as label and obj.title as title %} > > (I don't want to propose the implementation of all three syntaxes. I > just want to demonstrate that all possible syntaxes can handle Marco's > usecase.) > > -- > Servus, > Gregor Müllegger > > 2010/6/8 burc...@gmail.com : > > > > > I'd suggest to change both include and with/blocktrans syntax into > > more programmer-friendly style: > > > {% include "part.html" title=obj.title|capfirst main_class="large" %} > > > This is both more dense, and from quick grasp you can see where are > > the delimiters ("as" is not so good for this). > > > Also I think we need an argument to tell that outer context is passed > > inside. > > > On Tue, Jun 8, 2010 at 11:30 PM, Gonzalo Saavedra > > wrote: > >> I'm +1 on the optional "with" parameter for {% include %}. -1 on > >> adding a new tag for this. > > >> I also use {% with %}{% include %} a lot in templates but we should > >> follow with/blocktrans syntax for consistency: > > >> {% include "part.html" with obj.title|capfirst as title and "large" > >> as main_class %} > > >> A related proposal for the "with" tag: It'd be nice to support more > >> than one variable definition (as blocktrans does): > > >> {% with "a" as var1 and "b" as var2 %}...{% endwith %} > > >> The current solution is nesting "with" tags, which is not very pretty. > > >> gonz. > > >> 2010/6/8 Marco Louro : > >>> Gabriel, > > >>> I only made that decision because I didn't see the need to have whole > >>> context, and the only time I have needed it was because of the {% > >>> csrf_token %}. This is just my use-case, but I understand that other > >>> people might want to use it differently. I don't think it makes much > >>> of a difference, a clean context may avoid some collisions from time > >>> to time, but it may have bigger drawbacks for other people.
> > >>> Hi Jeliuc, > > >>> No, I don't. > > >>> On Jun 7, 7:59 pm, Gabriel Hurley wrote: > Extending the include tag seems like a fantastic idea! I end up > writing the {% with %}{% include %} combo all the time for my reusable > template snippets. > > However, I feel like selectively clearing the context inside a > template tag is asking for trouble and/or confusion. It also sounds > like it goes against Django's "templates require no knowledge of > programming" principle. While I can see how you might run into context > name collisions in a *very* large or complicated project, the right > solution there seems like it ought to be to clean up your context and/ > or templates outside of the template itself... Even in projects with > dozens of installed apps (both my own and third-party ones mixed > together) I've never had that problem where two minutes of tweaking > couldn't fix it for good. > > I'm certainly not saying you don't have a use case for it, or that it > wouldn't be extremely helpful to you. Just that having a tag that > clears the context sounds fishy to me... > > All the best, > > - Gabriel > > On Jun 7, 10:52 am, Marco Louro wrote: > > > I'd prefer extending the {% include %} tag actually, but didn't think of > > that in the first place. > >> [...]
Re: MySQL index hints
On Thu, 2010-07-08 at 15:58 -0500, Alex Gaynor wrote: > On Thu, Jul 8, 2010 at 3:51 PM, Simon Riggs wrote: > > On Mon, 2010-07-05 at 00:59 -0700, Simon Litchfield wrote: > >> > If you can come up with answers to these points, I might get > >> > interested. 1 and 2 are fairly trivial; I can think of some obvious > >> > answers for 3, but 4 is the big problem, and will require some > >> serious > >> > research and consideration. > >> > >> Well, I'm glad you like the with_hints() approach. Items 1-3 are easy. > >> Re 4 though, every db does it differently. In practice, most don't > >> need hints like MySQL does, because their query optimisers do a > >> much better job. > > > > The big problem I see is that hints are simply the wrong approach to > > handling this issue, which I do see as an important one. > > > > The SQL optimizer can't work out how you're going to handle the queryset > > if all you mention is the filter(). SQL being a set-based language the > > optimizer won't know whether you wish to retrieve 0, 1 or 1 million > > rows. In many cases it actively avoids using the index for what it > > thinks will be larger retrievals. > > > > That's categorically untrue. One of the major functions of an > optimizer is to try to figure out the approximate result size so it > can better establish index vs. data cost.

Perhaps I need to explain some more, since what I've said is correct. The optimizer does work out the number of rows it thinks it will access; whether you retrieve all of those rows is a different and important issue.

For example, suppose we have a 1 million row table called X with a column called TwoValues. In TwoValues there are 2 values, value=1 and value=2. There are 999,999 rows with value=1. An index is built on TwoValues. If I then issue this query:

SELECT * FROM X WHERE TwoValues = 1;

then the optimizer will deduce that I will access 999,999 rows out of a million and that its best strategy is to avoid using an index. If I issue the same query for value 2 then it will retrieve 1 row and hence use the index.

In most cases the application won't want to bring back all 999,999 rows, though the optimizer doesn't know that unless we tell it. If we assume that we actually only want 10 rows then the situation is open for change to this form of SQL:

SELECT * FROM X WHERE TwoValues = 1 LIMIT 10;

which will use the index, demonstrating that the optimizer is designed to offer an appropriate plan when presented with the full information. Slicing provides the full information for the use case, and a hint should not be required to allow index use. (The situation is more complex in the case of parameterised prepared statements and in the case of stale statistics, though neither case is important here.)

-- Simon Riggs www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Training and Services
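The same point in ORM terms: a sliced queryset already carries the LIMIT information, so no hint is needed (a minimal illustration; the model and field names mirror the hypothetical table in the example above):

    # Slicing a queryset makes Django emit a LIMIT clause, giving the
    # optimizer the row-count information discussed above.
    rows = X.objects.filter(two_values=1)[:10]
    # generates roughly: SELECT ... FROM x WHERE two_values = 1 LIMIT 10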
natural keys and dumpdata
Hi,

I am trying to use the 'natural keys' feature of Django to make a sort of "future proof" fixture loading possible. By "future proof" I mean that I want a site administrator to be able to add new objects to database tables where I will provide initial data, but I also want to be able to add new data at a later date without overwriting the data they added in between.

This is rather impossible without natural keys, for I cannot know the maximum ID of any primary key of tables that they have added data to. However, with natural keys I can do away with recording the numerical ID of an object, so this should be possible.

While implementing this it turned out that ./manage.py dumpdata --natural is almost exactly what I want, except for the fact that it still outputs the primary key for my objects. I see no reason for it to do this, since I really do not care about the exact primary keys anymore. With the patch linked below[1] I have successfully used dumpdata and loaddata for a .json export of my tables.

Of course I would like to see something like this accepted, but this is of course a sort of "feature request". And maybe I'm overlooking a use case where people still need to predetermine their auto-generated primary keys, even while dumping using --natural, etc. So is this way off? Useful? Please let me know :)

Regards,

--Stijn

[1]: I can't seem to attach stuff using the Google Groups web interface, so find it for a limited time here: http://sandcat.nl/~stijn/tmp/django-naturalkeys-nopk.diff
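For reference, the natural-key hooks that dumpdata --natural relies on look roughly like this (the model and manager here are purely illustrative, not from Stijn's project):

    from django.db import models

    class CountryManager(models.Manager):
        def get_by_natural_key(self, code):
            # loaddata uses this to find an existing row without knowing its pk
            return self.get(code=code)

    class Country(models.Model):
        code = models.CharField(max_length=2, unique=True)
        name = models.CharField(max_length=100)

        objects = CountryManager()

        def natural_key(self):
            # dumpdata --natural serializes this tuple instead of raw pk references
            return (self.code,)

With that in place, ./manage.py dumpdata --natural myapp.Country emits natural keys for references to Country; the patch above goes one step further and also drops the pk of the serialized objects themselves.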
Re: Comet support in Django?
On 9 July 2010 13:25, Yuval Adam wrote:
> But what is the easiest way to do this? Shouldn't Django supply some
> wrapper for comet functionality?

I imagine this is really out of the scope of Django, but I might be missing something. I'm no expert and haven't done much more than read about Comet - I've seen this, though, and it looks straightforward: http://www.rkblog.rk.edu.pl/w/p/django-and-comet/
Re: natural keys and dumpdata
On Fri, Jul 9, 2010 at 8:47 AM, Stijn Hoop wrote:
> This is rather impossible without natural keys, for I cannot know the
> maximum ID of any primary key of tables that they have added data to.
> However with natural keys I can do away with recording the numerical
> ID of an object so this should be possible.

The best solution isn't natural keys (which are so very seldom a good idea); what you need is UUIDs. There are a couple of wontfix'd tickets on this theme, mostly because of the feeling that this doesn't belong in core when it can easily be added by a custom field class.

Two implementations:
http://djangosnippets.org/snippets/1262/
http://www.davidcramer.net/code/420/improved-uuidfield-in-django.html

-- Javier
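A bare-bones version of such a custom field might look like this (the linked snippets are more complete; this is only to show the shape of the idea):

    import uuid
    from django.db import models

    class UUIDField(models.CharField):
        # Minimal sketch of a UUID field; the real implementations above add
        # South introspection, db_index handling and so on.
        def __init__(self, *args, **kwargs):
            kwargs.setdefault('max_length', 36)
            kwargs.setdefault('default', lambda: str(uuid.uuid4()))
            kwargs.setdefault('editable', False)
            super(UUIDField, self).__init__(*args, **kwargs)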
Re: New admin feature: Delete and replace with existing object
I'll reply one by one.

* What if you have multiple models referring to Author? Do you assume that every related model will be updated the same way?
* What if you have multiple foreign keys from Blog to Author? Do you assume that every foreign key on a single model will be updated in the same way?

I don't think this is an issue; we do not need to reinvent the wheel. Django uses get_deleted_objects to retrieve all related objects:

(deleted_objects, perms_needed) = get_deleted_objects((obj,), opts, request.user, self.admin_site)

We can use this feature, and then write a function that cycles and updates, or, for a more efficient algorithm, we can use a queryset update. But maybe it's better to call the save method on every single object, because the save method could be customized.

* What if you use admin actions to delete N authors? Do we assume that every one of the authors will be replaced with the same single substitute?

I think yes, because as you said before, a choice per object would not be so user friendly. But we could define an attr, something like admin.ModelAdmin.replace_multiple_objects = True; if True, a replacer field will be created for every object... but I think this is difficult to achieve, and not so smart...

I think that this behaviour is independent from the model, and I don't think there are different interpretations of this feature. Let me explain: you are going to delete an object, and Django is saying to you: "in order to preserve database integrity, I have to delete all these objects." Do you want to delete them, or do you want to substitute them with another one? Replacing the object everywhere in the db keeps the database integrity safe.

I don't see many hooks here... what kind of hooks do you have in mind? Like defining a list of models in the admin class that will be substituted?

I also realized that we can handle this with a single field:

replace = forms.ModelChoiceField(queryset = qs, empty_label = ugettext("Do not substitute, delete all related objects"))

If no object is selected, Django deletes all the related objects; otherwise it will substitute them.

Thanks Russ, but how do we involve more people in this discussion, and take the discussion to the next step?
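A rough sketch of the reassignment step being described, using the Django 1.2-era _meta API (this is not an existing admin feature, just an illustration of the idea):

    def replace_related(obj, replacement):
        # Repoint every object that refers to `obj` at `replacement`,
        # then delete `obj` itself.
        for related in obj._meta.get_all_related_objects():
            field_name = related.field.name
            qs = related.model._default_manager.filter(**{field_name: obj})
            # a queryset update is efficient, but bypasses any custom save() logic
            qs.update(**{field_name: replacement})
        obj.delete()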
Re: Proposal: {% include_partial %} template tag
Hi Jonathan,

I don't believe you really need that complicated structure: extends -> includes -> extends. If I were you and had this problem, I'd rewrite it to something simpler. However, you can use {% block %} with my http://github.com/buriy/django-containers :

:: file.html ::
{% render "greetings.html" %}
{% part greetings %}{% block greet %}Hi{% endblock %}{% endpart %}
{% part object %}{% block obj %}man{% endblock %}{% endpart %}
{% endrender %}

:: second.html ::
{% extends "file.html" %}
{% block greet %}Hello{% endblock %}

I'm not sure if that always works, but for a few of my cases (including this one), it worked. However, I've rewritten those places and don't use this any more. It's too complex and might cause confusion, because you can create a second template for the include:

:: common.html ::
{% render "greetings.html" %}
{% part object %} man {% endpart %}
{% endrender %}

:: file.html ::
{# this is that django built-in tag, you see it's still highly useful in some cases! :) #}
{% include "common.html" %}

:: second.html ::
{% render "common.html" %}
{% part greet %}Hello{% endpart %}
{% endrender %}

P.S. You can use the second syntax, {% render "common.html" greet="Hello" %}{% endrender %}, for this case, but that can already be written as {% with "Hello" as greet %}{% include "common.html" %}{% endwith %}

So, my last proposal for Django is the following:

{% include %} gets parameters: {% include "common.html" set greet="Hello" %}

{% render %} from django-containers gets added into Django; it has a killer feature: its parts can be blocks. There's no way to do this now in Django without template tags, but I believe there should be one.

On Fri, Jul 9, 2010 at 8:15 PM, Jonathan S wrote:
> Something related, that we could really use is passing not just
> variables to the include, but also blocks.
> [...]
Re: LOGIN_URL, LOGOUT_URL, LOGIN_REDIRECT_URL and SCRIPT_NAME.
Tom,

HTTP_HOST and the others don't solve multiple-host deployment, and they are a solution you can build yourself if you need it. I'd like to see a better solution: the ability to make reverse work for such URLs.

I think the problem is currently in the binding time. The load order is typically the following:

settings.py
*/urls.py
*/models.py
*/admin.py
*/views.py

This is not mentioned anywhere (or is it?), but if this order is broken, "cycle in imports" errors will probably occur. So you can't put reverse in settings.py now unless there is some late-binding construct, like

LOGIN_URL = RevLink('accounts:login', account_type='user')

which can be resolved in views.py or admin.py or models.py into reverse in some way (__unicode__ or __call__?). Magic prefixes could be used as

LOGIN_URL = MAGIC + RevLink('accounts:login', account_type='user')

or, even better, more explicitly:

LOGIN_URL = RevLink('accounts:login', path=PREFIX, host=HOST, protocol=PROTOCOL, {'account_type': 'user'})

I'd like Django to have a better reverse, and better inter-module bindings!

On Fri, Jul 9, 2010 at 5:47 PM, Tom Evans wrote:
> [...]
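A rough sketch of what such a RevLink could look like (entirely hypothetical API, just to make the late-binding idea concrete):

    class RevLink(object):
        # Late-binding reverse: the URL is only resolved when the value is
        # actually rendered, after the URLconf has been loaded.
        def __init__(self, view_name, **kwargs):
            self.view_name = view_name
            self.kwargs = kwargs

        def __unicode__(self):
            # import here so that constructing a RevLink in settings.py does
            # not trigger URLconf loading at settings import time
            from django.core.urlresolvers import reverse
            return reverse(self.view_name, kwargs=self.kwargs)

    # hypothetical usage in settings.py:
    # LOGIN_URL = RevLink('accounts:login', account_type='user')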
Re: LOGIN_URL, LOGOUT_URL, LOGIN_REDIRECT_URL and SCRIPT_NAME.
On Jul 7, 7:11 pm, Graham Dumpleton wrote:
[snip]
> web application, to be a well behaved WSGI citizen, should honour
> SCRIPT_NAME setting as supplied by the server, and ensure that ways
> are provided such that everything in the users code, including
> configuration, urls or settings files, can be expressed relative to
> the URL that the application is mounted at, thereby avoiding as much
> as possible any need for a user to modify their code base when
> deploying to a new environment at a different location in URL
> namespace.

IMO one root problem here (which has been discussed before) is that settings.py conflates "deployment config" with "application config" and thus tends to lead to fuzzy thinking about which is which (which causes problems especially for larger organizations with separate teams for development and ops). Application config should not have to change between different deployments of the same codebase; deployment config almost certainly will.

The status quo makes LOGIN_URL (and friends) themselves a mishmash of deployment config and application config; i.e. the initial "application mount point" portion is deployment config, and the rest is clearly application config (it would only change if the URLconf changes). This seems like a bad thing to me: the ops team shouldn't have to understand the internal layout of the application's urls in order to deploy it correctly.

I would rather see LOGIN_URL et al as purely application config, in which case they should automatically respect the URL mount point. The problem, as Russ points out, is that if this mount point is learned at runtime from the WSGI request, it adds complication to code outside the request cycle that needs to know where the application is mounted.

But that exact same problem already exists for the site hostname! Django punts this issue to the developer and/or contrib.sites. Wouldn't it be most sensible to treat the URL mount point similarly to hostname, since they are really just two pieces of the same data: the deployed root URL of the application? (Practically speaking, this could mean a new field in contrib.sites.Site, or allowing/condoning "example.com/something" as a value for the domain field, and extending RequestSite to do the same by checking SCRIPT_NAME.)

Carl
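A small sketch of the RequestSite variation suggested above (not current Django behaviour; the subclass and its name are invented):

    from django.contrib.sites.models import RequestSite

    class ScriptNameRequestSite(RequestSite):
        # Hypothetical: include the URL mount point in the site's domain,
        # so "example.com/something" style values work uniformly.
        def __init__(self, request):
            super(ScriptNameRequestSite, self).__init__(request)
            script_name = request.META.get('SCRIPT_NAME', '')
            if script_name and script_name != '/':
                self.domain = self.name = self.domain + script_name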
Regression problem on admin date format
Hello,

Has anybody (Marc Garcia?) checked the http://code.djangoproject.com/ticket/13621 ticket? It describes a bug concerning date and time formats: the admin does not conform to the i18n locale settings when displaying time and date formats and reverts to the default format. It seems to be a true regression, as http://djangoadvent.com/1.2/i18n-l10n-improvements/ explains the correct display. I have reverted to Django 1.2.0 and the display works perfectly.

As you can imagine, it is quite an annoying bug for users of non-English date formats, perhaps enough to warrant a 1.2.2 release.

-- Antoni Aloy López Blog: http://trespams.com Site: http://apsl.net
Re: natural keys and dumpdata
On Jul 10, 1:47 am, Stijn Hoop wrote: > Hi, > > I am trying to use the 'natural keys' feature of django to make a sort > of "future proof" fixture loading possible. > [...] > With the patch linked below[1] I have successfully used dumpdata and > loaddata for a .json export of my tables. Of course I would like to > see something like this accepted, but this is of course a sort of > "feature request".

It seems that my ticket in http://code.djangoproject.com/ticket/13252 covers this. It's ready for review if anyone wants to give it a spin...
Re: natural keys and dumpdata
Hi Chris,

You're not 100% correct with this statement:

370 When ``use_natural_keys=True`` is specified, the primary key is no longer
371 provided in the serialized data of this object since it can be calculated
372 during deserialization::

since in other, older fixtures (that don't use use_natural_keys yet) foreign keys can rely on objects' pks from this fixture. Or you might want to load objects by their pk for some purposes. There might be other cases; these two are the simplest ones.

A new option to say whether the pk should be saved or not... seems like overkill. I've no idea if this is important or what decision to make, but anyway, I want to bring attention to these (rare?) cases.

On Sat, Jul 10, 2010 at 2:13 AM, SmileyChris wrote:
> It seems that my ticket in http://code.djangoproject.com/ticket/13252
> covers this. It's ready for review if anyone wants to give it a
> spin...

-- Best regards, Yuri V. Baburov, ICQ# 99934676, Skype: yuri.baburov, MSN: bu...@live.com
Re: New admin feature: Delete and replace with existing object
On Fri, Jul 9, 2010 at 10:37 PM, Ric wrote:
> [...]
> do you want to delete them or you want to substitute with another one?

I understand this. What you seem to be missing is that in the general case, there isn't a single, canonical answer that will *always* be correct.

If you are deleting 1 object, there are O(N*M) objects that need to be updated, where N is the number of models X that have a relation to the object being deleted, and M is the number of instances of X that have a relation to the object being deleted.

As an example: I have an Author, Article and Address record. Article and Address both have a FK on Author. If I delete Author "John", I need to do something to any Article and Address record that points at John. In my business logic, I want to update all Articles to point at a dummy author, but I want to cascade delete all the Address records.

To complicate issues some more - let's say I want some Article records to point at a dummy author, and some to point at a new author that is going to take responsibility. I need to be able to select which authors will be applied to which articles. There might even be a programmatic scheme that I can use to semi-automate this reassignment (or automate the initial values prior to a formal confirmation by the site admin).

Next complication -- if you're doing a bulk delete of P objects (using an admin delete action), all the decisions that had to be made N*M times now need to be made P*N*M times.

On top of all this, there are cascading problems. For example, let's say Authors can belong to Teams. If I delete a team, I know I want to cascade delete all the Authors that belong to that team. But the Articles associated with members of those teams need to be reassigned. Then consider the bulk deletion of teams, and so on.

The point I'm trying to make is that there is no single answer that we can implement that will be right for all business cases.
So the best we can do is to make it *possible* to introduce pre-deletion cleanup logic, make that interface as clean as we can, and leave it up to the end user to implement a scheme that is appropriate for their implementation. These are the 'hooks' that I'm referring to.

Once those hooks are in place, the simple "confirm you want to replace all instances of X with Y" approach that you're proposing should be a couple of lines of code, which will probably form the simple example provided in the documentation. However, it will be possible for other users to implement more complex schemes, as appropriate.

> thanks Russ, but how to involve more people in this discussion, and
> take the discussion to a next step?

A prototype implementation would be a good start. I've been raising design issues looking entirely from a public API perspective. I haven't done any exploration of how this would be implemented, so I have no idea what technical hurdles (if any) exist.

Yours, Russ Magee %-)
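Purely as an illustration, once such hooks existed the simple case might be no more than something like this (the hook name and signature below are invented, not a real ModelAdmin API; Author and Article are the example models from the discussion above):

    from django.contrib import admin

    class AuthorAdmin(admin.ModelAdmin):
        def pre_delete_cleanup(self, request, obj):
            # invented hook: repoint Articles at a dummy author before the
            # normal cascade delete runs; Address records still cascade.
            dummy = Author.objects.get(name='dummy')
            Article.objects.filter(author=obj).update(author=dummy)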
Re: natural keys and dumpdata
On Sat, Jul 10, 2010 at 3:13 AM, SmileyChris wrote:
> It seems that my ticket in http://code.djangoproject.com/ticket/13252
> covers this. It's ready for review if anyone wants to give it a
> spin...

I'll put it on my todo list; if anyone else wants to give it a sanity check, I'd appreciate the extra eyeballs.

Yours, Russ Magee %-)