Re: +1 on part of #1810 (put parse_dsn() in django.conf)
My issue with adding DSN support to Django isn't that it's not useful to some people, but that it feels like we are adding functionality that belongs in the DB-API layer, not Django. In an ideal world, the database settings would just be passed verbatim to the connect() function of the appropriate DB backend. Unfortunately, this is the one function in DB-API that varies among providers, so there is no standard syntax. Some implementations support a DSN argument and others don't.

It seems like for every objection to the current feature (reusing existing environment variables, supporting "standard" DSNs, securing passwords) there is a simple, reasonable solution (albeit one that may require writing a few lines of code), yet some people don't find that satisfactory. There are lots of useful snippets on the Django project site that aren't part of the official distribution. Couldn't someone just post the parse_dsn code there and let it bake for a while?
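To illustrate, a snippet along these lines is all I have in mind. This is only a rough sketch, assuming a simple semicolon-separated key=value DSN format; the key names and the mapping onto Django's DATABASE_* settings are my own assumptions, not any standard:

    def parse_dsn(dsn):
        # Sketch: map "host=localhost;dbname=blog;user=me;password=x"
        # onto Django's DATABASE_* setting names (assumed mapping).
        keys = {
            'dbname': 'DATABASE_NAME',
            'host': 'DATABASE_HOST',
            'port': 'DATABASE_PORT',
            'user': 'DATABASE_USER',
            'password': 'DATABASE_PASSWORD',
        }
        settings = {}
        for part in dsn.split(';'):
            if '=' not in part:
                continue
            name, value = part.split('=', 1)
            name = name.strip().lower()
            if name in keys:
                settings[keys[name]] = value.strip()
        return settings

    # parse_dsn('host=localhost;dbname=blog;user=me;password=x')
    # -> {'DATABASE_HOST': 'localhost', 'DATABASE_NAME': 'blog', ...}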
Re: EVOLUTION - Add Field Schema Evolution Support
> I am wondering how to retrieve a collection of columns within a table,
> whilst using the standard python dbapi2 functionality.

Do you mean cursor.description?

    cursor = connection.cursor()
    cursor.execute('select * from blog_post where 1 = 0')
    for col in cursor.description:
        print col

Each col is a 7-item sequence containing column name, type_code, display_size, internal_size, precision, scale, and null_ok.

http://www.python.org/dev/peps/pep-0249/

-Dave
Re: RFC: Django history tracking
There was a similar thread on this earlier where I commented about a slightly different way to store the changes:

http://groups.google.com/group/django-users/browse_thread/thread/f36f4e48f9579fff/0d3d64b25f3fd506?q=time_from&rnum=1

To summarize, in the past I've used a time_from/time_thru pair of date/time columns to make it more efficient to retrieve the version of a row as it looked at a particular point in time. Your design of just using change_date makes this more difficult.

I can also think of use cases where I want the versioning to track both date and time, since I would expect multiple changes on the same day. Maybe these could also be options?
Re: RFC: Django history tracking
Uros Trebec wrote:
> > To summarize, in the past I've used a time_from/time_thru pair of
> > date/time columns to make it more efficient to retrieve the version of
> > a row as it looked at a particular point in time. Your design of just
> > using change_date makes this more difficult.
>
> I don't know what you mean exactly, but I'm not using just
> change_date. The ID in *_history table defines the "revision/version
> number", so you don't have to use "change_date" to get the exact
> revision.

Let me clarify. What I meant was that your design makes it hard to directly query the row that was in effect at a certain point in time, i.e. given a date/time, how do I find the record that was current at that instant? In your model I would have to use a query like this to find the active record for 1/1/06:

    select *
    from FooHist
    where change_date = (select max(change_date)
                         from FooHist
                         where change_date < '2006-01-01')

So you find the most recent change that occurred *before* the date in question, which requires a subselect. That is a bit ugly, inefficient, and I think very difficult to map to the Django DB API. With a time_from/time_thru model such a query looks like this:

    select *
    from FooHist
    where time_from <= '2006-01-01'
      and (time_thru > '2006-01-01' or time_thru is null)

So here we are looking for the row whose *active interval* contains the date in question, which is a simple, direct query (no subselect). The test for null is a special case for the version of the row that is current (has no end date). I've seen other people use a sentinel value like '9999-12-31' to make the query a little simpler (but then you get that magic date all over the place).

I know some people might say this smells of premature optimization, but in my experience - where I have had to make a lot of applications work correctly for a past date - you may end up joining many tables with such an expression and the subselects will kill you. You are simply adding one more date/time field to allow joining the table via time more easily. Since this is a *history* table, joining based on time is a very common use case.
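In Django terms, the history model I'm describing would look roughly like this. It is only a sketch: the FooHist name and extra fields are hypothetical, and the lookup is just meant to show that the "as of" query maps cleanly onto the DB API:

    from django.db import models
    from django.db.models import Q

    class FooHist(models.Model):
        foo_id = models.IntegerField()       # key of the row being versioned
        time_from = models.DateTimeField()   # when this version became active
        time_thru = models.DateTimeField(null=True, blank=True)  # NULL = still current
        # ... plus a copy of the versioned columns ...

    def as_of(foo_id, when):
        # The "row in effect at time `when`" lookup, equivalent to the
        # time_from/time_thru SQL above.
        return FooHist.objects.get(
            Q(foo_id=foo_id), Q(time_from__lte=when),
            Q(time_thru__gt=when) | Q(time_thru__isnull=True))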
Syncdb generates non-unique foreign key constraints
I mistakenly posted this in django-users so reposting here...

I've been having a problem rebuilding my database from scratch via syncdb. I've tracked it down to a duplicate constraint name. Here is the output from manage.py sql for my app:

    ALTER TABLE `data_rawinst` ADD CONSTRAINT `inst_id_referencing_data_inst_id`
        FOREIGN KEY (`inst_id`) REFERENCES `data_inst` (`id`);
    ALTER TABLE `data_instmap` ADD CONSTRAINT `inst_id_referencing_data_inst_id`
        FOREIGN KEY (`inst_id`) REFERENCES `data_inst` (`id`);

Note that I have two tables, both with FKs to data_inst, and it's generating the same constraint name (`inst_id_referencing_data_inst_id`). It seems the source table should be part of that name, such as `data_rawinst_inst_id_referencing_data_inst_id`.

I'm on the trunk at Rev 3350. This used to work, I'm pretty sure, but I haven't rebuilt the whole DB from scratch for a long time so I don't know if it's been lingering for a while.

Thanks,
-Dave
Re: Syncdb generates non-unique foreign key constraints
Michael Radziej wrote:
> DavidA wrote:
> > I've been having a problem rebuilding my database from scratch via
> > syncdb. I've tracked it down to a duplicate constraint name. Here is the
> > output from manage.py sql for my app:
> > ...
> > This used to work, I'm pretty sure, but I haven't rebuilt the whole DB
> > from scratch for a long time so I don't know if it's been lingering for
> > a while.
>
> The algorithm for naming the constraints has changed since the
> original way created names that were too long for mysql.
>
> See ticket #2257. There's not a patch, but the idea of the solution.
>
> Michael

Thanks. I'll try the suggestion in the ticket. I also did a little more research on this and found this MySQL bug report:

http://bugs.mysql.com/bug.php?id=13942

One thing to note is that in MySQL, if you don't name the constraint, it automatically creates one of the form <tablename>_ibfk_# where # is a number, ensuring it's unique, similar to the suggestion in the ticket. (But as the MySQL bug report points out, very long table names can generate constraint names that exceed the 64-character limit.)

Since the SQL to generate FK constraints is somewhat non-standard, wouldn't it make more sense if the generation of the constraint statement was handled in the backend, where you can apply more DB-specific logic to it? Right now the statement is mostly built up in django.core.management.syncdb.

-Dave
Re: Table options in CREATE TABLE..
Geert,

Just for the record, I use the "SQL initial data file" feature that Adrian mentioned to enable full-text indexing on a couple of my tables. The relevant part of my script is:

    ALTER TABLE data_rawtrade ENGINE=MyISAM;
    CREATE FULLTEXT INDEX ix_ft_data_rawtrade_all
        ON data_rawtrade (tradeType, investIdType, investId, portfolio, book,
                          strategy, counterparty, custodian, account);
    CREATE FULLTEXT INDEX ix_ft_data_rawtrade_intalloc
        ON data_rawtrade (portfolio, book, strategy);
    CREATE FULLTEXT INDEX ix_ft_data_rawtrade_extalloc
        ON data_rawtrade (counterparty, custodian, account);

It all works fine; of course I need to use custom SQL to actually do a full-text search, but that's no biggie. There are two suggestions I would make regarding this feature to improve it a bit:

1) In addition to providing initial data in a file <app>/sql/<model>.sql, I think it would be good to support a form like <app>/sql/<backend>/<model>.sql so if you want to put DB-specific stuff in there you can. That way if I moved my app to PostgreSQL, for example, it wouldn't try to execute the SQL (which would break).

2) While there is a get_in_bulk() method in the DB API, for cases like full-text indexing, I think it would be more useful to have a get_from_cursor() or something like that where you could just use custom SQL to do whatever query you needed (i.e. "select * from data_rawtrade where match(portfolio, book, strategy) against ('hedge')") and load the resulting objects directly from the cursor. It seems a bit wasteful to do a "select id from ...", get the IDs, and then call get_in_bulk() (two DB hits and a potentially gnarly "in" clause depending on how many objects you found with your search). See the sketch below.

-Dave
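Here is roughly what I mean by get_from_cursor(). This is only a sketch of a hypothetical helper, not an existing API; RawTrade is assumed to be the model behind data_rawtrade, and the query is assumed to select every column of the model's table in field order:

    from django.db import connection

    def get_from_cursor(model_cls, sql, params=None):
        # Hypothetical helper: run custom SQL and build model instances
        # directly from the returned rows.
        cursor = connection.cursor()
        cursor.execute(sql, params or [])
        field_names = [f.attname for f in model_cls._meta.fields]
        return [model_cls(**dict(zip(field_names, row)))
                for row in cursor.fetchall()]

    # trades = get_from_cursor(RawTrade,
    #     "select * from data_rawtrade"
    #     " where match(portfolio, book, strategy) against (%s)", ['hedge'])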
Re: avoiding ADD CONSTRAINT in management.syncdb()
Alexis Smirnov wrote:
> ALTER TABLE `console_restoreevent` ADD CONSTRAINT
>     `identity_id_referencing_console_identity_id` FOREIGN KEY (`identity_id`)
>     REFERENCES `console_identity` (`id`);
> ALTER TABLE `console_backupevent` ADD CONSTRAINT
>     `identity_id_referencing_console_identity_id` FOREIGN KEY (`identity_id`)
>     REFERENCES `console_identity` (`id`);
> COMMIT;
>
> Questions:
> - Any idea why MySQL 5.0.20-nt doesn't like the above SQL?
> - Is it a good idea to try to create SQL that respects the order of classes
>   within the .py file (thus avoiding ALTER TABLE)?
> - If so, do you think the fix above is the right one?

MySQL requires each constraint to have the same name. If you have two FK fields to the same model, with the same name, the current naming scheme will generate the exact same constraint name. That's what the cryptic MySQL error is about (both of your constraints are named `identity_id_referencing_console_identity_id`).

It seems an easy fix would be to add the referencing table name to the constraint name, but there is a 64-character limit on these names in MySQL. In fact, I think it used to work that way and was changed to work around the length problem, which now triggers a new one. Here is the old thread with some references to relevant tickets and a link to the MySQL bug report:

http://groups.google.com/group/django-developers/browse_thread/thread/103d6c504f78d59d/db4d115ddb175d27?q=syncdb&lnk=gst&rnum=1#db4d115ddb175d27

A simple workaround is to name the field differently in each class (e.g. identity and ident).
Re: avoiding ADD CONSTRAINT in management.syncdb()
Malcolm Tredinnick wrote:
> On Wed, 2006-07-19 at 11:24 +0000, DavidA wrote:
> > MySQL requires each constraint to have the same name.
>
> I'm pretty sure you meant to say "different name" there. :-)

Oops. I guess that's what the "preview" button is for.

> The problem should be fixed in r3373 and it only costs two extra
> characters for a duplicated relation (unless you have more than 10 with
> the same potential name, in which case, it'll cost you one more
> character).

But doesn't that patch assume you are generating all constraints at the same time? If I create one model and run syncdb, then later create another with the same-named FK and run syncdb, won't they still end up with the same names? Or am I misunderstanding how that process works?
Re: avoiding ADD CONSTRAINT in management.syncdb()
Malcolm Tredinnick wrote:
> I was trying to avoid a hash-based solution because it leads to
> unreadable names (and I don't think every database supports unnamed
> constraints, so that isn't a universal solution, either). I need to do a
> bit of research and then come up with a legal hash construction.

But why not let the backend decide the best way to build the ALTER TABLE / ADD CONSTRAINT statement? Then the MySQL backend could leave the constraints unnamed, avoiding the uniqueness/length issues, and other backends could do it the old way since they aren't affected. It seems this bit of SQL is non-standard enough that it might be worth moving out of management.py.

As you pointed out, the hashing approach will be ugly and may take a bit of work to get a format that is legal in all DBs. And the other approaches that I thought of are worse (query the DB for current constraint names to ensure you don't collide, or scan all apps during syncdb to build a complete list of constraints before running your patch's algorithm).

-Dave
Re: avoiding ADD CONSTRAINT in management.syncdb()
Malcolm Tredinnick wrote:
> That is also possible, but I was trying to avoid another reliance on the
> backend (things like "sqlall" start to get complex). Still, it's
> probably only a single proxy function call, so not too hard to maintain.
>
> I'm sure if I keep coming up with bad implementations, you'll keep
> pounding. So one day it will be perfect. :-)

Easy for me to throw out ideas when I'm not the one doing the maintenance!

I'm actually fine with the hashing idea. Constraints don't really need to have nice names since you rarely see them and don't reference them during normal SQL usage. And the MySQL bug I referenced earlier actually had to do with the auto-generated constraint names: MySQL still uses table/column names to generate them and adds a number on the end, like in your patch. If tables have really long names, it can generate constraint names that exceed the 64-character limit. So maybe leaving the logic in management.py and getting it right there _is_ the better way to go.

One thing I did notice about the current algorithm: it uses the source column name but not the source table name. I actually think that's backwards. It should use the table names, not the column names, which would result in a unique constraint name, the one exception being when you have two FK columns in the same table. But that's much more rare (although still possible).

Anyway, we've beaten this horse to death. I'm interested to see what you come up with...

-Dave
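For what it's worth, a hash-based name doesn't have to be completely unreadable. Something like this sketch (purely illustrative, not a patch) keeps the table/column names up front and only appends a short hash so the result is unique and always fits within the 64-character limit:

    import md5  # Python 2.4-era module; later versions would use hashlib

    def fk_constraint_name(table, column, ref_table, ref_column, max_length=64):
        # Illustrative only: readable prefix plus a short hash of the full
        # description, truncated so the result never exceeds max_length.
        full = '%s_%s_refs_%s_%s' % (table, column, ref_table, ref_column)
        digest = md5.new(full).hexdigest()[:8]
        return '%s_%s' % (full[:max_length - 9], digest)

    # fk_constraint_name('data_rawinst', 'inst_id', 'data_inst', 'id')
    # -> 'data_rawinst_inst_id_refs_data_inst_id_<short hash>'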
Re: MS-SQL server LIMIT/OFFSET implementation
> > On Thu, 2006-07-20 at 17:34 -0400, Dan Hristodorescu wrote:
> > >
> > > and for SQL 2000 should look like this:
> > >
> > >     SELECT fields FROM table
> > >     WHERE primary_key IN
> > >         (SELECT TOP limit primary_key FROM table
> > >          WHERE primary_key NOT IN
> > >              (SELECT TOP offset primary_key FROM table
> > >               WHERE filter_conditions
> > >               ORDER BY sort_field)
> > >          AND filter_criteria
> > >          ORDER BY sort_field)
> > >     ORDER BY sort_field
> > >
> > > And with join tables it looks completely crazy (I've only used it
> > > using DISTINCT with joins), but that's the optimal way to do it.

Since it's so complicated for SQL 2000, couldn't you just cheat a bit and do some of it in SQL and some in Python?

    SELECT TOP limit+offset ... FROM table ...

and then in the backend:

    return cursor.fetchall()[offset:]

(OK, I know it's more complicated than that, but you get the idea.) That's typically the way I've seen paging done with SQL 2k in the past.

-Dave
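Spelled out a little more, the backend side of that cheat would be something like the sketch below. It is illustrative only, and hand-waves over how the TOP clause actually gets spliced into the generated SQL:

    def fetch_page(cursor, base_sql, params, offset, limit):
        # SQL Server 2000 sketch: ask for the first offset+limit rows and
        # discard the first `offset` of them in Python. Wasteful for large
        # offsets, but simple and correct.
        sql = base_sql.replace('SELECT', 'SELECT TOP %d' % (offset + limit), 1)
        cursor.execute(sql, params)
        return cursor.fetchall()[offset:]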
Re: Model inheritance API
Malcolm Tredinnick wrote:
> On Sun, 2006-07-23 at 17:12 +0100, Bill de hÓra wrote:
> > Malcolm Tredinnick wrote:
> > > ---
> > > 3. What you don't get
> > > ---
> > > [...]
> > > I am not implementing the "everything in one table" storage model. It is
> > > not universally applicable (you can't inherit from third-party models,
> > > for a start) and I have software engineering objections to it.
> >
> > Why? NOT NULL constraints in the children?
>
> That's one good reason. It's also not very normalised; table design with
> lots of sparse columns won't get me invited to the cool parties. Wide
> rows where you only need to access a few fields tend to not always
> perform as well as people might like (I'm talking about very large
> datasets here; small datasets don't matter so much).
>
> Finally, because I would like to support extending third-party models
> out of the box, we need the multi-table model and I don't want/need the
> extra complexity of more and more alternatives right now. That's
> something somebody could add later if their continued existence depended
> on it. It's all "under the covers" work.
>
> I'm on a "pick one thing, do it well" drive at the moment (with a slight
> concession to the abstract base class case).

The big downside to one table holding all the data would be that if I ever added a new derived class, I would need to rebuild the table for _all_ other classes. Unless you pair this with schema evolution, that would seem to be a showstopper.

-Dave
Re: Validation Aware Models and django.forms on steroids
One comment on ValidationErrors:

When I've done these types of things in the past, I've typically returned two levels of validation messages: warnings and errors. An error indicates that the attempted save will fail (i.e. it would either cause an object to be saved in an invalid state or it would violate a DB constraint and throw an exception in the backend). A warning would not result in a failure but is still worthy of notifying the user; there are cases where it's OK, so it's not identified as an error.

An example with a user registration form:

    user id:
    first name:
    last name:
    password:
    confirm password:

Errors:
- user id cannot be blank
- user id already exists
- password and confirmation do not match

Warnings:
- first/last name not set
- password blank
- password shorter than suggested length

And the logic I'll typically have is:
- if errors exist, redisplay the form with errors and ask the user to fix and resubmit
- if no errors but some warnings, redisplay the form with warnings and ask the user to fix _or_confirm_save_with_warnings_
- if no errors and no warnings, just save as usual

The reason I point this out is that I like to centralize the logic where the validation rules live - I don't want errors checked for in one place and warnings in another (as others have pointed out, I want ALL validation messages generated at once and then displayed in the form). So I'd suggest a ValidationException that has an errors dict, a warnings dict and possibly a form (or a very easy way to create a form from the exception).

-Dave
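Roughly, what I have in mind is something like this sketch; the names and structure are just placeholders, not a concrete proposal:

    class ValidationException(Exception):
        # Sketch: blocking errors and non-blocking warnings, keyed by
        # field name (non-field messages could go under a '' key).
        def __init__(self, errors=None, warnings=None):
            Exception.__init__(self, 'validation failed')
            self.errors = errors or {}      # field -> [messages]; save must not proceed
            self.warnings = warnings or {}  # field -> [messages]; save OK if user confirms

        def is_fatal(self):
            return bool(self.errors)

    # Typical view logic (pseudo-code):
    # try:
    #     obj.validate()
    #     obj.save()
    # except ValidationException, e:
    #     if e.is_fatal() or not request.POST.get('confirmed'):
    #         ...redisplay the form with e.errors and e.warnings...
    #     else:
    #         obj.save()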
Re: CharFields and children defaulting to empty string
Malcolm Tredinnick wrote:
> When I first started using Django, this drove me nuts.

I had the same reaction, but I also understand there isn't a great way to handle it. In Microsoft's SQL Enterprise Manager, you can enter Ctrl-0 in a field and it will set that field to NULL. I'm thinking about doing something like this in one of my Django forms since I need to support both an empty string _and_ a NULL input, so I want an explicit way to differentiate between the two.

A control character doesn't work well for browser input, but I may end up treating an empty string as NULL and forcing the user to put "" if they really want a blank string (which in this case is the exception, not the norm). I don't know if something like that is a reasonable solution for Django or not, but I know it would come in handy in a number of situations.

-Dave
Re: CharFields and children defaulting to empty string
gabor wrote:
> assuming that you want to differentiate between:
>
> - the user did not fill in the data
> - the user's input was ""
>
> wouldn't it be better to represent this in html as a checkbox + an input
> field?
>
> and by default have the checkbox unselected, and the input-field disabled.
> and then if the user checks the checkbox, enable the field.
>
> and then in the view code, simply check the checkbox's value and set the
> field to null or to what-was-in-the-input-field.

gabor,

That's certainly more explicit but makes for a busier UI and also makes it more difficult to reuse a standard FormWrapper in the template. After thinking about this more, I can't come up with a case where the input value should ever be an empty string, so I think always mapping an empty string to NULL on save() will work just fine.

Just for some background, the use case here is a financial instrument editor (stock, bond, swap, etc.) where certain fields come from a third-party data vendor but occasionally the user will wish to override them. Other fields are only maintained internally, so they can be directly edited. For the overridable fields, if the user enters a new value, it becomes the effective value for that field, but if they clear the field, it reverts back to the default value.

For my specific case, I don't think treating empty fields as NULL will create a problem (and if it does I can use the "" convention as an easy workaround). Thanks for the ideas, though.

-Dave
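Concretely, the mapping I'm planning is just a save() override along these lines. This is only a sketch: the Instrument model, the coupon_source field and the literal "" escape token are my own conventions, not anything in Django:

    from django.db import models

    class Instrument(models.Model):
        coupon_source = models.CharField(maxlength=50, null=True, blank=True)
        # ... other fields ...

        def save(self):
            # Treat an empty form value as NULL (fall back to the vendor
            # value); the two-character token "" means "really store an
            # empty string".
            if self.coupon_source == '':
                self.coupon_source = None
            elif self.coupon_source == '""':
                self.coupon_source = ''
            super(Instrument, self).save()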
Potential bug in django.core.handlers.modpython ?
Hi,

I noticed an inconsistency between the request META information in the mod_python handler versus the base handler. I'm opening the URL http://web/data/pos/, which goes through two URLconfs.

The default urls.py for the project:

    (r'^data/', include('pfweb.core.urls')),

And the urls.py for the 'core' app:

    (r'^pos/$', 'pfweb.core.views.list_positions'),

But under mod_python, request.META['PATH_INFO'] is '/pos/' while in the development server it's '/data/pos/' (which I think is right). I'm using the PATH_INFO as a base for my next/previous links, so they aren't working in production (under Apache). Shouldn't I be able to see the full path of the URL somehow under mod_python? (Maybe there is a better way than I'm trying.)

I'm not sure what's right so I thought I'd post a question here before submitting a ticket.

Thanks,
-Dave
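To make the breakage concrete, the links are built roughly like this (a simplified sketch of the view; the parameter names are made up):

    def list_positions(request):
        page = int(request.GET.get('page', '1'))
        # '/data/pos/' on the dev server, but '/pos/' under mod_python,
        # which is why the generated links break in production.
        base = request.META['PATH_INFO']
        next_url = '%s?page=%d' % (base, page + 1)
        prev_url = '%s?page=%d' % (base, page - 1)
        # ... render the template with next_url/prev_url ...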
Re: Potential bug in django.core.handlers.modpython ?
Steven Armstrong wrote:
> Have you got a folder/file named 'data' in your Apache document root?
> If so, try nuking or renaming it.

No. And no virtual directories or aliases named 'data' either, nor a file or folder named 'data' in my Django project directory. Note that the page URL (http://web/data/pos/) is working correctly under both Apache and the development server. It's just that I can't generate the correct URL for my next/prev links under Apache because that path is not correct.

-Dave
Re: Potential bug in django.core.handlers.modpython ?
Waylan Limberg wrote:
> My guess is the problem lies in your Apache settings. Post a copy of
> your mod-python settings and we'll see what we come up with.

Here you go (C:/pf/src/pfweb is the Django project directory):

    NameVirtualHost *
    <VirtualHost *>
        ServerName web
        DocumentRoot "C:/pf/src/pfweb"

        <Location ...>
            SetHandler mod_python
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE pfweb.settings
            PythonInterpreter admin
        </Location>

        <Location ...>
            SetHandler mod_python
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE pfweb.settings
            PythonInterpreter apps
        </Location>

        Alias /admin-media/ "C:/Python24/Lib/site-packages/Django-0.95-py2.4.egg/django/contrib/admin/media/"
        <Location ...>
            SetHandler none
        </Location>
    </VirtualHost>
Re: Call for testing: New setup.py
Works on Windows 2003 Server SP1 + Python 2.4.2
Re: Call for testing: New setup.py
DavidA wrote:
> Works on Windows 2003 Server SP1 + Python 2.4.2

I spoke too soon. I tried to run 'manage.py test' and it complained about an invalid action, so I poked around and the management.py in C:\Python24\lib\site-packages\django\core was different from the one in the SVN checkout directory (that I had just done an install from). I figured it had installed the old stuff from the build directory, so I removed the build directory and re-ran 'setup.py install' and now I get this error:

    C:\Django\trunk>setup.py install
    running install
    running build
    running build_py
    running build_scripts
    creating build
    creating build\scripts-2.4
    copying and adjusting django\bin\django-admin.py -> build\scripts-2.4
    running install_lib
    warning: install_lib: 'build\lib' does not exist -- no Python modules to install
    running install_scripts
    copying build\scripts-2.4\django-admin.py -> C:\Python24\Scripts
    running install_data

    C:\Django\trunk>

I tried manually creating 'build\lib' but that didn't help (it still doesn't build or install). I also tried 'setup.py build' and it does nothing. I then tried removing the django installation directory in lib\site-packages and it still won't install anything. Now I've sort of hosed myself - guess I'll downgrade...

-Dave
Re: Call for testing: New setup.py
DavidA wrote:
> I spoke too soon.

I _really_ spoke too soon. I tried again as 'python setup.py install' rather than 'setup.py install' and it worked. For some reason my file type mapping on this particular Win box was mucked up.
Re: MSSQL Support
Sean De La Torre wrote:
> I've been maintaining/enhancing a ticket
> (http://code.djangoproject.com/ticket/2358) contributed by another
> django user that adds MSSQL support to django. In addition to what
> that user started, I've added full introspection capabilities and
> patched a few bugs that I've found. I've been running a production
> site using this patch for about a month now, and the MSSQL integration
> seems to be stable.
>
> I'd appreciate it if other MSSQL users could give the patch a try.
> The one item missing from the ticket is paging, but MSSQL doesn't
> support that natively, so any input regarding that problem would also
> be most appreciated.
>
> If the Django-users list is the more appropriate place for this
> message, please let me know.
>
> Thanks,

What version of SQL Server have you been testing with? I think in 2005 there is support for paging. This brings up the question of how to handle different versions of backends.
Re: MSSQL Support
Sean De La Torre wrote:
> I've been testing with SQL Server 2000 and MSDE. When I have more
> time I intend to install SQL Server 2005 Express to see if there are
> any issues with the newer versions.
>
> On 10/21/06, DavidA <[EMAIL PROTECTED]> wrote:
> > Sean De La Torre wrote:
> > > [snip]
> >
> > What version of SQL Server have you been testing with? I think in 2005
> > there is support for paging. This brings up the question of how to
> > handle different versions of backends.

For 2005 you can use the new ROW_NUMBER() function to help with limit/offset:

    SELECT * FROM
        (SELECT ROW_NUMBER() OVER (ORDER BY field DESC) AS Row, *
         FROM table) AS foo
    WHERE Row >= x AND Row <= y

For 2000 I've seen people use this approach, which works if your sort field(s) are unique:

    SELECT * FROM
        (SELECT TOP x * FROM
            (SELECT TOP y fieldlist
             FROM table
             WHERE conditions
             ORDER BY field ASC) as foo
         ORDER BY field DESC) as bar
    ORDER BY field ASC

So to get records 81-100, y is 100 and x is 20: the inner-most select gets the top 100 rows, the middle select orders those in reverse and takes the top 20, and the outer select reverses them again to put them back in the right order.

I'm not sure if the db backends separate responsibilities enough to allow you to wrap this up nicely in the SQL Server backend. But it might work.
Re: Suggestion: Aggregate/Grouping/Calculated methods in Django ORM
Jacob Kaplan-Moss wrote:
> No, I think not -- I think that syntax (``queryset.groupby(field).max()``)
> actually looks like the best proposal for aggregates I've seen thus far...
>
> Thoughts, anyone?
>
> Jacob

I think it quickly gets more complicated than that syntax would support. For example, how would you ask for more than one aggregate value in that syntax? My common use case is grouping a bunch of financial positions, where the SQL would look something like:

    select account, count(*), sum(quantity), sum(total_pnl)
    from position
    group by account

Would I have to call queryset.groupby(account) three times: once for count(), once for sum(quantity) and once for sum(total_pnl)?

And what exactly does queryset.groupby() return? In my case, if account is a ForeignKey from a Position model to an Account model, can I dereference fields from the result?

    account_summary = Position.objects.filter(date=today).groupby(account)
    for summary in account_summary:
        print summary.name   # would this work? Is this the name property of an Account?

And how would I dereference the aggregate fields in the groupby results? By index? (Is account_summary[0][2] the quantity sum of the first account summary row?)

I've run into all of these issues (multiple aggregate columns, dereferencing model relations, aggregate alias names) in playing around with this, and I think they are all problems you run into quickly that make a solution rather complicated. My idea was that queryset.groupby() could return some sort of dynamic Django model class whose attributes were the aggregated fields plus the fields you were grouping by, and if you were grouping by a relation field, it would magically work like any other model relation. But I don't know how complicated that would be and I haven't thought of a syntax that works nicely for the more complex cases.

-Dave
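For reference, the raw-cursor equivalent of that summary query would be something like this sketch (table and column names are just the ones from my example above):

    from django.db import connection

    def account_summary(day):
        # Sketch of the raw-SQL fallback, since the ORM can't express
        # grouped aggregates today.
        cursor = connection.cursor()
        cursor.execute("""
            select account, count(*), sum(quantity), sum(total_pnl)
            from position
            where date = %s
            group by account
        """, [day])
        return [{'account': account, 'count': n,
                 'quantity': qty, 'total_pnl': pnl}
                for account, n, qty, pnl in cursor.fetchall()]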
Re: Suggestion: Aggregate/Grouping/Calculated methods in Django ORM
Russell Keith-Magee wrote:
> annotate() returns a query set, so it can be used multiple times, be
> combined with filters, etc. The argument handling strategy employed in
> filter() is reused here; kwargs to annotate() can be decomposed on a
> __ boundary to describe table joins, with the last part describing the
> aggregate operator to be used. The syntax follows something like:
>
>     Model.objects.annotate(field1__field2__aggregate='annotation')

[snip]

> e.g.,
> # Get order 1234, and annotate it a few different ways
> >>> order = Order.objects.get(id=1234).annotate(
> ...     books__price__sum='total_cost',
> ...     books__count='item_count')
> # Inspect the order
> >>> order.description
> "Dad's birthday"
> # Annotated orders have a 'total_cost' attribute...
> >>> order.total_cost
> 47.2
> # ... and an 'item_count' attribute
> >>> order.item_count
> 3

I like making the aggregate function a part of a keyword argument. It seems consistent with the Django DB API and offers a lot of flexibility - better, in my opinion, than individual functions for each aggregator. The 'annotate' name is a little indirect, though. Maybe something like 'calc_fields'?

> 3. Just the facts, Ma'am
>
> Ok; so what if you just want _the minimum_, or _the average_? For
> this, I propose an aggregates() queryset modifier.
>
> >>> Book.objects.aggregates(price__min='min_price',
> ...     pub_date__max='last_update')
> {'min_price': 0.5, 'last_update': 2006-11-22}
>
> aggregates() would expand queries in the same manner as annotate(),
> but would be a terminal clause, just like the values() clause.
>
> This is a more verbose notation than the simple 'max()/min()'. I have
> discussed my problems with these operators previously; however, if
> there is sufficient demand, I don't see any reason that min('field')
> couldn't be included in the API as a shorthand for
> Model.objects.aggregates(field__min='min')['min'].

This seems good, too, but maybe call it 'calc_values' or something with the 'values' name in it to be consistent with the existing values() method. The shortcut is nice but I could live without it.

> 4. Comparisons
> ~~
> There is one additional requirement I can see; to perform queries like
> (c), you need to be able to compare annotation attributes to object
> attributes.
>
> # Annotate a query set with the average price of books
> >>> qs = Book.objects.annotate(price__average='avg_price')
> # Filter all books with obj.avg_price < obj.price
> >>> expensive_books = qs.filter(avg_price__lt=F('price'))
>
> The F() object is a placeholder to let the query language know that
> 'price' isn't just a string, it's the name of a field. This follows
> the example of Q() objects providing query wrappers.

To make it more like Q(), would it be better to do F(avg_price__lt='price') so you could combine them with | and &?
Re: Suggestion: Aggregate/Grouping/Calculated methods in Django ORM
John Lenton wrote:
> I hadn't even considered having a multi-parameter tuple-returning
> "sum"; I was ok with either calling groupby thrice, or saving the
> groupby and calling the different ops in sequence. In either case, a
> database roundtrip per call.

I'm often grouping thousands of rows for my cases, so doing multiple round trips per field would be painful.

> I had thought that queryset.groupby should behave in the same way
> itertools.groupby would behave, i.e. that there would only be
> implementation (and performance) differences between
>
>     queryset = Position.objects.filter(date=today)
>     account_summary = itertools.groupby(queryset, operator.attrgetter('account'))
>
> and
>
>     account_summary = Position.objects.filter(date=today).groupby('account')

I'm confused. Where do you specify the aggregate functions for aggregating the specific columns (sum, avg, etc.)? Or am I misunderstanding what this does?

> in view of the above: no. To do the above, you'd do this instead:
>
>     for account, positions in account_summary:
>         print account.name

I see. Works for me.

> > And how would I dereference the aggregate fields in the groupby
> > results? By index? (is account_summary[0][2] the quantity sum of the
> > first account summary row?)
>
> positions would be the (lazy) iterator for that purpose, already set
> up for you (by this I mean, I don't expect it to be a performance
> gain, just a convenience).

But would there be aliases for these aggregate expressions? That is, the equivalent of:

    select sum(quantity) as tot_quantity ...

> > My idea was a queryset.groupby() could return some sort of dynamic
> > Django model class where the attributes were the aggregated fields
> > plus the fields you were grouping by and if you were grouping by a
> > relation field, it would magically work like any other model relation.
>
> that sounds way too magic for my taste :)

True.
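To make the multiple-aggregate concern concrete, the pure-Python version of my earlier example would be roughly this (a sketch only; Position, account, quantity and total_pnl are from that example, and groupby needs the queryset ordered on the group key). It works, but it pulls every row back from the database just to produce a handful of totals:

    import itertools, operator
    from datetime import date

    positions = Position.objects.filter(date=date.today()).order_by('account')
    for account, rows in itertools.groupby(positions,
                                           operator.attrgetter('account')):
        rows = list(rows)
        print account.name, len(rows), \
            sum(p.quantity for p in rows), \
            sum(p.total_pnl for p in rows)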
Bug in django/contrib/admin/templatetags/admin_list.py
I posted this to django-users a few days ago but no one commented and I didn't see any mention of it here, so I thought I'd repost to the dev group. I'm using M-R and just updated to 2721 and still didn't see a fix.

If you use a FloatField in the admin list_display, you get an error rendering the template from line 160 in admin_list.py (TypeError, float argument required):

    158.        elif isinstance(f, models.FloatField):
    159.            if field_val is not None:
    160.                result_repr = ('%%.%sf' % f.decimal_places) % field_val
    161.            else:
    162.                result_repr = EMPTY_CHANGELIST_VALUE

This is because FloatFields are stored as strings, so you can't use one directly in a %f format. I think the line should be:

    160.                result_repr = ('%%.%df' % f.decimal_places) % float(field_val)

But I'm not sure if that should be float() or Decimal().

Thanks,
-Dave
Re: ticket #347 mysql engine used for tables
Maybe I don't understand the implications, but I have been using ENGINE=MyISAM on my tables so I can use MySQL's full-text indexing (not supported by InnoDB). Does this thread imply that I can no longer use MyISAM tables with MySQL? Or just that if I do, I must tweak the output of manage.py (which I already need to do to add the engine and fulltext clauses)?

Thanks,
-Dave