Hi everyone, I was hoping someone could advise on whether I'm approaching a particular problem the right way.
Specifically, I'm trying to use a "clean" set of historical data to fix omissions and errors in a stream of newer data. To do so, I've written a series of backend SQL statements that need to be executed in different orders depending on the state of my object's attributes. If my Person object, for example, only has a name and a state, then I need to execute a SQL statement that uses those attributes to pull additional information about the Person. If that search yields a unique ID for my Person object, then I can fire another SQL statement that uses the unique ID to pull more attributes from my historical data. Or, if my Person object happened to be instantiated with a unique ID, then I can skip the name/state search (and various others) and go straight to the ID search. After each query, I need to inspect the state of the attributes: once they've all been corrected, I can stop executing corrective queries and move on to the next Person object.

To implement the above, I started researching generators and graph data structures (e.g. http://www.python.org/doc/essays/graphs.html). So my question: is that the right approach for the use case I've described? I've never worked with graphs or generators before, and I figure I have plenty of reading to do before I could devise a solution that relies on both. I'd be grateful if anyone could weigh in on my approach, or perhaps suggest alternatives.

Regards,
Serdar
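P.S. To make the above a bit more concrete, here is a stripped-down sketch of the kind of dispatch loop I have in mind. It's just a flat list of rules rather than a real graph or generator, the table and column names (history, person_id, dob, etc.) are invented, and SQLite stands in for our actual backend:

import sqlite3

# Hypothetical corrective queries -- the table and column names below
# (history, person_id, dob, ...) are made up for illustration.

def lookup_by_name_state(cur, person):
    """If we only have a name and state, try to recover the unique ID."""
    row = cur.execute(
        "SELECT person_id FROM history WHERE name = ? AND state = ?",
        (person["name"], person["state"]),
    ).fetchone()
    if row:
        person["id"] = row[0]

def lookup_by_id(cur, person):
    """Once we have the unique ID, pull the remaining attributes."""
    row = cur.execute(
        "SELECT name, state, dob FROM history WHERE person_id = ?",
        (person["id"],),
    ).fetchone()
    if row:
        for key, value in zip(("name", "state", "dob"), row):
            if person.get(key) is None:
                person[key] = value

# Each rule: (attributes it needs, attributes it can fill, query to run).
RULES = [
    ({"name", "state"}, {"id"}, lookup_by_name_state),
    ({"id"}, {"name", "state", "dob"}, lookup_by_id),
]

REQUIRED = {"id", "name", "state", "dob"}

def known(person):
    """Attributes that currently hold a real (non-None) value."""
    return {key for key, value in person.items() if value is not None}

def correct(cur, person):
    """Fire whichever corrective query is currently runnable, re-checking
    the attributes after each one, until the Person is complete or no
    untried rule can make further progress."""
    tried = set()
    while not REQUIRED <= known(person):
        for i, (needs, fills, query) in enumerate(RULES):
            if (i not in tried
                    and needs <= known(person)
                    and not fills <= known(person)):
                query(cur, person)
                tried.add(i)
                break
        else:
            break   # nothing left that can run; give up on this Person
    return person

if __name__ == "__main__":
    # Tiny in-memory stand-in for the real historical database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE history (person_id, name, state, dob)")
    conn.execute("INSERT INTO history VALUES (42, 'Jane Doe', 'NY', '1970-01-01')")
    person = {"id": None, "name": "Jane Doe", "state": "NY", "dob": None}
    print(correct(conn.cursor(), person))

The idea is that each rule declares which attributes it needs and which it can fill, so a Person that already carries a unique ID skips the name/state search and goes straight to the ID lookup. Is something along these lines reasonable, or would a proper graph traversal buy me much here?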