On Fri, 14 May 2010 06:49:59 +0200 Jan Jansen <knack...@googlemail.com> wrote:
> Hi there,
>
> I'm working on code to read and write large amounts of binary data
> according to a given specification. The specification defines a lot of
> "segments". The segments in turn contain definitions of datatypes and
> what they represent, how many of certain data values are present in the
> file, and sometimes the offset from the beginning of the file.
>
> Now I wonder what would be a good way to model the code.
>
> Currently I have one class that is the "FileReader". This class holds
> the file object, information about the endianness, and also a method to
> read data (using the struct module). Then I have more classes
> representing the segments. In those classes I define data formats, call
> the read method of the FileReader object, and hold the data. Currently
> I'm passing the FileReader object as an argument.
>
> Here are some examples, first the "FileReader" class:
>
> class JTFile():
>
>     def __init__(self, file_obj):
>         self.file_stream = file_obj
>         self.version_string = ""
>         self.endian_format_prefix = ""
>
>     def read_data(self, fmt, pos=None):
>         format_size = struct.calcsize(fmt)
>         if pos is not None:
>             self.file_stream.seek(pos)
>         return struct.unpack_from(self.endian_format_prefix + fmt,
>                                   self.file_stream.read(format_size))

Since JTFile (as the name says) is mainly a file, you could subtype it
from file, thus avoiding its file_stream attribute and replacing
self.file_stream.read/seek with direct self.read/seek (a rough sketch of
this follows at the end of this mail). Also, this better mirrors the
model, I guess.

> And here is an example of a segment class that uses a FileReader
> instance (file_stream):
>
> class LSGSegment():
>
>     def __init__(self, file_stream):
>         self.file_stream = file_stream
>         self.lsg_root_element = None
>         self._read_lsg_root()
>
>     def _read_lsg_root(self):
>         fmt = "80Bi"
>         raw_data = self.file_stream.read_data(fmt)
>         self.lsg_root_element = LSGRootElement(raw_data[:80], raw_data[80])
>
> So, now I wonder what would be a good pythonic way to model the
> FileReader class. Maybe use global functions to avoid passing the
> FileReader object around? Or something like the "Singleton" I've heard
> about but never used? Or keep it like that?

A singleton object is just a unique instance of a type; the singleton
pattern simply ensures this uniqueness by refusing to create more
instances. The Python way, I guess, is rather a gentleman's agreement,
which in this case means creating a single instance and no more, since
you are the only user of your file-reading service. If this were to be
distributed as a module, then document this uniqueness point, or
implement the singleton pattern.

(But I don't understand why there should be only one file-reader.
Rather, there should be only one per (disk) file ;-) Note that making
the type a subtype of file does not ensure this by itself: Python does
allow two file objects to point at the same disk file, so the
one-reader-per-file rule still rests on convention -- see the second
sketch at the end of this mail.)

> Cheers,
>
> Jan

Denis
________________________________
vit esse estrany ☣
spir.wikidot.com
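
Here is a minimal sketch of the subtype-from-file idea above, assuming
Python 2 (where the built-in file type can be subclassed); the
constructor signature and the "rb" default are assumptions for the
sketch, not Jan's original code:

import struct

class JTFile(file):
    # Python 2: the built-in file type can be subclassed directly,
    # so seek() and read() come for free.

    def __init__(self, name, mode="rb"):
        file.__init__(self, name, mode)
        self.version_string = ""
        self.endian_format_prefix = ""

    def read_data(self, fmt, pos=None):
        # No wrapped file_stream attribute any more: self *is* the file.
        if pos is not None:
            self.seek(pos)
        fmt = self.endian_format_prefix + fmt
        return struct.unpack(fmt, self.read(struct.calcsize(fmt)))

Note that the read size is computed from the prefixed format here, so
switching endian_format_prefix to "<" or ">" cannot make the number of
bytes read drift away from what unpack expects.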
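
And one way to make the one-reader-per-disk-file agreement concrete
without a full singleton implementation: a small module-level registry
keyed on the absolute path. The names (_readers, get_reader) are made up
for this sketch:

import os

_readers = {}   # absolute path -> the single JTFile open for that disk file

def get_reader(path):
    # The gentleman's agreement, spelled out: always obtain readers
    # through get_reader() and you get at most one JTFile per disk file.
    key = os.path.abspath(path)
    if key not in _readers:
        _readers[key] = JTFile(key)   # JTFile as sketched above
    return _readers[key]

Segment classes such as LSGSegment would then receive the reader
returned by get_reader(), exactly as they receive file_stream now.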