On Tue, 23.10.12 20:13, Ciprian Dorin Craciun ([email protected]) wrote:
> > In the journal file format, indexing field objects that are only
> > referenced once is practically free, as instead of storing an offset to
> > the bsearch object we use for indexing we just store the offset of the
> > referencing entry in-line.
>
> I guess those offsets are quite cheap, and the in-line entry for
> the "once-only" data is ok. But (from what I understand) every value
> you store has to be searched for in the file before storing (to see
> if it already exists as a value). Thus wouldn't this impact CPU usage?

Looking for pre-existing objects is cheap. It's a hash table, and hence
effectively O(1). The hash table should usually end up cached in memory
quickly. If the hash table gets too full (over a 75% fill level) we
simply rotate the file and start anew. This should keep lookups O(1)
across the whole hash table, as collisions should be the exception. (A
sketch of the idea is appended below.)

> > The other thing is simply that the stuff is really integrated with
> > each other. The journal sources are small because we reuse a lot of
> > internal C APIs of systemd, and the format exposes a lot of things that
> > are specific to systemd; for example, the vocabulary of well-known
> > fields is closely bound to systemd.
>
> I understand this issue with the focus. Nevertheless your journal
> idea sounds nice, and I hope someone will take it and implement it as
> a standalone variant. (I hope in a natively compilable language...)

Why? Why would anybody want to use the journal but not systemd? People
who have issues with the latter are usually not rational about these
things and probably have a more philosophical/religious issue with
systemd, but then they will also have the same issues with the journal,
since it follows the same philosophy and thinking.

Also, note that the journal file access in libsystemd-journal works
fine on non-systemd systems too. People can just split it off if they
want and use it independently of systemd, the same way they already do
with udev. No need to implement anything anew.

> > The network model existing since day one is one where we rely on
> > existing file sharing infrastructure to transfer/collect files. I.e.
> > use NFS, SMB, FTP, WebDAV, SCP, rsync, whatever suits you, make the
> > files available in one spot, and "journalctl -m" will interleave them
> > as necessary.
>
> By "interleave" you mean only "taking note of new files", not
> actually "rewriting the contents".

By interleaving I simply mean interleaving on display, i.e. taking
various files from various sources and presenting them as one
continuous stream, even though they actually come from many sources.
Such sources can be: rotated journals, per-user journal files, journal
files from containers on the local host, journal files from other
hosts, and more. (See the sd-journal sketch at the end of this mail.)

Lennart

--
Lennart Poettering - Red Hat, Inc.
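To illustrate the deduplication lookup described above, here is a minimal,
self-contained C sketch of the idea: hash the field payload, probe a
fixed-size table for an existing offset, and only append new data on a miss,
rotating once the table passes a 75% fill level. The table size, the hash
function and all names (intern_field, etc.) are made up for the example;
this is not the actual journal file code.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define TABLE_SIZE 1024                    /* toy bucket count            */
    #define ROTATE_FILL (TABLE_SIZE * 3 / 4)   /* rotate the file at 75% fill */

    struct bucket {
            uint64_t hash;      /* hash of the field payload        */
            uint64_t offset;    /* offset of the object in the file */
            bool used;
    };

    static struct bucket table[TABLE_SIZE];
    static unsigned fill = 0;
    static uint64_t file_end = 0;              /* pretend append position     */

    /* Toy FNV-1a hash; the real format uses its own hash function. */
    static uint64_t hash_data(const char *data, size_t len) {
            uint64_t h = 0xcbf29ce484222325ULL;
            for (size_t i = 0; i < len; i++) {
                    h ^= (unsigned char) data[i];
                    h *= 0x100000001b3ULL;
            }
            return h;
    }

    /*
     * Return the offset to reference for this payload. A pre-existing
     * object is found in O(1) on average; only a miss appends new data.
     * Returns (uint64_t) -1 once the table is past 75% full, meaning the
     * caller should rotate to a fresh file.
     */
    static uint64_t intern_field(const char *data, size_t len) {
            uint64_t h = hash_data(data, len);

            for (unsigned i = 0; i < TABLE_SIZE; i++) {
                    struct bucket *b = &table[(h + i) % TABLE_SIZE];

                    if (b->used) {
                            if (b->hash == h)     /* real code would also compare the payload */
                                    return b->offset;
                            continue;             /* collision, probe the next bucket */
                    }

                    if (fill >= ROTATE_FILL)
                            return (uint64_t) -1; /* too full, rotate the file */

                    /* First occurrence: append the payload and remember where. */
                    b->hash = h;
                    b->offset = file_end;
                    b->used = true;
                    fill++;
                    file_end += len;
                    return b->offset;
            }

            return (uint64_t) -1;
    }

    int main(void) {
            const char *field = "MESSAGE=hello";

            /* The second call finds the object again instead of storing it twice. */
            printf("first:  %" PRIu64 "\n", intern_field(field, strlen(field)));
            printf("second: %" PRIu64 "\n", intern_field(field, strlen(field)));
            return 0;
    }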
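And to make the interleaving point concrete, a minimal sketch using the
sd-journal API: sd_journal_open_files() takes a NULL-terminated list of
journal file paths and iterating over the result yields a single
time-ordered stream, much like what "journalctl -m" shows. The file paths
below are placeholders for local copies collected via NFS, scp, rsync or
similar.

    #include <stdio.h>
    #include <systemd/sd-journal.h>

    int main(void) {
            /* Hypothetical local copies of journal files from other hosts. */
            const char *paths[] = {
                    "/var/log/journal/remote/host-a.journal",
                    "/var/log/journal/remote/host-b.journal",
                    NULL
            };
            sd_journal *j;
            int r;

            /* Open all files at once; entries will be interleaved by time. */
            r = sd_journal_open_files(&j, paths, 0);
            if (r < 0) {
                    fprintf(stderr, "Failed to open journal files: %d\n", r);
                    return 1;
            }

            /* One continuous stream, regardless of which file an entry came from. */
            SD_JOURNAL_FOREACH(j) {
                    const void *data;
                    size_t length;

                    /* Data comes back as "MESSAGE=..." (field name included). */
                    if (sd_journal_get_data(j, "MESSAGE", &data, &length) >= 0)
                            printf("%.*s\n", (int) length, (const char *) data);
            }

            sd_journal_close(j);
            return 0;
    }

Link against libsystemd (on older releases the library was called
libsystemd-journal), e.g. via pkg-config.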
