FYI: The cache.log, store.log, and HDD swap.state journals produced by Squid are system-specific and cannot be sent to any remote logging system. Logrotate is still the best tool in Debian (as far as I am aware, anyhow) for managing the needs of those log files.
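For reference, a minimal logrotate sketch along those lines. This is an illustrative example, not the file shipped in the Debian package; the paths and rotation options are assumptions based on a typical Debian layout:

```
# /etc/logrotate.d/squid -- hypothetical example, not the packaged file
/var/log/squid/*.log {
    daily
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Ask Squid to close and reopen its log files after rotation
        test -x /usr/sbin/squid && /usr/sbin/squid -k rotate
    endscript
}
```

The important part is the "squid -k rotate" signal in postrotate, which tells Squid to reopen cache.log and friends so writes go to the fresh files.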
Where and how the access.log records are delivered is entirely optional, and syslog can be used there at the administrator's discretion. However, syslog use is affected by several other major issues:
Firstly; syslog timestamps cannot represent the sub-millisecond times and ranged durations a proxy needs to describe a transaction. This forces the timestamp information to be duplicated in each log entry (syslog time and Squid time).
Secondly; the latency of syslog delivery and recording over the network often results in those "duplicate" timestamps displaying conflicting information. While a second or two may seem small, when one second can contain upwards of 20,000 transactions with order-dependent behaviours this discrepancy can be a major issue in detecting traffic problems.
Thirdly; syslog, being a networked delivery mechanism over UDP, is prone to losing log information at precisely the high-load periods when it is most vital to retain it.
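For context, the relevant squid.conf directives look roughly like this (a sketch; the facility.priority value is illustrative):

```
# Default: write access records to a local file using the "squid" format,
# whose %ts.%03tu timestamp and %tr (response time, in milliseconds)
# fields carry the sub-second detail that syslog's own timestamp lacks.
access_log /var/log/squid/access.log squid

# Alternative: hand records to the local syslog, at admin discretion.
# The Squid-format timestamp is then duplicated alongside syslog's own,
# which is where the conflicting-timestamp problem above comes from.
access_log syslog:daemon.info squid
```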
On high-performance systems, the traffic accounting and records are maintained through a local logging daemon or pipe, which may deliver to a billing system with neither local file nor syslog involvement.
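A sketch of that daemon-based setup (the helper path shown is an assumption based on a typical Debian install; a billing site would substitute its own helper program):

```
# Run log writing through a separate helper process, so Squid itself
# never blocks on disk I/O for logging. The stock helper writes to a
# file, but a custom daemon speaking the same protocol can forward
# records straight to a billing system instead.
logfile_daemon /usr/lib/squid/log_file_daemon
access_log daemon:/var/log/squid/access.log squid
```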
Overall, syslog may be favoured by some admins, but for Squid it is among the worst of the available logging mechanisms. So I do not think it is something we want to encourage for use with the Squid package.
Amos