Along this line of discussion, database backend integration is on the
TODO list; I wonder whether there has been any discussion or exchange
about its implementation.
A real database backend might help solve some of these scalability issues.
_F
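To make the idea concrete, here is a minimal sketch of what a database-backed list store could look like. The schema and function names are hypothetical, not anything Mailman actually ships; the point is only that a single indexed table sidesteps the per-list-directory problem entirely, since lookups go through a B-tree index rather than a directory scan.

```python
# Hypothetical sketch: per-list configuration in SQLite instead of one
# directory per list. Schema and names are illustrative only.
import sqlite3

def open_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS list_config (
               listname TEXT PRIMARY KEY,  -- indexed lookup, no directory scan
               owner    TEXT,
               settings TEXT               -- e.g. a JSON config blob
           )"""
    )
    return conn

def save_list(conn, name, owner, settings):
    conn.execute(
        "INSERT OR REPLACE INTO list_config VALUES (?, ?, ?)",
        (name, owner, settings),
    )

def load_list(conn, name):
    # Returns (owner, settings) or None if the list does not exist.
    return conn.execute(
        "SELECT owner, settings FROM list_config WHERE listname = ?",
        (name,),
    ).fetchone()

conn = open_store()
for i in range(15000):
    save_list(conn, "list-%05d" % i, "owner@example.com", "{}")
print(load_list(conn, "list-07500"))
```

With 15,000 rows this stays a single-file store and a single indexed lookup per list, regardless of how many lists exist.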
Brad Knowles wrote:
At 3:07 PM +1000 2005-07-28, Iain Pople wrote:
> We are using Solaris UFS.
If switching filesystems is an option, you might want to take a
look at Veritas VxFS. It's a commercial replacement that should be
more than enough for your needs. I've had experience with SGI XFS
(on real SG[...]
At 12:55 AM -0400 2005-07-28, [EMAIL PROTECTED] wrote:
> While I can't address your question directly, I can say that this
> isn't an issue. Just use ReiserFS or set your ext2/3 filesystems to
> use dir_index. Either one will give you hashed directory lookups and
> resolve this performance issue.
We are using Solaris UFS.
On 28/07/2005, at 2:55 PM, [EMAIL PROTECTED] wrote:
> On Thu, Jul 28, 2005 at 01:55:36PM +1000, Iain Pople wrote:
>
>> The concern is that on a unix filesystem it is very slow to access
>> directories with thousands of entries.
>>
>
> While I can't address your question directly [...]
On Thu, Jul 28, 2005 at 01:55:36PM +1000, Iain Pople wrote:
> The concern is that on a unix filesystem it is very slow to access
> directories with thousands of entries.
While I can't address your question directly, I can say that this
isn't an issue. Just use ReiserFS or set your ext2/3 filesystems to
use dir_index. Either one will give you hashed directory lookups and
resolve this performance issue.
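For reference, turning on dir_index for an existing ext2/3 filesystem is a two-step operation with the standard e2fsprogs tools. This is a sketch only: the device name is a placeholder, the commands need root, and e2fsck must be run on an unmounted (or read-only) filesystem.

```shell
# Enable hashed b-tree directory indexes on an ext2/3 filesystem.
# /dev/sdXN is a placeholder for your actual device.
tune2fs -O dir_index /dev/sdXN

# Rebuild indexes for directories that already exist; -D optimizes
# directories, -f forces a check. Unmount the filesystem first.
e2fsck -fD /dev/sdXN

# Confirm the feature is listed in the superblock:
dumpe2fs -h /dev/sdXN | grep -i features
```

New directories get hashed indexes automatically once the feature is set; the e2fsck -D pass is only needed to index directories created before the change. None of this helps on Solaris UFS, of course, which is the poster's actual constraint.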
Hi,
I need to set up a system with around 15,000 lists. Have other people
been using Mailman for such a large number of lists? One thing that
concerns me is that there doesn't seem to be any support for a hashed
directory structure for storing the lists' config and archives under:
$VAR_PR[...]
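Absent filesystem-level hashing, the same effect can be had at the application layer by bucketing list directories under short hash prefixes, so no single directory ever holds all 15,000 entries. This is a hypothetical sketch of such a layout (Mailman does not actually do this, and the path prefix is illustrative):

```python
# Hypothetical two-level hashed layout for per-list directories,
# e.g. /var/mailman/lists/ab/cd/mylist instead of /var/mailman/lists/mylist.
import hashlib
import os

def hashed_list_path(prefix, listname, fanout=2):
    # Derive stable bucket names from a hash of the list name, so the
    # same list always maps to the same subdirectory.
    digest = hashlib.md5(listname.encode("utf-8")).hexdigest()
    buckets = [digest[i * 2:i * 2 + 2] for i in range(fanout)]
    return os.path.join(prefix, "lists", *buckets, listname)

print(hashed_list_path("/var/mailman", "mailman-users"))
```

With two levels of 256 buckets each, 15,000 lists average well under one list per leaf directory; even a single level keeps each directory to roughly 60 entries.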