Roberto,

Here is some food for thought:

- With multiple smaller indices you can split them across several servers; you can't do that with a monolithic index.
- With multiple smaller indices you can choose to search only a subset of them, should that make sense for your app.
- How much does it cost to have one server with the large amount of RAM that serving a monolithic index requires? It may be cheaper to have multiple smaller machines.
- How long does it take to rebuild one big index, should it get corrupted, vs. rebuilding only a subset of your data?
- How long does it take to copy the whole index over the network after you optimize it, vs. copying only a subset, or multiple subsets in parallel?

Etc.

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message ----
> From: Roberto Nieto <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Saturday, June 14, 2008 7:31:28 AM
> Subject: doubt with an index of 300gb
>
> Hi users,
>
> I'm going to create a big index of 300 GB on a SAN where I have 4 TB. I have read
> many entries on the mailing list about using multiple indices with
> multicore. I would like to know what kind of benefit I can get from
> using multiple indices instead of one big index, given that disk space is
> not a problem. I know that optimizes and commits would be faster with
> smaller indices, but what about search? Would the RAM use be the same with 10
> indices of 30 GB as with 1 index of 300 GB? Any suggestion or experience
> will be very useful to me.
>
> Thanks in advance.
>
> Rober.
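P.S. To make the "split across several servers" point concrete, here is a
rough sketch of a distributed query fanning out over two shards via Solr's
"shards" request parameter. The hostnames server1/server2 and the query are
made up; adjust them to your own setup.

    # Send a query to one Solr node and have it merge results from two shards.
    # server1/server2 are hypothetical hosts, each serving a slice of the data.
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "q": "title:foo",
        "shards": "server1:8983/solr,server2:8983/solr",
    })
    response = urllib.request.urlopen("http://server1:8983/solr/select?" + params)
    print(response.read())

Each shard should hold a disjoint slice of the documents; the node that
receives the request queries all the shards and merges the partial results.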
Multiple smaller indices you can split them across several servers, but you can't do that with a monolithic index. With multiple smaller indices you can choose to search only a subset of them, should that make sense for your app. How much does it cost to have 1 server with a LOT of RAM that serving this index will need? Maybe it's cheaper to have multiple smaller machines. How long does it take you to rebuild one big index, should it get corrupted vs. rebuilding only a subset of your data? How long does it take you to copy the index around the network after you optimize it vs. copying only a subset, or multiple subsets in parallel? etc. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch ----- Original Message ---- > From: Roberto Nieto <[EMAIL PROTECTED]> > To: solr-user@lucene.apache.org > Sent: Saturday, June 14, 2008 7:31:28 AM > Subject: doubt with an index of 300gb > > Hi users, > > I´m going to create a big index of 300gb in a SAN where i have 4TB. I read > many entries in the mail list talking about using multiple index with > multicore. I would like to know what kind of benefit can i have > using multiple index instead of one big index if i dont have problems with > the disk? I know that the optimizes and the commits would be faster with > smaller indexs, but in search? The RAM use would be the same using 10 > indexes of 30gb than using 1 index of 300gb? Any suggestion or experience > will be very usefull for me. > > Thanks in advance. > > Rober.