Re: 2GB limit on 32 bits
On Fri, 9 Nov 2007 09:03:01 -0300 "Isart Montane" <[EMAIL PROTECTED]> wrote:

> I've read there's a kernel limitation for a 32-bit architecture of 2GB per
> process, and I just want to know if anybody knows an alternative to getting
> a new 64-bit server.

You don't say what CPU you have. But the 32-bit limit is real (it's an architecture issue, not a kernel limitation). You could try running several servers on different ports, each managing part of your index and each using up to 2 GB of RAM - but you may push your CPU / disks too hard and hit other issues - try it and see how it goes.

If I were you, I'd seriously look into getting a new (64-bit) server.

B
_
{Beto|Norberto|Numard} Meijome

"Too bad ignorance isn't painful." Don Lindsay

I speak for myself, not my employer. Contents may be hot. Slippery when wet. Reading disclaimers makes you go blind. Writing them is worse. You have been Warned.
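To make the several-servers suggestion concrete: with each Solr instance holding part of the index, the client has to query every instance and merge the results itself. The sketch below is a minimal Python 3 illustration under assumptions not in the thread - two instances on made-up ports, the python response writer (wt=python) enabled in solrconfig.xml, and a naive merge that simply re-sorts by score (which ignores that IDF differs between the partial indexes).

    import urllib.request, urllib.parse

    # Hypothetical instances, one per slice of the index.
    SHARDS = ["http://localhost:8983/solr", "http://localhost:7574/solr"]

    def search(base, q, rows=10):
        params = urllib.parse.urlencode(
            {"q": q, "rows": rows, "fl": "id,score", "wt": "python"})
        raw = urllib.request.urlopen(base + "/select?" + params).read()
        # The python writer emits a dict literal; eval is fine for a local sketch.
        return eval(raw.decode("utf-8"))

    def merged_search(q, rows=10):
        docs = []
        for base in SHARDS:
            docs.extend(search(base, q, rows)["response"]["docs"])
        # Naive merge: re-sort by score across instances and keep the top N.
        docs.sort(key=lambda d: d["score"], reverse=True)
        return docs[:rows]

    for doc in merged_search("solr"):
        print(doc["id"], doc["score"])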
2GB limit on 32 bits
Hi all,

I'm experiencing some trouble when I try to launch Solr with more than 1.6GB of heap. My server is a FC5 box with 8GB of RAM, but when I start Solr like this:

java -Xmx2000m -jar start.jar

I get the following errors:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

I've also tried to start a plain virtual machine like this:

java -Xmx2000m -version

but I get the same errors. I've read there's a kernel limitation for a 32-bit architecture of 2GB per process, and I just want to know if anybody knows an alternative to getting a new 64-bit server.

Thanks
Isart
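Since `java -Xmx2000m -version` already reproduces the failure without Solr, one quick way to find the actual ceiling is to bisect the largest -Xmx the JVM will accept. A small Python 3 sketch; the 256-4096 MB search window is just an assumption:

    import subprocess

    def jvm_starts(mb):
        """True if `java -Xmx<mb>m -version` manages to reserve its heap."""
        result = subprocess.run(["java", "-Xmx%dm" % mb, "-version"],
                                capture_output=True, text=True)
        return result.returncode == 0

    lo, hi = 256, 4096          # assumed search window, in MB
    while hi - lo > 16:
        mid = (lo + hi) // 2
        if jvm_starts(mid):
            lo = mid
        else:
            hi = mid
    print("largest working -Xmx is roughly %d MB" % lo)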
Re: 2GB limit on 32 bits
I've got a dual Xeon; here is my cpuinfo. I've read the limit on a 2.6 Linux kernel is 4GB of user space and 4GB for the kernel... that's why I asked if there's any way to reach 4GB per process.

Thanks anyway :(

cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5110 @ 1.60GHz
stepping : 6
cpu MHz : 1596.192
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3194.21

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5110 @ 1.60GHz
stepping : 6
cpu MHz : 1596.192
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3192.09

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5110 @ 1.60GHz
stepping : 6
cpu MHz : 1596.192
cache size : 4096 KB
physical id : 3
siblings : 2
core id : 0
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3192.13

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5110 @ 1.60GHz
stepping : 6
cpu MHz : 1596.192
cache size : 4096 KB
physical id : 3
siblings : 2
core id : 1
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 3192.12

On Nov 9, 2007 9:26 AM, Norberto Meijome <[EMAIL PROTECTED]> wrote:

> You don't say what CPU you have. But the 32-bit limit is real (it's an
> architecture issue, not a kernel limitation). You could try running several
> servers on different ports, each managing part of your index - but you may
> be pushing your CPU / disks too much and hit other issues.
>
> If I were you, I'd seriously look into getting a new (64-bit) server.
Re: 2GB limit on 32 bits
Hi Norberto,

I've tried a simple C app that mallocs 2GB and it doesn't work (the same app with 1.5GB works), so it seems to be a kernel problem.

The server is FC5 with this uname -a:

Linux X 2.6.18-1.2239.fc5smp #1 SMP Fri Nov 10 13:22:44 EST 2006 i686 i686 i386 GNU/Linux

Any ideas how to reach the 4GB?

On Nov 9, 2007 10:41 AM, Norberto Meijome <[EMAIL PROTECTED]> wrote:

> ok - I'm obviously too tired - 32 bit should allow you up to 4 GB / proc.
> If the kernel doesn't allow you more than that, that's an issue with your
> kernel.
>
> You need to know first why you can't reach over 2 GB - Java limit, OS limit?
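For reference, here is a rough Python 3 analogue of that C malloc test (Linux only, via ctypes). Note that malloc can succeed on an overcommitting kernel without the memory being usable, so this really probes how much contiguous address space a single process can reserve:

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.malloc.restype = ctypes.c_void_p
    libc.malloc.argtypes = [ctypes.c_size_t]
    libc.free.argtypes = [ctypes.c_void_p]

    def can_reserve(n_bytes):
        # Try to grab one contiguous block, then give it straight back.
        p = libc.malloc(n_bytes)
        if p:
            libc.free(p)
            return True
        return False

    for gb in (1.0, 1.5, 2.0, 2.5, 3.0):
        n = int(gb * 1024 ** 3)
        print("%.1f GB contiguous malloc: %s" % (gb, "ok" if can_reserve(n) else "FAILED"))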
Re: 2GB limit on 32 bits
On Fri, 9 Nov 2007 10:30:16 -0300 "Isart Montane" <[EMAIL PROTECTED]> wrote:

> I've got a dual Xeon; here is my cpuinfo. I've read the limit on a 2.6
> Linux kernel is 4GB of user space and 4GB for the kernel... that's why I
> asked if there's any way to reach 4GB per process.

ok - I'm obviously too tired - 32 bit should allow you up to 4 GB / proc. If the kernel doesn't allow you more than that, that's an issue with your kernel.

You need to know first why you can't reach over 2 GB - Java limit, OS limit?

I'll sit in a corner very quietly now

_
{Beto|Norberto|Numard} Meijome

"People demand freedom of speech to make up for the freedom of thought which they avoid." Soren Aabye Kierkegaard

I speak for myself, not my employer. Contents may be hot. Slippery when wet. Reading disclaimers makes you go blind. Writing them is worse. You have been Warned.
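One concrete way to answer the "Java limit or OS limit?" question is to look at the per-process resource limits the shell hands out (the equivalent of `ulimit -v` / `ulimit -d`). A minimal Python sketch using the standard resource module; which of these limits actually matters here is an assumption, not something established in the thread:

    import resource

    def fmt(value):
        if value == resource.RLIM_INFINITY:
            return "unlimited"
        return "%d MB" % (value // (1024 * 1024))

    checks = [("RLIMIT_AS (virtual address space)", resource.RLIMIT_AS),
              ("RLIMIT_DATA (data segment)", resource.RLIMIT_DATA),
              ("RLIMIT_STACK (stack)", resource.RLIMIT_STACK)]

    for label, res in checks:
        soft, hard = resource.getrlimit(res)
        print("%-36s soft=%s hard=%s" % (label, fmt(soft), fmt(hard)))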
Re: 2GB limit on 32 bits
More info. The kernel is compiled with HIGHMEM64 and PAE.

On Nov 9, 2007 11:05 AM, Isart Montane <[EMAIL PROTECTED]> wrote:

> I've tried a simple C app that mallocs 2GB and it doesn't work (the same
> app with 1.5GB works), so it seems to be a kernel problem.
>
> Any ideas how to reach the 4GB?
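A HIGHMEM64/PAE kernel addresses the total-RAM question, not the per-process one. A quick sanity check that the kernel really sees all 8GB, and that the CPU advertises PAE, can be done from /proc - a small Python sketch; nothing here raises the per-process ceiling:

    # Does the kernel see all 8GB?
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal"):
                kb = int(line.split()[1])
                print("kernel sees %.1f GB of RAM" % (kb / 1024.0 / 1024.0))
                break

    # Does the CPU advertise PAE? (it does in the cpuinfo posted earlier)
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags"))
    print("pae flag present:", " pae " in flags)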
Re: 2GB limit on 32 bits
Some OSs split that 4GB into a 2GB data space and a 2GB instruction space.

To get a 64-bit address space, the CPU, OS, and JVM all need to support 64 bits. There have been 64-bit Xeon chips since 2004, the Linux 2.6 kernel supports 64 bits, and recent JVMs do, too. If your Xeon supports 64 bits, you should be able to get the rest of the stack to do 64 bits. I'm not an expert on configuring that stuff, though.

wunder

On 11/9/07 5:41 AM, "Norberto Meijome" <[EMAIL PROTECTED]> wrote:

> ok - I'm obviously too tired - 32 bit should allow you up to 4 GB / proc.
> If the kernel doesn't allow you more than that, that's an issue with your
> kernel.
>
> You need to know first why you can't reach over 2 GB - Java limit, OS limit?
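A quick way to check all three pieces wunder lists - CPU, OS, and JVM - from the box itself. A Python 3 sketch; the 'lm' (long mode) flag does appear in the cpuinfo posted earlier, while the "64-Bit" string in the java version banner is an assumption about how the particular JVM reports itself:

    import platform, re, subprocess

    # CPU: the 'lm' (long mode) flag means the chip can run 64-bit code.
    with open("/proc/cpuinfo") as f:
        flags = next((line for line in f if line.startswith("flags")), "")
    print("CPU 64-bit capable (lm flag):", bool(re.search(r"\blm\b", flags)))

    # OS: an i686 kernel is 32-bit even on a 64-bit chip; x86_64 means a 64-bit kernel.
    print("kernel architecture:", platform.machine())

    # JVM: `java -version` prints its banner on stderr; 64-bit HotSpot builds
    # usually say "64-Bit Server VM" there.
    banner = subprocess.run(["java", "-version"],
                            capture_output=True, text=True).stderr
    print("JVM reports 64-bit:", "64-Bit" in banner)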
Re: 2GB limit on 32 bits
Isn't the Xeon 5110 64-bit? Maybe you could just put a 64-bit OS in your box.

Also, take a look at http://www.spack.org/wiki/LinuxRamLimits

--
Walter

Isart Montane wrote:

> I've got a dual Xeon; here is my cpuinfo. I've read the limit on a 2.6
> Linux kernel is 4GB of user space and 4GB for the kernel... that's why I
> asked if there's any way to reach 4GB per process.
>
> [cpuinfo snipped]
Re: 2GB limit on 32 bits
On Fri, 9 Nov 2007 11:58:53 -0300 "Isart Montane" <[EMAIL PROTECTED]> wrote:

> More info. The kernel is compiled with HIGHMEM64 and PAE.

Sorry, I haven't dealt with Linux kernel options for years. PAE will give you 36 bits of physical address, but if the kernel is still limiting the user space to 2 GB / proc, there isn't much PAE will do. Check your OS documentation.

And, let me say it one more time - 64-bit platform. :)

B
_
{Beto|Norberto|Numard} Meijome

"All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can't get them together again, there must be a reason. By all means, do not use a hammer." IBM maintenance manual, 1975

I speak for myself, not my employer. Contents may be hot. Slippery when wet. Reading disclaimers makes you go blind. Writing them is worse. You have been Warned.
Re: 2GB limit on 32 bits
OK! I will try reinstalling the OS as 64-bit and I will let you know.

Thanks!

Norberto Meijome wrote:

> Sorry, I haven't dealt with Linux kernel options for years. PAE will give
> you 36 bits of physical address, but if the kernel is still limiting the
> user space to 2 GB / proc, there isn't much PAE will do. Check your OS
> documentation.
>
> And, let me say it one more time - 64-bit platform. :)
Trim filter active for solr.StrField?
I have defined a field in the Solr schema of type "string", which is associated with solr.StrField. As far as I can tell, strings with spaces as prefix or suffix are written to the index correctly, and if I view the contents of the index with the web interface the spaces are still there. But if I use SolrJ and query for documents, the strings are trimmed (whitespace cut at the end and at the front).

Maybe some kind of TrimFilter is active? How can I prevent trimming (via the Solr schema or in the SolrJ API)?

Thanks
Jörg
Delete all docs in a SOLR index?
Sorry for another basic question -- but what is the best, safe way to delete all docs in a SOLR index? I tried a delete without a specific id, and that didn't work, plus I wasn't sure if it was safe -- when I put a real id in it works, but that is too tedious.

I am in my first few days using SOLR and Lucene, am iterating the schema often, starting and stopping with test docs, etc. I'd like to know a very quick way to clean out the index and start over repeatedly -- can't seem to find it on the wiki -- maybe it's Friday :)

Thanks,
Dave
Re: Delete all docs in a SOLR index?
: Sorry for another basic question -- but what is the best, safe way to
: delete all docs in a SOLR index?

I thought this was a FAQ, but it's hidden in another question (rebuilding if the schema changes); I'll pull it out into a top-level question...

<delete><query>*:*</query></delete>

: I am in my first few days using SOLR and Lucene, am iterating the schema
: often, starting and stopping with test docs, etc. I'd like to know a very
: quick way to clean out the index and start over repeatedly -- can't seem
: to find it on the wiki -- maybe it's Friday :)

Huh .. that's actually the FAQ that does talk about deleting all docs :)

"How can I rebuild my index from scratch if I change my schema?"
http://wiki.apache.org/solr/FAQ#head-9aafb5d8dff5308e8ea4fcf4b71f19f029c4bb99

-Hoss
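For completeness, the same delete-everything dance over plain HTTP: post the delete, then a commit so open searchers see the empty index. A Python 3 sketch; the URL matches the http://localhost:8080/solr instance used elsewhere in this digest and should be adjusted to your install:

    import urllib.request

    SOLR_UPDATE = "http://localhost:8080/solr/update"   # adjust host/port/path

    def post_xml(xml):
        req = urllib.request.Request(
            SOLR_UPDATE, data=xml.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"})
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    # Delete everything, then commit so the change becomes visible.
    print(post_xml("<delete><query>*:*</query></delete>"))
    print(post_xml("<commit/>"))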
Re: Delete all docs in a SOLR index?
> I tried a delete without a specific id, and that didn't work

try:

<delete><query>*:*</query></delete>

ryan
Re: Delete all docs in a SOLR index?
Thanks!

----- Original Message -----
From: Ryan McKinley <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Friday, November 9, 2007 1:48:45 PM
Subject: Re: Delete all docs in a SOLR index?

> try:
>
> <delete><query>*:*</query></delete>
>
> ryan
Re: Delete all docs in a SOLR index?
Thanks!

----- Original Message -----
From: Chris Hostetter <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Friday, November 9, 2007 1:51:03 PM
Subject: Re: Delete all docs in a SOLR index?

> I thought this was a FAQ, but it's hidden in another question (rebuilding
> if the schema changes); I'll pull it out into a top-level question...
>
> <delete><query>*:*</query></delete>
>
> "How can I rebuild my index from scratch if I change my schema?"
> http://wiki.apache.org/solr/FAQ#head-9aafb5d8dff5308e8ea4fcf4b71f19f029c4bb99
>
> -Hoss
Re: Trim filter active for solr.StrField?
> the spaces are still there. But if I use SolrJ and query for documents,
> the strings are trimmed (whitespace cut at the end and at the front).
> Maybe some kind of TrimFilter is active? How can I prevent trimming (via
> the Solr schema or in the SolrJ API)?

What is your specific SolrQuery? Calling:

query.setQuery( " stuff with spaces " );

does not call trim(), but some other calls do.

ryan
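One way to narrow this down is to look at what Solr itself returns, bypassing SolrJ: if the leading/trailing spaces are present in the raw response, the trimming is happening on the client side (often in XML parsing), not in the index. A Python 3 sketch; the document id, the field name, and the assumption that the python response writer (wt=python) is enabled are all placeholders:

    import urllib.parse, urllib.request

    base = "http://localhost:8080/solr/select"
    params = urllib.parse.urlencode({"q": "id:42",                 # hypothetical doc
                                     "fl": "id,my_string_field",   # hypothetical field
                                     "wt": "python"})
    raw = urllib.request.urlopen(base + "?" + params).read().decode("utf-8")

    rsp = eval(raw)   # the python writer emits a dict literal; fine for a local check
    doc = rsp["response"]["docs"][0]
    print(repr(doc.get("my_string_field")))   # repr() makes stray spaces visible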
question about batches in new solr.py (SOLR-216)
I'm experimenting with the new solr.py from http://issues.apache.org/jira/browse/SOLR-216 and I think perhaps I'm confused about how batching is supposed to work. I wrote this test script:

import solr
client = solr.SolrConnection('http://localhost:8080/solr')
client.begin_batch()
client.add(id=998)
client.add(id=999)
client.end_batch()
client.commit()
response = client.query(q="id:998 OR id:999", fields=['id'])
assert len(response.results) == 2

and it fails; only the first document is returned. I would expect to get both. The Solr logfiles show no errors, but suggest that only the first add was processed:

Nov 9, 2007 1:07:21 PM org.apache.solr.handler.XmlUpdateRequestHandler update
INFO: added id={998} in 2ms
Nov 9, 2007 1:07:21 PM org.apache.solr.core.SolrCore execute
INFO: /update 0 2

Am I misunderstanding something?

Thanks,
Charlie
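One way to isolate the problem is to run the same two adds without the batch calls; if both documents show up then, the issue is in the begin_batch()/end_batch() path rather than in add() or commit(). A sketch using only the calls already present in the script above:

    import solr

    client = solr.SolrConnection('http://localhost:8080/solr')

    # Same two documents, added one at a time instead of inside a batch.
    client.add(id=998)
    client.add(id=999)
    client.commit()

    response = client.query(q="id:998 OR id:999", fields=['id'])
    print(len(response.results))   # expect 2 if the non-batched path is fine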
RE: Delete all docs in a SOLR index?
A safer way is to stop Solr and remove the index directory. There is less chance of corruption, and it will be faster.

-Lance
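If you go this route, the directory to remove in the stock Jetty example layout is solr/data/index (an assumption - adjust it if solr.home points elsewhere), and Solr must be stopped first; it recreates an empty index on the next start. A tiny Python sketch:

    import os, shutil

    index_dir = os.path.join("solr", "data", "index")   # assumed default layout

    # Only do this with Solr stopped, as suggested above.
    if os.path.isdir(index_dir):
        shutil.rmtree(index_dir)
        print("removed", index_dir)
    else:
        print("no index directory at", index_dir)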
Re: Delete all docs in a SOLR index?
On 9-Nov-07, at 3:42 PM, Norskog, Lance wrote:

> A safer way is to stop Solr and remove the index directory. There is less
> chance of corruption, and it will be faster.

In trunk, it should be quicker and safer than stopping/restarting. Also, to clarify the 'corruption' issue, this should only be possible in the event of cold process termination (like power loss).

-Mike