Merry Christmas and happy new year

2007-12-24 Thread climbingrose
Good day all Solr users & developers,

May I wish you and your families a merry Xmas and a happy new year. I hope
the new year brings you all health, wealth and peace. It's been my pleasure
to be on this mailing list and to work with Solr. Thank you all!

-- 
Cheers,

Cuong Hoang


SOLR ON TOMCAT SERVER

2007-12-24 Thread NaveenKumar.Dhayalan

Hi all,

I have a problem deploying Solr on Tomcat. I followed the steps at
http://wiki.apache.org/solr/SolrTomcat#head-7036378fa48b79c0797cc8230a8aa0965412fb2e

 

Can anyone please help me?

My configuration:

Tomcat 5.5.16
apache-solr-nightly (19/12/2007)

Error:

INFO: Using JNDI solr.home: C:/Solrweb/solr
Dec 24, 2007 3:44:05 PM org.apache.solr.servlet.SolrDispatchFilter init
INFO: looking for multicore.xml: C:\Solrweb\solr\multicore.xml
Dec 24, 2007 3:44:05 PM org.apache.solr.servlet.SolrDispatchFilter init
SEVERE: Could not start SOLR. Check solr/home property
java.lang.ExceptionInInitializerError
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:89)
    at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:223)
    at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:304)
    at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:77)
    at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3600)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4189)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:759)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:739)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:524)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:608)
    at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:535)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:470)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1112)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:310)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1021)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1013)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
    at org.apache.catalina.core.StandardService.start(StandardService.java:450)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:709)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:551)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:275)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
Caused by: java.lang.RuntimeException: XPathFactory#newInstance() failed to create an XPathFactory for the default object model: http://java.sun.com/jaxp/xpath/dom with the XPathFactoryConfigurationException: javax.xml.xpath.XPathFactoryConfigurationException: No XPathFactory implementation found for the object model: http://java.sun.com/jaxp/xpath/dom
    at javax.xml.xpath.XPathFactory.newInstance(Unknown Source)
    at org.apache.solr.core.Config.<init>(Config.java:41)
    ... 28 more
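For reference, the failing call in the trace is the XPathFactory.newInstance()
lookup inside Solr's Config constructor (Config.java:41 above). A small
standalone check like the sketch below (the class name is just an illustrative
placeholder, not part of Solr) performs the same lookup and prints the JVM
version, which can help confirm whether the JVM Tomcat is actually running on
provides a JAXP 1.3 XPath implementation (Java 5 and later do out of the box):

import javax.xml.xpath.XPathFactory;

// Minimal sketch: runs the same default XPathFactory lookup that Solr's
// Config constructor performs at startup.
public class XPathFactoryCheck {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        // Fails with a RuntimeException like the one above if no XPathFactory
        // implementation for the default (DOM) object model can be found.
        XPathFactory factory = XPathFactory.newInstance();
        System.out.println("XPathFactory OK: " + factory.getClass().getName());
    }
}

Running this with the same JAVA_HOME and classpath that Tomcat uses should show
whether the problem lies with the JVM/classpath rather than the solr/home setting.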

 

Thanks,
Naveen




An observation on the "Too Many Files Open" problem

2007-12-24 Thread Mark Baird
Running our Solr server (the latest Solr 1.2 release) on a Linux machine, we ran
into the "Too Many Open Files" issue quite a bit.  We've since changed the
ulimit max file handle setting, as well as the Solr mergeFactor setting, and
haven't run into the problem since.  However, we are seeing some behavior from
Solr that seems a little odd to me.  When we are in the middle of our batch
index process and run the lsof command, we see a lot of open file handles
hanging around that reference Solr index files that have been deleted by Solr
and no longer exist on the system.

The documents we are indexing are potentially very large, so due to various
memory constraints we only send 300 docs to Solr at a time, with a commit
between each set of 300 documents.  Now, one of the things I read that can
cause old file handles to hang around is an old IndexReader still open and
pointing to those old files.  However, whenever you issue a commit to the
server, it is supposed to close the old IndexReader and open a new one.

So my question is, when the Reader is being closed due to a commit, what
exactly is happening?  Is it just being set to null and a new instance being
created?  I'm thinking the reader may be sitting around in memory for a
while before the garbage collector finally gets to it, and in that time it
is still holding those files open.  Perhaps an explicit method call that
closes any open file handles should occur before setting the reference to
null?

After looking at the code, it looks like reader.close() is explicitly being
called as long as the closeReader property in SolrIndexSearcher is set to
true, but I'm not sure how to check if that is always getting set to true or
not.  There is one constructor of SolrIndexSearcher that sets it to false.

Any insight here would be appreciated.  Are stale file handles something I
should just expect from the JVM?  I've never run into the "Too Many Files
Open" exception before, so this is my first time looking at the lsof
command.  Perhaps I'm reading too much into the data it's showing me.


Mark Baird


Re: An observation on the "Too Many Files Open" problem

2007-12-24 Thread Yonik Seeley
On Dec 24, 2007 1:25 PM, Mark Baird <[EMAIL PROTECTED]> wrote:
> So my question is, when the Reader is being closed due to a commit, what
> exactly is happening?  Is it just being set to null and a new instance being
> created?

No, we reference count the SolrIndexSearcher so that it is closed
immediately after the last request is finished using it.  We do that
specifically to avoid running out of file descriptors.
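Roughly, the idea looks like the sketch below (class and method names are
illustrative only, not Solr's actual SolrIndexSearcher internals): every
request that uses the searcher bumps a counter, and the underlying reader is
closed the moment the last holder releases it, rather than waiting for
garbage collection.

import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative reference-counting wrapper; not Solr's actual implementation.
public class RefCountedSearcher {
    private final Closeable searcher;  // e.g. wraps an IndexReader and its file handles
    // Starts at 1 for the registry's own reference; that reference is dropped
    // when a newer searcher is registered.
    private final AtomicInteger refCount = new AtomicInteger(1);

    public RefCountedSearcher(Closeable searcher) {
        this.searcher = searcher;
    }

    // Called when a request starts using this searcher.
    public void incref() {
        refCount.incrementAndGet();
    }

    // Called when a request finishes (and once by the registry when it swaps
    // in a new searcher); the reader and its file descriptors are closed as
    // soon as the count reaches zero.
    public void decref() throws IOException {
        if (refCount.decrementAndGet() == 0) {
            searcher.close();
        }
    }
}

With that scheme, deleted index files should disappear from lsof as soon as
the last request against the old searcher completes.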

> After looking at the code, it looks like reader.close() is explicitly being
> called as long as the closeReader property in SolrIndexSearcher is set to
> true, but I'm not sure how to check if that is always getting set to true or
> not.  There is one constructor of SolrIndexSearcher that sets it to false.

I just verified that the constructor that sets it to false (because it
is passed a reader) is not used anywhere in Solr.

> Any insight here would be appreciated.  Are stale file handles something I
> should just expect from the JVM?

I wouldn't think so.
I don't currently have a UNIX system handy to check... could you
perhaps try the latest development version and see if it has the same
behavior?

-Yonik


Re: An observation on the "Too Many Files Open" problem

2007-12-24 Thread Otis Gospodnetic
Mark,

Another question to ask is: do you *really* need to be calling commit every 300 
docs?  Unless you really need searchers to see your 300 new docs, you don't 
need to commit.  Just optimize + commit at the end of your whole batch.  
Lowering the mergeFactor is the right thing to do.  Out of curiosity, are you 
using a single instance of Solr for both indexing and searching?
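As a rough sketch of that pattern against Solr's XML update interface (the
URL and class name below are just examples for illustration, adjust them to
your deployment), the per-batch <commit/> goes away and a single optimize +
commit is sent once the whole batch is done:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: finish a batch load with one optimize + commit instead of a
// commit after every 300 documents.
public class BatchFinish {
    static void post(String xml) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8983/solr/update").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        OutputStream out = conn.getOutputStream();
        try {
            out.write(xml.getBytes("UTF-8"));
        } finally {
            out.close();
        }
        System.out.println(xml + " -> HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        // ... send each <add> batch of 300 docs here, with no <commit/> in between ...
        post("<optimize/>");
        post("<commit/>");
    }
}

Fewer commits also means fewer searcher reopens, which keeps the number of
open index files down during the load.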

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
