I checked docsPending. I get the following:
commits : 0
autocommit maxDocs : 1
autocommit maxTime : 1000ms
autocommits : 0
optimizes : 0
docsPending : 0
deletesPending : 0
adds : 0
deletesById : 0
deletesByQuery : 0
Most surprising is that once I add a document, I see numDocs and maxDoc increasing.
But I
Dave
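For reference, those autocommit numbers come from the update handler settings in solrconfig.xml; a minimal sketch matching the maxDocs=1 / maxTime=1000ms values reported above:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after every added document -->
    <maxDocs>1</maxDocs>
    <!-- or after 1000 ms, whichever comes first -->
    <maxTime>1000</maxTime>
  </autoCommit>
</updateHandler>
```

With maxDocs=1, each add would be committed almost immediately, which could explain docsPending staying at 0 while numDocs grows.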
You may want to break large docs into chunks, say by chapter or another
logical segment. This will help with:
- relevance ranking - the term frequencies of large docs will cause
uneven weighting unless the relevance calculation does log normalization
- finer granularity of retrieval - for exa
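The chunking advice above can be sketched as follows; this is a hypothetical example that assumes plain-text documents with "Chapter N" headings, so adapt the split pattern to whatever logical segments your docs actually have:

```python
import re

def chunk_by_chapter(text):
    """Split a large document into chunks at 'Chapter N' headings
    (assumed format), keeping each heading with the text that follows it."""
    # Zero-width split: break immediately before each heading line.
    parts = re.split(r'(?m)^(?=Chapter \d+)', text)
    return [p.strip() for p in parts if p.strip()]

book = "Chapter 1\nIt was a dark night.\nChapter 2\nThen it got darker."
chunks = chunk_by_chapter(book)
```

Each chunk can then be indexed as its own Solr document, giving both more even term-frequency weighting and finer-grained hits.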
Not sure what's wrong, but I'll share some of the debugging I did when
getting my implementation to work these past 2 weeks:
1) Change schema.xml to suit your needs. I basically just changed the
fields to ones I needed and didn't touch the fieldtypes at first.
2) Stop SOLR, delete the index, and
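Step 2's index wipe can be sketched like this; the paths are assumptions about a typical example layout, so adjust SOLR_HOME to your install, and stop Solr before running it:

```python
import os
import shutil

# Assumed default location; override with the SOLR_HOME environment variable.
solr_home = os.environ.get("SOLR_HOME", "/opt/solr/example/solr")
index_dir = os.path.join(solr_home, "data", "index")

# Remove the old index so the edited schema.xml takes effect;
# Solr recreates a fresh, empty index on restart.
shutil.rmtree(index_dir, ignore_errors=True)
```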
Will the hardlink snapshot scheme work across physical disk
partitions? Can I snapshot to a different partition than the one
holding the live Solr index?
Hi Andrew,
I thought the same thing. Any feedback from your question?
-- Kim
--
View this message in context:
http://www.nabble.com/Apache-web-server-logs-in-solr-tp12280450p15660102.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks. I'm trying to build a general-purpose secure enterprise search system.
Specifically, it needs to be able to crawl web pages (which are almost all
small files) and filesystems (which may have widely varying file sizes). I
realize other projects exist that have done something similar, but none take int