Hello,
I would like to know if we can implement Embedded Solr using the Solr
collection distribution?
Regards,
Dilip
-----Original Message-----
From: mike topper [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 22, 2007 8:29 PM
To: solr-user@lucene.apache.org
Subject: almost realtime update
Hello, everybody:-)
I'm interested in the mechanism of data replication in Solr. In the
"Introduction to the Solr Enterprise Search Server", replication is
listed as one of Solr's features, but I can't find anything about
replication issues on the web site or in the documents, including how
to split the index,
Thanks for your reply; my response is below:
On 9/5/07, Mike Klaas <[EMAIL PROTECTED]> wrote:
> On 4-Sep-07, at 4:50 PM, Ravish Bhagdev wrote:
>
> > - I have about 11K html documents to index.
> > - I'm trying to index these documents (along with 3 more small string
> > fields) so that when I search
On Wed, 2007-09-05 at 15:56 +0800, Dong Wang wrote:
> Hello, everybody:-)
> I'm interested in the mechanism of data replication in Solr. In the
> "Introduction to the Solr Enterprise Search Server", replication is
> listed as one of Solr's features, but I can't find anything about
> replication issues on
On Sep 5, 2007, at 3:30 AM, Dilip.TS wrote:
I would like to know if we can implement Embedded Solr using the Solr
collection distribution?
Partly... the rsync method of getting a master index to the slaves
would work, but you'd need a way to tell the slaves to reload their
IndexSearchers.
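A rough sketch of what has to happen on each slave, with the host name,
paths, and snapshot timestamp as placeholders (the snappuller/snapinstaller
scripts that ship in Solr's src/scripts directory do a more robust version
of this):

    # pull only the changed index files from the master's latest snapshot
    rsync -av master-host:/opt/solr/data/snapshot.20070905120000/ /opt/solr/data/index/
    # then make the core reopen its searcher; with embedded Solr there is
    # no HTTP endpoint, so your own code has to trigger the <commit/>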
The front page of the Solr WIki has a small section on replication:
http://wiki.apache.org/solr/
Solr's built-in replication does not split the index. It replicates the
entire index, copying only the files that have changed.
Bill
On 9/5/07, Dong Wang <[EMAIL PROTECTED]> wrote:
>
> Hello, everybo
Hello all,
I will apologize up front if this comes twice.
I've been trying to index a 300MB file to Solr 1.2. I keep getting
out-of-memory heap errors.
Even on an empty index with one gig of VM memory it still won't work.
Is it even possible to get Solr to index such large files?
Do I need to
Hi,
I'm having no luck getting Solr 1.2 to run under Tomcat 5.5 using
context fragments. I've followed the example on wiki:
http://wiki.apache.org/solr/SolrTomcat
The only thing I've changed is the installation method: I'm using the
Tomcat manager to create a context path, and also point to
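For comparison, the context fragment on the SolrTomcat wiki page looks
roughly like this, with both paths as placeholders for your actual
solr.war and Solr home locations:

    <Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String"
                   value="/opt/solr/home" override="true"/>
    </Context>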
OK found the start of the trail... I had a duplicate entry for
fulltext in my schema. Removed that. Now when I first try to deploy
Solr, I get this error:
SEVERE: org.apache.solr.common.SolrException: Error loading class
'solr.IndexInfoRequestHandler'
Matt
On Sep 5, 2007, at 11:25 AM, Ma
On 9/5/07, Brian Carmalt <[EMAIL PROTECTED]> wrote:
> I've been trying to index a 300MB file to Solr 1.2. I keep getting
> out-of-memory heap errors.
300MB of what... a single 300MB document? Or does that file represent
multiple documents in XML or CSV format?
-Yonik
Hi All,
Now I am facing a problem with case-sensitive text. I am indexing a
lower-case word, but when I give the same word in upper case for a
search, it does not get found.
Example: indexed word: "corent"
Search word: "CORENT".
If I search for "CORENT" it retrieve
Thank you, Thorsten Scherler and Bill Au. I'm sorry to have posted such
a careless question; thanks for your patience.
OK, here come my new questions. Solr's wiki says:
"All the files in the index directory are hard links to the latest
snapshot. This technique has these advantages: Can keep multiple
snap
I am a pretty new user of Lucene, but I think the simple answer is:
check which analyzer you are using when you index, and use the same
analyzer when you search. I believe StandardAnalyzer, for example, does
lowercasing, so if you use the same one when you search, all should
work as you wish.
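In Solr that usually means making sure the field's type lowercases at
both index and query time. A minimal schema.xml sketch (the type name is
made up; the tokenizer and filter factories are the standard Solr ones):

    <fieldType name="text_lc" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>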
: snapshot. This technique has these advantages: Can keep multiple
: snapshots on each host without the need to keep multiple copies of
: index files that have not changed. File copying from master to slave
: Why do hard links make file copying between master and slave fast?
: Thanks. Best Regards
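Because Lucene index files are write-once, a snapshot can be just a
directory of hard links to the current segment files: creating it costs
almost nothing, and rsync then only has to transfer the segment files the
slave does not already have. A sketch of the master side, with a
placeholder path and timestamp (the real snapshooter script adds locking
and error handling):

    # a snapshot is a directory of hard links, not copies, so it is cheap
    cp -lr /opt/solr/data/index /opt/solr/data/snapshot.20070905120000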
: OK found the start of the trail... I had a duplicate entry for fulltext in my
: schema. Removed that. Now when I first try to deploy Solr, I get this error:
Really? Defining the same field name twice gave you an
ArrayIndexOutOfBounds? ... that's bad, I'll open a bug on that.
: SEVERE: org.apa
Hi-
Here are the lines to add to the end of Tomcat's conf/logging.properties
file to get rid of query/update logging noise:
org.apache.solr.core.SolrCore.level = WARNING
org.apache.solr.handler.XmlUpdateRequestHandler.level = WARNING
org.apache.solr.search.SolrIndexSearcher.level = WARNING
I w
Not that I've noticed. I'll do a more careful grep soon here - I just
got back from a long weekend.
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
Hello,
I am trying to post the following to my index:
http://www.nytimes.com/2007/08/25/business/worldbusiness/25yuan.html?ex=1345694400&en=499af384a9ebd18f&ei=5088&partner=rssnyt&emc=rss
The url field is defined as:
However, I get the following error:
Posting file docstor/ffc110ee5c9a2e
It is apparently attempting to parse &en=499af384a9ebd18f in the URL.
I am not clear why it would do this, as I specified indexed="false". I
need to store this because that is how the user gets to the original
article.
The ampersand is an XML reserved character. You have to escape it
(t
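Concretely, every & in the value posted to Solr has to be written as the
&amp; entity, along these lines (the field name url is taken from the
original post):

    <field name="url">http://www.nytimes.com/2007/08/25/business/worldbusiness/25yuan.html?ex=1345694400&amp;en=499af384a9ebd18f&amp;ei=5088&amp;partner=rssnyt&amp;emc=rss</field>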
When I load distributiondump.jsp, there is no output in my
catalina.out file.
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
On Sep 5, 2007, at
It seems that the scripts cannot open new searchers at the end of the
process, for some reason. Here's a message from cron, but I'm not
sure what to make of it... It looks like the files were copied over
properly but failed to install. I removed the temp* directory, but
still Solr could not la
If it helps anyone, this index is around a gig in size.
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
On Sep 5, 2007, at 3:14 PM, Matthew Runo wrote
On Sep 5, 2007, at 11:37 AM, Matt Mitchell wrote:
SEVERE: org.apache.solr.common.SolrException: Error loading class
'solr.IndexInfoRequestHandler'
You're using my old hand-built version of Solr, I suspect. Hoss
explained it fully in his previous message on this thread.
Care needs to be t
: Care needs to be taken when upgrading Solr but leaving solrconfig.xml
: untouched because additional config may be necessary. Comparing your
: solrconfig.xml with the one that ships with the example app of the version of
: Solr you're upgrading to is recommended.
Hmmm... that's kind of a scar
I guess my warning is more because I play on the edge and have
several times ended up tweaking various apps' solrconfig.xml files as
I upgraded them, to keep things working.
Anyway, we'll all agree that diff'ing your config files with the
example app can be useful.
Erik
On Sep 5, 2007,
Not really. It is a very poor substitute for reading the release notes,
and sufficiently inadequate that it might not be worth the time.
Diffing the example with the previous release is probably more
instructive, but might or might not help for your application.
A config file checker would be use
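The diff Erik suggests is a one-liner; here assuming your app's config
lives under /opt/myapp and the new release is unpacked alongside it (both
paths are placeholders):

    diff /opt/myapp/solr/conf/solrconfig.xml apache-solr-1.2.0/example/solr/conf/solrconfig.xml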
On Wed, 05 Sep 2007 17:18:09 +0200
Brian Carmalt <[EMAIL PROTECTED]> wrote:
> I've been trying to index a 300MB file to Solr 1.2. I keep getting
> out-of-memory heap errors.
> Even on an empty index with one gig of VM memory it still won't work.
Hi Brian,
VM != heap memory.
VM = OS memory
heap m
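The JVM will only use as much heap as -Xmx permits, regardless of how much
memory the machine has, so the limit has to be raised explicitly. For
Tomcat started from a Unix shell, something along these lines (the sizes
are just examples):

    export JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m"
    # restart Tomcat so the new heap limits take effect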
Yonik Seeley schrieb:
On 9/5/07, Brian Carmalt <[EMAIL PROTECTED]> wrote:
I've been trying to index a 300MB file to Solr 1.2. I keep getting
out-of-memory heap errors.
300MB of what... a single 300MB document? Or does that file represent
multiple documents in XML or CSV format?
-Yonik
Hello again,
I run Solr on Tomcat under Windows and use the Tomcat monitor to start
the service. I have set the minimum heap size to 512MB and the maximum
to 1024MB. The system has 2 gigs of RAM. The error that I get after
sending approximately 300MB is:
java.lang.OutOfMemoryError: Java heap space