Thanks, that was the problem! I mistakenly thought the lib folder containing
the jetty.jar etc. was the folder to put the plugins into. After adding a
lib folder to solr-home, everything is resolved.
Geert-Jan
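For anyone hitting the same confusion: the two lib folders serve different purposes. A rough sketch of the layout (based on the example Jetty setup; your paths may differ):

```
example/
  lib/           <- Jetty's own jars (jetty.jar etc.) -- NOT for plugins
  solr/          <- solr home
    conf/        <- schema.xml, solrconfig.xml
    lib/         <- custom plugin jars (request handlers etc.) go here
    data/
```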
hossman wrote:
>
>
> : SEVERE: java.lang.ClassCastException:
> : wrappt.solr.requ
On 10/10/2007, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> > Without seeing the actual queries that are slow, it's difficult to
> determine
> > what the problem is. Have you tried using EXPLAIN (
> > http://dev.mysql.com/doc/refman/5.0/en/explain.html) to check if your
> query
> > is using the tab
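For reference, an EXPLAIN run looks something like this (table and column names here are made up for illustration):

```sql
-- Hypothetical table/column names; substitute your own.
EXPLAIN SELECT entry_id, tags
FROM entries
WHERE entry_id = 12345;
-- If the 'key' column in the output is NULL, no index is being used
-- and the query is doing a full table scan.
```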
I did what you said, but it did not fix my problem,
so I want to try to find another way.
2007/10/10, Otis Gospodnetic <[EMAIL PROTECTED]>:
>
> Here are some ways:
>
>
>
> Index less data, store fewer fields and less data, compress fields,
> change Lucene's term index interval (default 128; increa
the most basic stuff, and copyField things around. With SOLR-139, to
rebuild an index you simply reconfigure the copyField settings and
basically `touch` each document to reindex it.
had not thought of that... yes, that would work
Yonik has some pretty prescient design ideas here:
<
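The copyField approach described above might look roughly like this in schema.xml (field names here are invented for illustration, not taken from the thread):

```xml
<!-- hypothetical fields; several sources copied into one catch-all -->
<field name="title" type="text" indexed="true" stored="true"/>
<field name="body"  type="text" indexed="true" stored="true"/>
<field name="all"   type="text" indexed="true" stored="false" multiValued="true"/>

<copyField source="title" dest="all"/>
<copyField source="body"  dest="all"/>
```

Reconfiguring the copyField lines and re-posting ("touching") each document then rebuilds the derived fields without changing the source data.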
On 9-Oct-07, at 8:28 PM, Stu Hood wrote:
Sorry... where do the unique values come into the equation?
Faceting. You should have a filterCache > # unique values in all
fields faceted-on (using the fieldCache method).
Also, you say that the queryResultCache memory usage is very low... how
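As a sketch, the filterCache is sized in solrconfig.xml; per the advice above, `size` should exceed the total number of unique values across all faceted fields (the numbers below are illustrative only, not recommendations):

```xml
<!-- size must exceed the total count of unique terms
     across all fields you facet on -->
<filterCache
  class="solr.LRUCache"
  size="16384"
  initialSize="4096"
  autowarmCount="4096"/>
```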
Pieter Berkel wrote:
Given that the tables are of type InnoDB, I think it's safe to assume that
you're not planning to use MySQL full-text search (only supported on MyISAM
tables).
I am only using SQL for the base store - it is only accessed for
updating and generating solr documents. All s
Here are some ways:
Index less data, store fewer fields and less data, compress fields,
change Lucene's term index interval (default 128; increasing it
will make your index a little bit smaller, but will slow down
queries)... But in general, the bigger your index, the more hardware you'll
need. I sa
FYI: you don't need to resend your question just because you didn't get a
reply within a day, either people haven't had a chance to reply, or they
don't know the answer.
: XML Parsing Error: mismatched tag. Expected: .
:
: Location:
http://localhost:8080/solrServlet/searchServlet?query=%D9%85
: I have installed solr lucene for my website: clickindia.com, but I am
: unable to apply proximity search for the same over there.
:
: Please help me: how should I configure solrconfig.xml & schema.xml
: to provide a proximity search option?
in order for us to help you, you're going to
Sorry... where do the unique values come into the equation?
Also, you say that the queryResultCache memory usage is very low... how
could this be when it is storing the same information as the
filterCache, but with the addition of sorting?
Your answers are very helpful, thanks!
Stu Hood
Webm
On 9-Oct-07, at 7:53 PM, Stu Hood wrote:
Using the filter cache method on the things like media type and
location; this will occupy ~2.3MB of memory _per unique value_
Mike, how did you calculate that value? I'm trying to tune my
caches, and any equations that could be used to determine some
I just want to know whether anything exists that can decrease index size, other
than increasing hardware or optimizing Lucene params.
--
regards
jl
> Using the filter cache method on the things like media type and
> location; this will occupy ~2.3MB of memory _per unique value_
Mike, how did you calculate that value? I'm trying to tune my caches, and any
equations that could be used to determine some balanced settings would be
extremely hel
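One plausible back-of-the-envelope derivation (my assumption about where the ~2.3MB figure comes from; the thread does not spell it out): each cached filter is a bitset with one bit per document in the index, so memory per unique value is roughly maxDoc / 8 bytes.

```python
def filter_cache_bytes_per_entry(max_doc: int) -> int:
    """Approximate memory for one cached filter: one bit per document."""
    return max_doc // 8

# For an index of roughly 19 million documents this lands near 2.3 MB:
mb = filter_cache_bytes_per_entry(19_000_000) / (1024 * 1024)
print(f"{mb:.1f} MB per unique value")
```

On that model, total filterCache memory scales as (number of cached filters) x (maxDoc / 8), which is why high-cardinality faceted fields get expensive quickly.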
So, this problem came up again. Now it only happens in a Linux environment
when searches are being conducted while indexing is running.
Does anything need to be closed on the searching side?
AgentHubcap wrote:
>
> As it turns out I was modifying code that wasn't being run. Running an
> opti
On 9-Oct-07, at 12:36 PM, David Whalen wrote:
stored="true" />
multiValued="true" />
stored="true" />
stored="true" />
I'm sure we could stop storing many of these columns, especially
if someone told me that would make a big difference.
I don't think that it would make a difference
You could just make a separate Lucene index with the document ID unique and
with multiple tag values. Your schema would have the entryID as the unique
field and multiple tag values per entryID.
I just made a phrase-suggesting clone of the Spellchecker class that is
almost exactly the same. It ind
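A sketch of the separate-index idea above in schema.xml terms (entryID and tag come from the discussion; the types are my assumption):

```xml
<!-- entryID is the unique key; tag holds multiple values per entry -->
<field name="entryID" type="string" indexed="true" stored="true"/>
<field name="tag"     type="string" indexed="true" stored="true" multiValued="true"/>

<uniqueKey>entryID</uniqueKey>
```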
Hi Harry,
I re-discovered this thread last week and have made some minor changes to
the code (remove deprication warnings) so that it compiles with trunk. I
think it would be quite useful to get this stemmer into Solr once all the
legal / licensing issues are resolved. If there are no objections
Given that the tables are of type InnoDB, I think it's safe to assume that
you're not planning to use MySQL full-text search (only supported on MyISAM
tables). If you are not concerned about transactional integrity provided by
InnoDB, perhaps you could try using MyISAM tables (although most people
On Oct 9, 2007, at 3:14 PM, Ryan McKinley wrote:
2. Figure out how to keep the base Tuple store in solr. I think
this will require finishing up SOLR-139. This would keep the
core data in solr - so there is no good way to 'rebuild' the index.
With SOLR-139, cool stuff can be done to 'r
You did not give your queries. I assume that you are searching against the
'entryID' and updating the tag list.
MySQL has a "fulltext" index. I assume this is a KWIC index but do not know.
A "fulltext" index on "entryID" should be very very fast since single-record
results are what Lucene does be
Late reply on this but I just wanted to say thanks for the
suggestions. I went through my whole schema and was storing things
that didn't need to be stored and indexing a lot of things that didn't
need to be indexed. Just completed a full reindex and it's a much more
reasonable size now.
Kevin
On
David Whalen wrote:
Make sure you have:
<requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
defined in solrconfig.xml
What's the consequence of me changing the solrconfig.xml file?
Doesn't that cause a restart of solr?
editing solrconfig.xml does *not* restart solr.
But you need to
> Make sure you have:
> <requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
> defined in solrconfig.xml
What's the consequence of me changing the solrconfig.xml file?
Doesn't that cause a restart of solr?
> for a large index, this can be very slow but the results are valuable.
In what
what does the LukeRequestHandler tell you about the # of
distinct terms in each field that you facet on?
Where would I find that?
check:
http://wiki.apache.org/solr/LukeRequestHandler
Make sure you have:
<requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
defined in solrconfig.x
> is this the same 25,000,000 document index you mentioned before?
Yep.
> how big is your index on disk? are you faceting or sorting on
> other fields as well?
running 'du -h' on my index directory returns 86G. We facet
on almost all of our index fields (they were added to the index
solely for
> Then you will be using the FieldCache counting method, and
> this param is not applicable :-) Are all your field that you
> facet on like this?
Unfortunately yes. Could I improve my situation by changing
them to multiValued?
Hello-
I am running into some scaling performance problems with SQL that I hope
a clever solr solution could fix. I've already gone through a bunch of
loops, so I figure I should solicit advice before continuing to chase my
tail.
I have a bunch of things (100K-500K+) that are defined by a s
: So, naturally we increased the heap size and things worked
: well for a while and then the errors would happen again.
: We've increased the initial heap size to 2.5GB and it's
: still happening.
is this the same 25,000,000 document index you mentioned before?
2.5GB of heap doesn't seem like mu
: We're using Jetty also, so I get the sense I'm looking at the
: wrong log file.
if you are using the jetty configs that come in the solr downloads, it
writes all of the solr log messages to stdout (ie: when you run it on the
commandline, the messages come to your terminal). i don't know off
Thanks, but I'm using the updated o.a.s.handler.StandardRequestHandler. I'm
going to try on 1.2 instead to see if it changes things.
Geert-Jan
ryantxu wrote:
>
>
>> It still seems odd that I have to include the jar, since the
>> StandardRequestHandler should be picked up in the war right? I
On 10/9/07, David Whalen <[EMAIL PROTECTED]> wrote:
> > This is only used during the term enumeration method of
> > faceting (facet.field type faceting on multi-valued or
> > full-text fields).
>
> What if I'm faceting on just a plain String field? It's
> not full-text, and I don't have multiValue
: SEVERE: java.lang.ClassCastException:
: wrappt.solr.requesthandler.TopListRequestHandler cannot be cast to
: org.apache.solr.request.SolrRequestHandler at
: org.apache.solr.core.RequestHandlers$1.create(RequestHandlers.java:149)
: added this handler to a jar called: solrRequestHandler1.jar and
Does all your XML look like this sample here - http://wiki.apache.org/solr/UpdateXmlMessages ??
Are you sending in any <field> elements without a name attribute or
with a blank value?
Erik
On Oct 9, 2007, at 12:45 PM, Urvashi Gadi wrote:
is there a way to find out the line number in the
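For reference, a well-formed add message per http://wiki.apache.org/solr/UpdateXmlMessages looks like this (the field names below are placeholders, not taken from the thread):

```xml
<add>
  <doc>
    <field name="id">42</field>
    <field name="title">An example document</field>
  </doc>
</add>
```

Every <field> needs a name attribute and a non-empty value, which is what the IllegalArgumentException is complaining about.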
It still seems odd that I have to include the jar, since the
StandardRequestHandler should be picked up in the war right? Is this also a
sign that there must be something wrong with the deployment?
Note that in 1.3, the StandardRequestHandler was moved from
o.a.s.request to o.a.s.handler:
When we are doing a reindex (1x a day), we post around 150-200
documents per second, on average. Our index is not as large though,
about 200k docs. During this import, the search service (with faceted
page navigation) remains available for front-end searches and
performance does not noticea
I'm about to do a prototype deployment of Solr for a pretty
high-volume site, and I've been following this thread with some
interest.
One thing I want to confirm: It's really possible for Solr to handle a
constant stream of 10K updates/min (>150 updates/sec) to a
25M-document index? I'm new to Solr and
Hi Yonik.
According to the doc:
> This is only used during the term enumeration method of
> faceting (facet.field type faceting on multi-valued or
> full-text fields).
What if I'm faceting on just a plain String field? It's
not full-text, and I don't have multiValued set for it
Dave
>
On 10/9/07, David Whalen <[EMAIL PROTECTED]> wrote:
> I run a faceted query against a very large index on a
> regular schedule. Every now and then the query throws
> an out of heap space error, and we're sunk.
>
> So, naturally we increased the heap size and things worked
> well for a while and th
is there a way to find out the line number in the xml file? the xml file I'm
using is quite large.
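One quick way to find the line number yourself, outside Solr, is to run the file through an XML parser that reports error positions. A sketch using Python's standard library (my suggestion, not something from the thread):

```python
import xml.etree.ElementTree as ET

def first_xml_error(text: str):
    """Return (line, column) of the first parse error, or None if well-formed."""
    try:
        ET.fromstring(text)
        return None
    except ET.ParseError as e:
        return e.position  # (line, column); line is 1-based

# An unclosed <field> makes the closing </doc> a mismatched tag:
bad = '<add>\n  <doc>\n    <field name="id">1<field>\n  </doc>\n</add>'
print(first_xml_error(bad))
```

Running the whole file through this (read it with `open(...).read()`) points straight at the offending line in a large document.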
On 10/9/07, Erik Hatcher <[EMAIL PROTECTED]> wrote:
>
> What is the XML you POSTed into Solr?
>
> It looks like somehow you've sent in a field with no name or value,
> though this is an error that
Hi All.
I run a faceted query against a very large index on a
regular schedule. Every now and then the query throws
an out of heap space error, and we're sunk.
So, naturally we increased the heap size and things worked
well for a while and then the errors would happen again.
We've increased the
What is the XML you POSTed into Solr?
It looks like somehow you've sent in a field with no name or value,
though this is an error that probably should be caught higher up in
Solr.
Erik
On Oct 9, 2007, at 11:06 AM, Urvashi Gadi wrote:
Hi All,
I'm trying to index my data using po
The way I'd do it would be to buy more servers, set up Tomcat on
each, and get SOLR replicating from your current machine to the
others. Then, throw them all behind a load balancer, and there you go.
You could also post your updates to every machine. Then you don't
need to worry about getti
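A minimal sketch of the "post updates to every machine" approach (the host list and core path are assumptions for illustration; Solr accepts update XML via HTTP POST):

```python
import urllib.request

# Hypothetical pool of Solr instances, each receiving every update.
SOLR_HOSTS = ["http://solr1:8983", "http://solr2:8983"]

def build_update_urls(hosts, path="/solr/update"):
    """Each indexing client posts the same document to every host."""
    return [host.rstrip("/") + path for host in hosts]

def post_to_all(xml_doc: str, hosts=SOLR_HOSTS):
    for url in build_update_urls(hosts):
        req = urllib.request.Request(
            url,
            data=xml_doc.encode("utf-8"),
            headers={"Content-Type": "text/xml"})
        urllib.request.urlopen(req)  # raises on HTTP errors
```

The trade-off versus replication is that indexing cost is paid N times, but there is no snapshot distribution to manage.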
Yeah, I'm compiling with a reference to apache-solr-nightly.jar which is from
the same nightly build (7 October 2007) as the apache.solr-nightly.war I'm
deploying against. I include this same apache-solr-nightly.jar in the lib
folder of my deployed server.
It still seems odd that I have to incl
Hi All,
I'm trying to index my data using post.jar and I get the following error:
Error 500
HTTP ERROR: 500
name and value cannot both be empty
java.lang.IllegalArgumentException: name and value cannot both be empty
at org.apache.lucene.document.Field.<init>(Field.java:197)
the only r
It worked. Thanks a lot. I just updated the value attribute of the tag in
solr.xml. Maybe you should update the wiki with Unix as well as Windows examples.
- Original Message
From: Jérôme Etévé <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, October 9, 2007 6:49:38 AM
Subject: R
Are you compiling your custom request handler against the same
version of Solr that you are deploying with? My hunch is that
you're compiling against an older version.
Erik
On Oct 9, 2007, at 9:04 AM, Britske wrote:
I'm trying to add a new requestHandler-plugin to Solr by exten
All:
How can I break up my install onto more than one box? We've
hit a learning curve here and we don't understand how best to
proceed. Right now we have everything crammed onto one box
because we don't know any better.
So, how would you build it if you could? Here are the specs:
a) the index
Chris:
We're using Jetty also, so I get the sense I'm looking at the
wrong log file.
On that note -- I've read that Jetty isn't the best servlet
container to use in these situations, is that your experience?
Dave
> -Original Message-
> From: Chris Hostetter [mailto:[EMAIL PROTECTED]
>
I'm trying to add a new requestHandler-plugin to Solr by extending
StandardRequestHandler.
However, when starting solr-server after configuration i get a
ClassCastException:
SEVERE: java.lang.ClassCastException:
wrappt.solr.requesthandler.TopListRequestHandler cannot be cast to
org.apache.solr.r
Hello
I’m a newbie to solr and I need your help in developing an Arabic search engine
using solr.
I succeeded in building the index but failed searching it. I get that error when I
submit a query like “محمد”.
XML Parsing Error: mismatched tag. Expected: .
Location:
http://localhost:8080/sol
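The %D9%85 visible in that URL is the UTF-8 percent-encoding of the Arabic letter م, which suggests the query text is reaching the servlet. As a sanity check on the client side, here is how the encoding can be verified with Python's standard library (my suggestion, unrelated to Solr itself):

```python
from urllib.parse import quote, unquote

# quote() percent-encodes non-ASCII characters as UTF-8 by default
encoded = quote("محمد")
print(encoded)            # the UTF-8 percent-encoding of the query
print(unquote("%D9%85"))  # decodes back to the Arabic letter م
```

If the encoded form matches what appears in the request URL, the problem is more likely in how the response XML is generated or in the response character encoding than in the query itself.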
Hi Hoss,
Yes I know that, but I want to have a proper dummy backup (something that
could be kept in a very controlled environment). I thought about using this
approach (a slave just for this purpose), but if I'm using it just as a
backup node there is no reason I don't use a proper backup structur
On 10/9/07, Chris Laux <[EMAIL PROTECTED]> wrote:
> Jérôme Etévé wrote:
> [...]
> > /var/solr/foo/ is the solr home for this instance (where you'll put
> > your schema.xml , solrconfig.xml etc.. ) .
>
> Thanks for the input Jérôme, I gave it another try and discovered that
> what I was doing wrong
Jérôme Etévé wrote:
[...]
> /var/solr/foo/ is the solr home for this instance (where you'll put
> your schema.xml , solrconfig.xml etc.. ) .
Thanks for the input Jérôme, I gave it another try and discovered that
what I was doing wrong was copying the solr/example/ directory to what
you call "/var/
Hi,
Here's what I've got (multiple solr instances within the same tomcat server)
In
/var/tomcat/conf/Catalina/localhost/
For an instance 'foo' :
foo.xml :
/var/tomcat/solrapp/solr.war is the path to the solr war file. It can
be anywhere on the disk.
/var/solr/foo/ is the solr home for this
> Hello Group,
> Is anyone able to deploy solr.war @ tomcat? I just tried to deploy it as
> per the wiki and it gives a bunch of exceptions and I don't think those exceptions
> have any relevance to the actual cause. I was wondering if there is any
> special configuration needed?
I had that very
Separating requests over 2 ports is a nice solution when having multiple
user-types. I like that, although I don't think I need it for this case.
I'm just going to go the 'normal' caching route and see where that takes me,
instead of assuming upfront that it can't be done :-)
Thanks!
hossman wrot