There seems to be some improvement. Write speeds are faster, and server
restarts are fewer.
We changed the configuration to:
50
1
Before the Change:
- Server Restarts: 10 times in 12 hours
- CPU load: average 50, peak 90
After the Change:
- Server Restarts: 4 times in 12 hours.
Hi,
How do I get the autocomplete/autosuggest feature in Solr 1.4? Please share
the code as well...
--
View this message in context:
http://old.nabble.com/how-to-get-the-autocomplete-feature-in-solr-1.4--tp26402992p26402992.html
Sent from the Solr - User mailing list archive at Nabble.com.
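One way to get autosuggest in Solr 1.4 (a sketch, not the only approach) is the
TermsComponent added in that release, which returns indexed terms matching a
prefix. Handler and field names below are illustrative assumptions:

```xml
<!-- solrconfig.xml: a minimal TermsComponent handler for autosuggest -->
<searchComponent name="terms" class="solr.TermsComponent"/>
<requestHandler name="/terms" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="terms">true</bool>
  </lst>
  <arr name="components">
    <str>terms</str>
  </arr>
</requestHandler>
```

A request like /terms?terms.fl=title&terms.prefix=ap would then return indexed
terms in the title field that start with "ap".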
Darniz,
The "indexer" is typically an external application you write. This application
gets documents from some data source and sends them to Solr for indexing. It
is this application that needs to be able to re-get the appropriate set of
documents from the data source and re-send them to Solr.
http://wiki.apache.org/solr/SpellCheckComponent#Configuration
There are some tweaks for the spell checking responses. You can set how
accurate the spell check must be, and you can pick a different
spell-checking distance algorithm.
Note also that you can make a spelling dictionary from a file that's
j
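A sketch of what those tweaks look like in solrconfig.xml for Solr 1.4 (field
names and values are illustrative assumptions), including a file-based
dictionary alongside the index-based one:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <!-- index-based checker with a minimum suggestion score and an
       alternative edit-distance implementation -->
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="accuracy">0.7</str>
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
  </lst>
  <!-- dictionary built from a plain word list on disk -->
  <lst name="spellchecker">
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="name">file</str>
    <str name="sourceLocation">spellings.txt</str>
    <str name="characterEncoding">UTF-8</str>
  </lst>
</searchComponent>
```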
Been there done that.
Indexing into the smaller cores will be faster.
You will be able to spread the load across multiple machines.
There are other advantages:
You will not have a half-terabyte set of files to worry about.
You will not need 1.1T in one partition to run an optimize.
You will not nee
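For completeness, the split cores can still be queried as one logical index
with Solr's distributed search; a minimal sketch, assuming two hosts (host
names and core paths here are illustrative):

```
http://host1:8983/solr/select?q=ipod&shards=host1:8983/solr,host2:8983/solr
```

Note that the entries in the shards parameter are listed without the http://
prefix.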
Maduranga Kannangara wrote:
> Permanent solution we found was to add:
>
> 1. flush() before closing the segment.gen file write (On Lucene).
>
Hmm ... but close does flush?
> 2. Remove the slave's segment.gen before replication
>
>
> Point 1 elaborated:
>
> Lucene 2.4, org.apache.lucene.index.SegmentInfos.finishCommit(Directory dir)
While trying to make use of the StreamingUpdateSolrServer for updates with the
release code for Solr 1.4, I noticed some characters such as é did not show up in
the index correctly. The code should set the CharsetName via the constructor
of the OutputStreamWriter. I noticed that the Commons
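The fix described above, constructing the OutputStreamWriter with an explicit
charset name instead of relying on the platform default, can be sketched like
this (class and method names are illustrative, not the actual Solr code):

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class CharsetFix {
    // Encode a string through an OutputStreamWriter with an explicit
    // charset, so characters such as é survive the round trip.
    public static byte[] writeUtf8(String s) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(out, "UTF-8"); // explicit charset
        w.write(s);
        w.flush();
        w.close();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = writeUtf8("\u00e9"); // "é"
        // UTF-8 encodes U+00E9 as two bytes: 0xC3 0xA9
        System.out.println(bytes.length);
    }
}
```

Without the explicit charset argument, the writer falls back to the JVM's
default encoding, which is where characters like é get mangled.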
Permanent solution we found was to add:
1. flush() before closing the segment.gen file write (On Lucene).
2. Remove the slave's segment.gen before replication
Point 1 elaborated:
Lucene 2.4, org.apache.lucene.index.SegmentInfos.finishCommit(Directory dir)
method:
Writing of segment.gen file w
I want to use the standard QueryComponent to run a query then sort a *limited
number of the results* by some function query. So if my query returns
10,000 results, I'd like to calculate the function over only the top, say
100 of them, and sort that for the ultimate results. Is this possible?
Th
On Tue, Nov 17, 2009 at 11:09:38AM -0800, Chris Hostetter said:
>
> Several things about your message don't make sense...
Hmm, sorry - a byproduct of building up the mail over time I think.
The query
?q="Here there be dragons"
&fl=id,title,score
&debugQuery=on
&qt=dismax
&qf=title
gets echoed
I am looking at executing a single solr query and having solr automatically
execute one (or more) additional solr queries (inside solr) as a way to save
some overhead/time. I am doing this by overriding the SearchComponent. My
code works and I was looking at ways to optimize the code.
the origi
Hi users,
I wanted to know: is there a way we can initiate Solr indexing?
I mean, for example, I have a field which was of type string and I indexed 100
documents.
When I change the field to text I don't want to load the documents again; I
should be able to just run a command line and the documents sh
Thanks a lot Hoss!
[ ]'s
Leonardo da S. Souza
°v° Linux user #375225
/(_)\ http://counter.li.org/
^ ^
On Tue, Nov 17, 2009 at 6:12 PM, Chris Hostetter
wrote:
>
> : I'm a newbie using Solr and I'd like to run some tests against our data
> : set. I
> : have successfully tested Solr + Cell using th
CHANGES.txt contains information, but no instructions.
-Adam
- Original Message
From: Chris Hostetter
To: solr-user@lucene.apache.org
Sent: Tue, November 17, 2009 1:43:14 PM
Subject: Re: Where is upgrading documentation?
: I apologize in advance for the simple question; we're runn
: If documents are being added to and removed from an index (and commits
: are being issued) while a user is searching, then the experience of
: paging through search results using the obvious solr mechanism
: (&start=100&rows=10) may be disorienting for the user. For one
: example, by the time th
: I'm a newbie using Solr and I'd like to run some tests against our data set. I
: have successfully tested Solr + Cell using the standard Http Solr server
: and now we need to test the Embedded solution and when a try to start the
: embedded server i get this exception:
:
: INFO: registering core:
:
: I downloaded solr 1.4.0 but discovered when using solrj 1.4 that a
: required slf4j jar was missing in the distribution (i.e.
: apache-solr-1.4.0/dist). I got a java.lang.NoClassDefFoundError:
: org/slf4j/impl/StaticLoggerBinder when using solrj
...
: Have I overlooked something or ar
: I apologize in advance for the simple question; we're running on Solr
: 1.3, looking to upgrade to 1.4. I haven't been able to find
: instructions or guidelines for upgrading. Can anyone point me in the
: right direction?
Official info for people upgrading can be found in the CHANGES.txt
On Tue, Nov 17, 2009 at 2:24 PM, Chris Hostetter
wrote:
>
> : Basically, search entries are keyed to other documents. We have finite
> : storage,
> : so we purge old documents. My understanding was that deleted documents
> : still
> : take space until an optimize is done. Therefore, if I don't
: PlantSearch^1 GeographySearch^1 RegionSearch^1
: CountrySearch^1 BusUnitSearch^1 BusinessFunctionSearch^1
: Businessprocesses^1 LifecycleStatus^1 ApplicationNature^1 UploadedDate^1
:
: PlantSearch^1 GeographySearch^1 RegionSearch^1
: CountrySearch^1 BusUnitSearch^1 BusinessFunctionS
Hi,
Sending this mail again after I joined the solr-user group. Kindly find time
to help.
Thanks and Rgds,
Anil
-- Forwarded message --
From: Anil Cherian
Date: Fri, Nov 13, 2009 at 3:48 PM
Subject: solr index-time boost... help required please
To: solr-user@lucene.apache.org, solr-
: Basically, search entries are keyed to other documents. We have finite
: storage,
: so we purge old documents. My understanding was that deleted documents
: still
: take space until an optimize is done. Therefore, if I don't optimize, the
: index
: size on disk will grow without bound.
:
: A
: I am using Dismax request handler for queries:
:
: ...select?q=foo bar foo2 bar2&qt=dismax&mm=2...
...
: But now I want change this to the following:
:
: List all documents that have at least 2 of the optional clauses OR that
: have at least one of the query terms (e.g. foo) more than
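As background for tuning this, the dismax mm parameter accepts conditional
expressions as well as plain numbers and percentages; a sketch of the syntax
(mm alone cannot express the "same term repeated" condition, which would need
a custom query):

```
mm=2              at least 2 optional clauses must match
mm=75%            at least 75% of optional clauses must match
mm=2<-1 5<80%     1-2 clauses: all required; 3-5: all but one; 6+: 80%
```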
The new PECL package solr-0.9.7 (beta) has been released at
http://pecl.php.net/.
Release notes
-
- Fixed bug 16924 AC_MSG_NOTICE() is undefined in autoconf 2.13
- Added new method SolrClient::getDebug()
- Modified SolrClient::__construct() so that port numbers and other integer
values
Hi,
This may not be 100% complete, but it worked for me:
http://www.jroller.com/otis/entry/upgrading_to_solr_1_4
Let me know if I missed anything.
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Ori
Several things about your message don't make sense...
1) the field names listed in your "qf" don't match up to the field names
in the generated query.toString() ... suggesting that they come from
different examples
2) the query.toString() output from each of your queries is identical,
and ye
On Tue, Nov 17, 2009 at 06:09:56PM +0200, Eugene Dzhurinsky wrote:
> java.lang.NullPointerException
> at
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:421)
I compared schema.xml from Solr installation package with the one I created,
and found out that my
I apologize in advance for the simple question; we're running on Solr 1.3,
looking to upgrade to 1.4. I haven't been able to find instructions or
guidelines for upgrading. Can anyone point me in the right direction?
Thanks!
Adam
Hi there!
I am trying to test distributed search on 2 servers. I've created a simple
application which adds sample documents to 2 different Solr servers (version
1.3.0).
While it is possible to search for a certain keyphrase on either of these
servers, I am getting a weird error when trying to search
Thanks Otis... I remember that one!
It still did not remove the document! So obviously it's something else that's
happening.
On Tue, Nov 17, 2009 at 10:47 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> Mark,
>
> http://localhost:8983/solr/update?stream.body=%3Ccommit/%3E
>
> Otis
> --
Mark,
http://localhost:8983/solr/update?stream.body=%3Ccommit/%3E
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
> From: Mark Ellul
> To: solr-user@lucene.apache.org; noble.p...@gm
Hi Noble,
I have updated my entity specs, by having a separate entity for
selecting rows which are not deleted for and ones that are deleted, so
I am sure now that the document is not getting added in the same
import.
I read in the tutorial that the deletes are not taken out until the
commit is d
Kerwin,
Kerwin wrote:
Our approach is similar to what you have mentioned in the jira issue except
that we have all metadata in the xml and not in the database. I am therefore
using a custom XmlUpdateRequestHandler to parse the XML and then calling
Tika from within the XML Loader to parse the cont
Why don't you add a new timestamp field? You can use the
TemplateTransformer with the formatDate() function.
On Tue, Nov 17, 2009 at 5:49 PM, Mark Ellul wrote:
> Hi Noble,
>
> Excellent Question... should the field that does the deleting be in a
> different entity to the one that does the additi
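A sketch of that suggestion in data-config.xml; the entity, query, column
names, and date pattern below are illustrative assumptions, not verified
against this poster's setup:

```xml
<entity name="item" transformer="TemplateTransformer"
        query="select id, name from item">
  <field column="id"/>
  <field column="name"/>
  <!-- timestamp generated at import time via the formatDate function -->
  <field column="indexed_at"
         template="${dataimporter.functions.formatDate('NOW', 'yyyy-MM-dd HH:mm:ss')}"/>
</entity>
```

If the function variable does not resolve inside a template in your version, a
schema-level default="NOW" on a date field is a simpler alternative.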
The doc already existed before the delta-import has been run.
And it exists afterwards... even though it says it's deleting it.
Any ideas of what I can try?
On 11/17/09, Noble Paul നോബിള് नोब्ळ् wrote:
> are you sure that the doc w/ the same id was not created after that?
>
> On Mon, Nov 16, 2
are you sure that the doc w/ the same id was not created after that?
On Mon, Nov 16, 2009 at 11:12 PM, Mark Ellul wrote:
> Hi,
>
> I have added a deleted field in my database, and am using the
> Dataimporthandler to add rows to the index...
>
> I am using solr 1.4
>
> I have added my deleted
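For the delete side specifically, DIH in Solr 1.4 supports a deletedPkQuery on
the entity, which collects primary keys to delete during a delta-import; a
sketch, with table and column names as assumptions:

```xml
<entity name="item" pk="id"
        query="select * from item where deleted = 0"
        deltaQuery="select id from item
                    where last_modified &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="select id from item where deleted = 1"/>
```

Rows returned by deletedPkQuery are removed from the index when the
delta-import commits.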
Hi Sascha,
Thanks for your reply.
Our approach is similar to what you have mentioned in the jira issue except
that we have all metadata in the xml and not in the database. I am therefore
using a custom XmlUpdateRequestHandler to parse the XML and then calling
Tika from within the XML Loader to par
I downloaded solr 1.4.0 but discovered when using solrj 1.4 that a required
slf4j jar was missing in the distribution (i.e. apache-solr-1.4.0/dist). I got
a java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder when using
solrj
I solved the problem according to
http://www.slf4j.org
Hi,
I was just working with spell check in SOLR 1.3 and came across this problem.
My indexed data contains four artist names
1. Rihanna
2. Arianna
3. Michael
4. Michel
I was trying to implement spelling suggestions by saying spellcheck=true and
spellcheck.build=true.
When I search for
a.
Hi all,
I came across this issue when I was exploring the Solr FunctionQuery. Hope
one of you can help me with this:
I need to combine the score of a normal key word search with one numeric
field in the index to form a new score. So I use the query() function
provided in the FunctionQuery,
So th