based-authorization-plugin>?
Thank you!
Ya-Lan Yang
Software Engineer
Cline Center for Advanced Social Research
University of Illinois Urbana-Champaign
217.244.6641
yly...@illinois.edu
Cline Center for Advanced Social Research
2001 S. First St. Suite 207, MC-689
Champaign, IL 61820-7478
www.clinece
It could be related to NUMA.
Check out this article about it which has some fixes that worked for me.
http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-4-3-1-memory-swapping-tp4126641p41271
Ravi,
It looks like you are re-indexing data by pulling data from your Solr server
and then indexing it back to the same server. I can think of many things
that could go wrong with this setup. For example, since you are iterating
through all documents on the Solr server, are all your fields stored?
How frequently are you committing? Frequent commits can slow everything down.
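If you do re-index this way, batching the fetched documents before re-posting them helps a lot. The batching side can be sketched independently of any Solr client; the class and method names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split fetched documents into fixed-size batches so
// each add() call to the Solr server carries many documents at once and
// commits can stay rare.
public class DocBatcher {
    public static <T> List<List<T>> batches(List<T> docs, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += size) {
            // subList is a view; copy it so each batch is independent
            out.add(new ArrayList<>(docs.subList(i, Math.min(i + size, docs.size()))));
        }
        return out;
    }
}
```

You would then send each batch in a single add call and commit once at the end, rather than per document.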
--
View this message in context:
http://lucene.472066.n3.nabble.com/network-slows-when-solr-is-running-help-tp4120523p4120992.html
Sent from the Solr - User mailing list archive at Nabble.com.
I'm in the process of updating from Solr 3.4 to Solr 4.6. Is the SolrJ 3.4
Client forward compatible with Solr 4.6?
This isn't mentioned on the documentation page at
http://wiki.apache.org/solr/javabin.
In a test environment, I did some indexing and querying with a SolrJ 3.4
client and a Solr 4.6
Have you looked at field collapsing?
http://wiki.apache.org/solr/FieldCollapsing
You would collapse on the id, and when Solr returns the results you could
extract the facet label from the Solr document in each group.
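With the result-grouping form of field collapsing, the request parameters would look roughly like this (the field name is just taken from this example; substitute your own):

```
group=true&group.field=id&group.limit=1
```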
We need to implement a locking mechanism for a full-reindexing Solr server
pool. We could use a database or ZooKeeper as our locking mechanism, but
that's a lot of work. Could Solr do it?
I noticed the core admin RENAME function
(http://wiki.apache.org/solr/CoreAdmin#RENAME). Is this a synchronous, atomic operation?
In our current architecture, we use a staging core to perform full re-indexes
while the live core continues to serve queries. After a full re-index we use
the core admin to swap the live and stage index. Both the live and stage
core are on the same solr instance.
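For reference, that swap goes through the CoreAdmin SWAP action; assuming cores named live and stage on the same instance, the request would look like:

```
http://localhost:8983/solr/admin/cores?action=SWAP&core=live&other=stage
```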
In our new architecture we want to
I assume you're indexing on the same server that is used to execute search
queries. Adding 20K documents in bulk could cause the Solr server to 'stop
the world', where it would stop responding to queries.
My suggestion is:
- Set up master/slave replication to insulate your clients from 'stop the world'
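A sketch of that replication setup in solrconfig.xml; the master URL and poll interval here are placeholders:

```xml
<!-- master (indexing) side -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- slave (query) side -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```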
The search for the full word arkadicolson exceeds 8 characters, so that's
why it's not working.
The fix is to add another field that will tokenize into full words.
The query would look like this
some_field_ngram:arkadicolson AND some_field_whole_word:arkadicolson
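A sketch of what the two field types might look like in schema.xml, assuming the 8-character limit comes from an EdgeNGram filter (the type names are made up):

```xml
<!-- ngram type: index-time edge ngrams capped at 8 characters -->
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="8"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<!-- whole-word type: no ngramming, so tokens longer than 8 chars still match -->
<fieldType name="text_whole" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```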
Would it be a good idea to have Solr throw a syntax error if an empty-string
query occurs?
I am curious why Solr's results are inconsistent for the queries below, an
empty-string search on a TextField.
q=name:"" returns 0 results.
q=name:"" AND NOT name:"FOOBAR" returns all results in the Solr index.
Shouldn't it return 0 results too?
Here is the debugQuery output (status 0, QTime 1): the parsed query comes
out as just name:, with no term produced for the empty phrase.
Solr has no limit on the number of cores. You are limited by your hardware,
inodes, and how many files you can keep open.
I think even if you went the Lucene route you would run into the same
hardware limits.
Solr has cores which are independent search indexes. You could create a
separate core per user.
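Cores can be created on the fly through the CoreAdmin API; with a hypothetical per-user naming scheme, the request would look like:

```
http://localhost:8983/solr/admin/cores?action=CREATE&name=user_1234&instanceDir=user_1234
```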
Here's one way to do it using dismax.
1. You'll have two fields:
title_text, which has a type of TextField.
title_string, which has type String; this is an exact-match field.
2. Set the dismax qf=title_string^10 title_text^1
You could make this even better by also handling infix searches.
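A sketch of the corresponding request-handler defaults in solrconfig.xml; the handler name and boosts here are illustrative:

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">title_string^10 title_text^1</str>
  </lst>
</requestHandler>
```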
What are you trying to speed up and what are the timings you are getting?
Giving solr a lot of memory does not always speed things up. You must leave
some room to allow the O/S to cache the index files.
It's best to run the data import once per minute. Solr updates work best
when updates are batched and commits are infrequent.
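One way to keep commits infrequent regardless of how often clients post is an autoCommit block in solrconfig.xml; the one-minute interval here is just an example:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit at most once a minute, no matter how often docs arrive -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```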
Doing a post per document as a transaction would require a Solr commit,
which could cause the server to hang under update load. Of course you could
skip the commit, but your
I implemented a similar feature for a categorization suggestion service. I
did the faceting in the client code, which is not the best-performing
approach, but it worked very well.
It would be nice to have the Solr server do the faceting for performance.
Burton-West, Tom wrote:
>
> If relevance ra
Are you batching the documents before sending them to the solr server? Are
you doing a commit only at the end? Also since you have 32 cores, you can
try upping the number of concurrent updaters from 16 to 32.
Jaeger, Jay - DOT wrote:
>
> 500 / second would be 1,800,000 per hour (much more than
index. I am wondering if anyone has done a
similar setting and partitioning their indexes to many small individual
ones. And how is the performance?
Thank you.
Regards
-
Bin Lan
Software Engineer
Perimeter E-Security
O - (203)541-3412
Follow Us on Twitter: www.twitter.com/PerimeterNews
Read