Hi,
I've read on SolrWiki that SolrMeter is not actively developed anymore, but I
wonder if it is still suitable for performance testing or if there is a
better approach / tool.
I'd also like to know where I can find the latest compiled version of
SolrMeter instead of compiling it with Maven.
You can do this by specifying slop for your fields.
If you want to see exactly how your query is being treated, you should use
the Analysis tool that is available in the Solr admin UI under your
collection name.
On Mon, 28 Dec 2015, 12:24 Jason wrote:
> Hi, all
> I'm wondering how multi generated
Hi Gian
We've been using SolrMeter to test the performance of Solr instances for quite
a while now, and in my experience it is pretty reliable.
Finding a compiled jar is difficult, but building from the code is pretty
straightforward and will only take you a few minutes.
On Mon, 28 Dec 2015, 13:47 Gian
I know the analysis tool under the Solr admin UI.
Specifying slop for query fields is done using edismax, right?
I have queried using edismax like below.
http://localhost:8080/solr/collection1/select?q=test:(chloro-4-hydroxy)&fl=*,score&debugQuery=true&q.op=AND&qf=test&pf=test~1^2.0&defType=edismax
B
Yes, I saw massive full GCs, so I changed the Java heap options to -Xms10g -Xmx10g.
And there is another problem:
shard1:
192.168.100.210:7001 - leader
192.168.100.211:7001 - replica
shard2:
192.168.100.211:7002 - leader
192.168.100.212:7001 - replica
shard3:
192.168.100.210:70
Yes, you use the pf option under edismax.
Have you indexed the field with term frequencies and position data?
Slop basically works with phrase queries, for which you need the
term frequencies and positions available.
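As a minimal sketch (the field and type names are only illustrative, not from
your schema), the field needs to keep frequencies and positions:

  <field name="test" type="text_general" indexed="true" stored="true"
         omitTermFreqAndPositions="false"/>

and then the phrase boost with slop goes on the query side, e.g.

  ...&defType=edismax&qf=test&pf=test~2^2.0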
On Mon, 28 Dec 2015, 15:02 Jason wrote:
> I know the analysis tool under
Has changing the heap size fixed your GC problem?
As for the leader not being elected, I am not very sure about it, but if
there was some issue you'd see it in the Solr logs as exceptions, so
you should check those.
On Mon, 28 Dec 2015, 15:21 elvis鱼人 wrote:
> yes, i saw massive full GC ,so i
Hi All,
I am trying to determine a stable version of Solr 4. Is there a blog
we can refer to? I understand we can read through the release notes. I am
interested in user reviews and challenges seen with various versions of
Solr 4.
I appreciate your contribution.
Thanks,
Abhishek
You should take a look at Solr's JIRA.
That'll give you a pretty good idea of the feature upgrades across
versions, as well as the bugs present in each version.
On Mon, 28 Dec 2015, 17:42 abhi Abhishek wrote:
> Hi All,
>i am trying to determine stable version of SOLR 4. is the
Yes, it just may be fixed.
Solr 4.10.3
On Mon, Dec 28, 2015 at 5:51 PM, Binoy Dalal wrote:
> You should take a look at solr's jira.
> That'll give you a pretty good idea of the various feature upgrades across
> versions as well as the bugs present in the various versions.
>
> On Mon, 28 Dec 2015, 17:42 abhi Abhishek wrot
There have been a lot of new features added to the Streaming API and the
documentation hasn't kept pace, but it is something I'd like to have filled
in by the release of Solr 6.
With the Streaming API you can take two (or more) totally disconnected
collections and get a result set with documents f
I don't use curl, but there are a couple of things that come to mind:
1: Maybe use document routing with the shards. Use an "!" in your unique
ID. I'm using Gmail to read this and it sucks for searching content, so if
you have done this please ignore this point. Example: If you were storing
documents
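To sketch the idea with made-up values: with the compositeId router you put a
shard key in front of the unique id, e.g.

  id = customerA!doc123
  id = customerA!doc124

and then a query for that tenant can be restricted to the right shard with

  q=*:*&_route_=customerA!

The "customerA" prefix and the ids above are hypothetical, just to show the
pattern.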
There's no benefit in adding the same field twice because that'll just
increase the size of your index without providing any real benefits at
query time.
For increasing the scores, boosting is definitely the way to go.
On Mon, 28 Dec 2015, 09:46 Jamie Johnson wrote:
> What is the difference of a
I'll add one important caveat:
At this time the /export handler does not support returning scores. In
order to join result sets you would typically need to be working with the
entire result sets from both sides of the join, which may be too slow
without the /export handler. But if you're working w
Thanks, I wasn't sure if adding twice and boosting results in a similar
thing happening under the hood or not. Appreciate the response.
Jamie
On Dec 28, 2015 9:08 AM, "Binoy Dalal" wrote:
> There's no benefit in adding the same field twice because that'll just
> increase the size of your index
It will only be the same in a very small number of well-choreographed
cases, or by complete coincidence.
If you're interested in how this might happen you should take a look at how
lucene matches and scores docs based on your query.
https://lucene.apache.org/core/3_5_0/api/core/org/apache/lucene/se
*What I am trying to accomplish: *
Generate a facet based on the documents uploaded and a text file containing
terms from a domain/ontology such that a facet is shown if a term is in the
text file and in a document (key phrase extraction).
*The problem:*
When I select the facet for the term "*not
Can you do the opposite? Index into an unanalyzed field and copy into the
analyzed?
If I remember correctly, facets are based on indexed values, so if you
tokenize the field then the facets will come out the way you are seeing now.
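Something along these lines in the schema would do it (the field names here
are placeholders, not from your actual setup):

  <field name="content_raw" type="string" indexed="true" stored="true" docValues="true"/>
  <field name="content" type="text_general" indexed="true" stored="true"/>
  <copyField source="content_raw" dest="content"/>

You'd send the text to content_raw, search against content, and facet with
facet.field=content_raw.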
On Dec 28, 2015 9:45 AM, "Kevin Lopez" wrote:
> *What I am trying to acc
Correct me if I'm wrong but I believe one can use the /export and /select
handlers interchangeably within a single streaming expression. This could
allow you to use the /select handler in the search(...) clause where a
score is necessary and the /export handler in the search(...) clauses where
it i
I am not sure I am following correctly. The field I upload the document to
would be "content"; the analyzed field is "ColonCancerField". The "content"
field contains the entire text of the document, in my case a pubmed
abstract. This is a tokenized field. I made this field untokenized and I
still re
1) When faceting, use a field of type string. That'll rid you of your
tokenization problems.
Alternatively, do not use any tokenizers.
Also turn docValues on for the field. It'll improve performance.
2) If, however, you do need to use a tokenized field for faceting, make sure
that they're pretty short i
bq: so I cannot copy this field to a text field with a
keywordtokenizer or strfield
1> There is no restriction on whether a field is analyzed or not as far as
faceting is concerned. You can freely facet on an analyzed field
or String field or KeywordTokenized field. As Binoy says, though,
facetin
SolrMeter has some pretty cool features, one of which is to extract
queries from existing Solr logs. If the Solr logging patterns have
changed, which they do, that may require some fixing up...
Let us know...
Erick
On Mon, Dec 28, 2015 at 12:25 AM, Binoy Dalal wrote:
> Hi Gian
> We've using sol
SolrMeter works very well with Solr 4.10.4, including the query extraction
feature.
We've been using it for quite a while now.
You should give it a try. It won't take very long to set up and use.
On Mon, 28 Dec 2015, 23:23 Erick Erickson wrote:
> SolrMeter has some pretty cool features, one of which
Hi,
I am facing an issue where I need to change the Solr schema, but I have crucial
data that I don't want to delete. Is there a way I can change the
schema of the index while keeping the data intact?
Regards,
Salman
Does the schema change affect the data you want to keep?
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 29 December 2015 at 01:48, Salman Ansari wrote:
> Hi,
>
> I am facing an issue where I need to change Solr schema but I have crucial
> data th
Yes, that would work. Each search(...) has its own specific params and can
point to any handler that conforms to the output format of the /select
handler.
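To make the shape of this concrete, a rough sketch (the collection and field
names are made up, and this assumes the innerJoin decorator from the
streaming joins work):

  innerJoin(
    search(people, q="*:*", fl="personId,name,score", sort="personId asc", qt="/select", rows="10000"),
    search(pets, q="type:cat", fl="personId,petName", sort="personId asc", qt="/export"),
    on="personId"
  )

The /select side can return scores, while the /export side streams the full
result set but, as noted earlier in the thread, cannot return scores.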
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Dec 28, 2015 at 11:12 AM, Dennis Gove wrote:
> Correct me if I'm wrong but I believe o
You can say that we are not removing any fields (so the old data should not
get affected); however, we need to add new fields (which new data will
have). Does that answer your question?
Regards,
Salman
On Mon, Dec 28, 2015 at 9:58 PM, Alexandre Rafalovitch
wrote:
> Is the schema change affects
All crucial data that you don't want to delete should be stored in a
non-Solr backing store, either flat files (e.g., CSV or Solr XML), an
RDBMS, or a NoSQL database. You should always be in a position to either
fully reindex or fully discard your Solr data. Solr is not a system of
record database.
Is the field multivalued?
-- Jack Krupansky
On Sun, Dec 27, 2015 at 11:16 PM, Jamie Johnson wrote:
> What is the difference of adding a field with the same value twice or
> adding it once and boosting the field on add? Is there a situation where
> one approach is preferred?
>
> Jamie
>
Yes, the field is multivalued.
On Dec 28, 2015 3:48 PM, "Jack Krupansky" wrote:
> Is the field multivalued?
>
> -- Jack Krupansky
>
> On Sun, Dec 27, 2015 at 11:16 PM, Jamie Johnson wrote:
>
> > What is the difference of adding a field with the same value twice or
> > adding it once and boosting
I am having issues with {!join}. If the core has a multiValued field and
the inner join does not have a multiValued field, it does not find the
ones...
Solr 5.3.1... 5.3.1
Example.
PS1226 is in practicing_specialties_codes in providersearch core. This
field is multiValued.
in the autosuggest co
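For context, the shape of the join I'm attempting is roughly this (the field
on the autosuggest side and the inner query are placeholders, not my exact
request):

  q={!join fromIndex=providersearch from=practicing_specialties_codes to=specialty_code}name:smith

i.e. the inner query runs against providersearch, and its
practicing_specialties_codes values are joined onto the autosuggest core.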
Hi, Binoy
Thanks for your reply.
I've found why the score is the same.
If I query with test:(chloro-4-hydroxy), then the scores are the same.
But querying with test:(chloro 4 hydroxy), the score of id 'test1' is
bigger than that of 'test2'.
So the pf parameter under edismax is only applied to explicitly separated
queries.
In my c
Precisely. You can change how Solr generates your phrase queries by
tweaking the WordDelimiterFilterFactory settings or setting the
autoGeneratePhraseQueries parameter for your fields to true or false.
This will also determine whether or not pf boosting is applied.
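A minimal sketch of the kind of field type I mean (the type name and exact
filter settings are only illustrative, so adjust them to your data):

  <fieldType name="text_chem" class="solr.TextField" autoGeneratePhraseQueries="true" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
              generateNumberParts="1" catenateWords="1" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

With autoGeneratePhraseQueries="true", a term like chloro-4-hydroxy is turned
into a phrase query over the split parts instead of separate term queries.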
On Tue, 29 Dec 2015, 10:41 Jason wrot
Adding new fields is not a problem. You can continue to use your
existing index with the new schema.
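To make it concrete, there are two usual routes (the collection name, port and
field name below are placeholders): either add the field to schema.xml and
reload the collection,

  <field name="new_field" type="string" indexed="true" stored="true"/>

or, if you are on a managed schema, add it through the Schema API:

  curl -X POST -H 'Content-type:application/json' \
    'http://localhost:8983/solr/collection1/schema' \
    --data-binary '{"add-field":{"name":"new_field","type":"string","indexed":true,"stored":true}}'

Existing documents simply won't have a value for the new field until they are
reindexed.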
On Tue, Dec 29, 2015 at 1:58 AM, Salman Ansari wrote:
> You can say that we are not removing any fields (so the old data should not
> get affected), however, we need to add new fields (which new d