Thanks, Erick. The Analysis page shows that the positions are increasing => there are no
"glued" words on the same position.
On Sun, Jun 14, 2015 at 6:10 PM, Erick Erickson
wrote:
> My guess is that you have WordDelimiterFilterFactory in your
> analysis chain with parameters that break up E-Tail to both "e"
To clarify further: we use StandardTokenizer & StandardFilter in front
of the WDF. Already after the StandardTokenizer's transformations, e-tail gets split into
two consecutive tokens
On Mon, Jun 15, 2015 at 10:08 AM, Dmitry Kan wrote:
> Thanks, Erick. Analysis page shows the positions are growing=> there
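For reference, the same per-stage token streams (positions included) can be fetched outside the admin UI via the field analysis endpoint; a sketch, with placeholder core and field type names:
$ curl "http://localhost:8983/solr/mycore/analysis/field?analysis.fieldtype=text_general&analysis.fieldvalue=e-tail&wt=json&indent=true"
Each stage in the response lists its tokens together with their position attribute, which makes "glued" tokens easy to spot.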
I am implementing Solr search, but the search results are not ordered by
relevancy. Let's say I use the search keywords .net ios; it
returns the results based on score. I have a field KeySkills which holds
the following data:
KeySkills: Android, ios, Ph
I've indexed a rich-text document with the following content:
This is a testing rich text documents to test the uploading of files to Solr
When I try to use the suggester, it returns the entire field
content once I enter suggest?q=t. However, when I try to search for
q='rich', I d
Thank you for the replies.
The shard-per-user approach is interesting. We will look into it as well.
The errors we're getting when we have ~1500 collections vary depending on
the action (restarting the server, creating a new collection, etc.).
The frequent ones are:
1. Connection refused when star
ehehe Edwin, I think you should read again the document I linked a while ago:
http://lucidworks.com/blog/solr-suggester/
The suggester you used is not meant to provide infix suggestions.
The fuzzy suggester works on a fuzzy basis, with the *starting* terms
of a field's content.
What you are lo
2015-06-12 17:53 GMT+01:00 JACK :
> As explained above, actually I have around 10 lakh (1,000,000) records, not 5 rows.
This does not change how edismax and norms work in Solr, so it's beside
the point.
> It's not
> about synonyms . When I checked in the FAQ page of Solr wiki, it is found
> that if we need to ge
I read that you are using a really deeply analysed field type.
Let's forget about the dummy copy field.
Can you show me, based on your main field, an example query and the
first 20 results?
Showing the ranking you get and the ranking you expect?
Cheers
2015-06-15 10:39 GMT+01:00 Alessa
Hi Alessandro Benedetti,
The query is
http://localhost:8983/solr/MYDBCORE/select?q=product_name:(laptop+bag)&wt=json&indent=true
1. Dell Inspiron 3542 Laptop (Black) without Laptop Bag
2. Dell 3542 15-inch Laptop with Laptop Bag by Dell
3. Dell Inspiron N3137 11-inch Laptop without Laptop Bag by De
I would really suggest you take a look at the basics of Lucene scoring,
as it's evident you do not have a clear idea of how the scoring works.
http://www.solrtutorial.com/solr-search-relevancy.html (simple but old)
http://lucene.apache.org/core/5_2_0/core/index.html
After that, take always
Using your same data I get the expected ranking, not yours.
I am using a simple analysis, so I would invite you to examine your analysis
chain in the first instance.
As a second suggestion: are you omitting norms or not?
By default they should not be omitted.
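A quick way to check is the Schema API; a sketch, using the core and field names from your query (showDefaults=true pulls in inherited properties):
$ curl "http://localhost:8983/solr/MYDBCORE/schema/fields/product_name?showDefaults=true&wt=json"
The omitNorms flag in the response tells you whether length normalization is being applied to that field.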
2015-06-15 11:13 GMT+01:00 JACK
Slight correction: the URL, if running locally, would be:
http://localhost:8983/solr/index.html
The reason we need your help: there is so much to the admin UI that I
cannot possibly have created the test setups to have tested it all. If
there are aspects of the UI you rely upon, please try them o
Hi
Why does the dataimport work in the 5.1 example DIH but not in 5.2?
== 5.1.0 ==
downloading solr-5.1.0.tgz
tar xzf solr-5.1.0.tgz
cd solr-5.1.0
./bin/solr -e dih
http://192.168.56.101:8983/solr/#/db/dataimport ==> Works fine!
./bin/solr stop
== 5.2.0 ==
downloading solr-5.2.0.tgz
tar xzf
Digging into the code, I see this:
[code]
public SpanWeight(SpanQuery query, IndexSearcher searcher)
    throws IOException {
  this.similarity = searcher.getSimilarity();
  this.query = query;
  termContexts = new HashMap<>();
  // collect every term referenced by the span query
  TreeSet<Term> terms = new TreeSet<>();
  query.extractTerms(terms);
[/code]
Hi Alessandro Benedetti,
Here are my Analysis page values at index time:
WT
text    raw_bytes            start  end  positionLength  type  position
laptop  [6c 61 70 74 6f 70]  0      6    1               word  1
bag     [62 61 67]           7      10   1               word  2

SF
text    raw_bytes            start  end  positionLength  type  position
laptop  [6c 61 7
Hello Everyone,
I posted a question on stackoverflow.com after performing a few POCs.
My hardware consists of a single Intel i3 processor (4 CPUs as per "dxdiag"
on Run), 8 GB RAM, on a laptop.
My Question Link :
http://stackoverflow.com/questions/30823314/lucene-vs-solr-indexning-speed-for
On 6/15/2015 5:56 AM, Veit, Marcus (marcus.v...@uni-graz.at) wrote:
> http://192.168.56.101:8983/solr/#/db/dataimport ==> Does NOT work!
>
> What's wrong?
> Have I done something wrong?
> Is it a bug in 5.2 example?
https://issues.apache.org/jira/browse/SOLR-7588
There are no logs, because it's pure
15 June 2015, Apache Solr™ 5.2.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 5.2.1
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted se
Your posted terms don't make a lot of sense …
I simply indexed your content in a text_general field,
without omitting norms or applying any deep analysis,
and I got the expected results.
You should use the analysis tool with your different inputs, taking a look
at the token stream produced.
Mayb
Actually I can see a problem in your question…
Lucene and Solr are not competing technologies.
Solr is a search server that internally uses the Lucene library and offers
easy-to-use configuration and a REST API.
Lucene is a library that implements tons of search algorithms and features.
You can see
Thanks, works fine with 5.2.1 now.
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, 15 June 2015 15:03
To: solr-user@lucene.apache.org
Subject: Re: solr 5.2.0 example dih: dataimport failed
On 6/15/2015 5:56 AM, Veit, Marcus (marcus.v...@uni-graz
Hi Yonik,
I find the syntax quite expressive, only one question :
1) $ curl http://localhost:8983/solr/demo/query -d '
q=author_s:yonik&fl=id,comment_t&
json.facet={
  genres : {
    type: terms,
    field: cat_s,
    domain: { blockParent : "type_s:book" }
  }
}'
I read this :
Give me all the
Also, add &debug=all to your query to see exactly how
scores are calculated.
Best,
Erick
On Mon, Jun 15, 2015 at 3:18 AM, Alessandro Benedetti
wrote:
> I would really suggest you to take a look to the basics of Lucene scoring.
> As it's evident you have not a clear idea of how the scoring is wor
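For example, with the product_name query from earlier in this thread (names unchanged, just debug added):
$ curl "http://localhost:8983/solr/MYDBCORE/select?q=product_name:(laptop+bag)&debug=all&wt=json&indent=true"
The "explain" section of the debug output breaks each document's score down into its tf, idf, fieldNorm, and boost components.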
Gaaah, that'll teach me to type URLs late on Sunday!
Thanks Upayavira!
You'll notice that 5.2.1 just had the release announcement posted,
so let the fun begin!
Erick
On Mon, Jun 15, 2015 at 4:12 AM, Upayavira wrote:
> Slight correction, the url, if running locally, would be:
>
> Http://localho
On Mon, Jun 15, 2015 at 10:24 AM, Alessandro Benedetti
wrote:
> So why, in both cases, do we express the parent type?
>
> ( "Note that regardless
> of which direction we are mapping (parents to children or children to
> parents) we provide a query that defines the complete set of parents in the
> inde
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely under: "new code should be quite a bit easier to work on for
programmer
types" or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson
wrote:
> Gaaah, that'll teach me
Basically I expect you're falling afoul of a very common misunderstanding:
it's not that Solr is slower, it's that the client isn't feeding Solr
as fast as it should.
If you profile your Solr server, my suspicion is that you're not
driving it very hard.
You'll probably see 4 spikes in CPU activity
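A rough sketch of driving Solr harder from the client side, assuming the data is pre-split into CSV files that each carry their own header row (core name is a placeholder):
for f in docs_part_*.csv; do
  curl "http://localhost:8983/solr/mycore/update?commit=false" \
    --data-binary @"$f" -H 'Content-Type: text/csv' &
done
wait
curl "http://localhost:8983/solr/mycore/update?commit=true"
With several concurrent senders, the CPU spikes should flatten into sustained load.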
The current UI was written before tools like AngularJS were widespread,
and before decent separation of concerns was easy to achieve in
JavaScript.
In a sense, your paraphrase of the justification was as you described -
to make it easier for programmer types - partly by using a tool that is
closer
Hello,
Just a minor question: I'm using the Java database connector (JDBC) with the DIH,
trying to index from a MySQL database, but whenever I run the DIH for a full
import it keeps giving me this error:
Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException:
org.apache.solr.handler.data
I'm using Jetty. That might be important.
Hi,
We have some master data and some content data. Master data would be things
like userid, name, email id, etc.
Our content data for example is a blog.
The blog has certain fields which are comma separated ids that point to the
master data.
E.g. UserIDs of people who have commented on a particu
On 6/15/2015 10:00 AM, Paden wrote:
> Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException:
> org.apache.solr.handler.dataimport.DataImportHandlerException: Could not
> load driver: com.mysql.jdbc.Driver Processing Document # 1
>
> My fairly positive assumption is that it simpl
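The usual fix (paths and connector version here are illustrative) is to drop the MySQL connector jar into a lib directory the core loads from, then restart:
mkdir -p server/solr/mycore/lib
cp mysql-connector-java-5.1.35-bin.jar server/solr/mycore/lib/
bin/solr restart -p 8983
A <lib dir="..."/> directive in solrconfig.xml pointing at the jar's location works as well.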
Anyone have any suggestions on things I could try? Does it seem like a Solr
bug?
> On Jun 14, 2015, at 9:40 PM, Summer Shire wrote:
>
> Hi all,
>
> Every time I optimize my index with maxSegments=2, after some time the
> replication fails to get filelist for a given generation. Looks like the
Hi,
I have the requirement to index internationalized fields ('name') with Solr.
For this purpose, I want to use dynamic fields and have e.g. 'name_en',
'name_de', 'name_fr' in my Solr documents.
When querying the index, I need to know which language a match was found in.
For this, I want to us
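A sketch of one way to see which per-language field matched: highlight across the dynamic fields (core name, query terms, and field values here are illustrative):
$ curl "http://localhost:8983/solr/mycore/select?q=name_en:house+OR+name_de:haus&hl=true&hl.fl=name_*&wt=json&indent=true"
The highlighting section of the response lists, per document, which name_* fields produced snippets, i.e. which language matched.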
Sure, just curious. I wasn't sure if there were other motivations around what
could be done, or the overall look and feel that could be achieved, or
anything beyond "it's just easier for devs to work on and maintain" (which is
always good when it comes to JavaScript - I still wish it was all GWT :) ).
-
Hi all,
Is DeletionPolicy customization still available in Solr Cloud? Is there
a way to rollback to a previous commit point in Solr Cloud thanks to a
specific deletion policy?
Thanks,
Aurélien
SolrCloud does not really support any form of rollback.
On Mon, Jun 15, 2015 at 5:05 PM Aurélien MAZOYER <
aurelien.mazo...@francelabs.com> wrote:
> Hi all,
>
> Is DeletionPolicy customization still available in Solr Cloud? Is there
> a way to rollback to a previous commit point in Solr Cloud tha
: Subject: How to use https://issues.apache.org/jira/browse/SOLR-7274
:
: How do you set this up?
Some draft documentation is available in the online ref guide (not yet
ready to be published) ... I just added a link to it from the jira...
https://cwiki.apache.org/confluence/display/solr/Secu
Thank you for your answer Mark,
Aurélien
On 15/06/2015 23:19, Mark Miller wrote:
SolrCloud does not really support any form of rollback.
On Mon, Jun 15, 2015 at 5:05 PM Aurélien MAZOYER <
aurelien.mazo...@francelabs.com> wrote:
Hi all,
Is DeletionPolicy customization still available in Sol
Hi guys,
Is there a way to facet on the same field in *different ways?* For
example, using a different facet.prefix. Here are the details
facet.field={!key=myKey}myField&facet.prefix=p ==> works
facet.field={!key=myKey}myField&f.myField.facet.prefix=p ==> works
facet.field={!key=myKey}myFie
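For what it's worth, the JSON Facet API (Solr 5.x) handles this directly: the same field can be faceted under several keys, each with its own prefix. A sketch using the field and key names from above (core name is a placeholder):
$ curl http://localhost:8983/solr/mycore/query -d 'q=*:*&
json.facet={
  myKey  : { type: terms, field: myField, prefix: "p" },
  myKey2 : { type: terms, field: myField, prefix: "q" }
}'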
Hi,
We have a few custom SolrCloud components that act as value sources
for boosting items in the index. I want to get the final raw
Lucene query used by Solr to query the index (for debugging purposes).
Is it possible to get that information?
Kindly advise
Thanks,
Nitin
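debug=query (or debugQuery=true) returns the parsed Lucene query without the timing overhead of full debug; a sketch with placeholder collection and query names:
$ curl "http://localhost:8983/solr/mycollection/select?q=foo&debug=query&wt=json&indent=true"
The parsedquery and parsedquery_toString entries in the debug section show the final rewritten Lucene query.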
Hi,
Do you have any suggestions for improving the performance of merging and
optimizing the index?
I have been using the embedded Solr server to merge and optimize the index, and
I am looking for the right parameters to tune. My use case has about 300
fields plus 250 copyFields, and moderate doc size (about 65K
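For reference, an explicit optimize down to N segments looks like this against a standalone instance (the embedded server has an equivalent optimize() call; names are placeholders):
$ curl "http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=2"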
The first question is why you're optimizing at all. It's not recommended
unless you can demonstrate that an optimized index gives you enough
of a performance boost to be worth the effort.
And why are you using the embedded Solr server? That's kind of unusual,
so I wonder if you've gone down a wrong
On 6/14/2015 6:53 PM, Erick Erickson wrote:
> And anyone who, you know, really likes working with UI code please
> help making it better!
Did some quick testing with Solr 5.2.1, after starting it and adding a
core based on the techproducts example. For the most part, it looks
very good, and most
: I encounter this peculiar case with solr 4.10.2 where the parsed query
: doesn't seem to be logical.
:
: PHRASE23("reduce workforce") ==>
: SpanNearQuery(spanNear([spanNear([Contents:reduceä,
: Contents:workforceä], 1, true)], 23, true))
1) that does not appear to be a parser syntax of any pars
Hi, Erick,
First, thanks for sharing the ideas. I am giving more context here
accordingly.
1. Why optimize? I have done some experiments to compare the query response
time, and there is some difference. In addition, the searcher will be
customer-facing. I think any performance boost will be
Thanks Benedetti,
I've changed to the AnalyzingInfixLookup approach, and it is able to start
searching from the middle of the field.
However, is it possible to make the suggester show only part of the
content of the field (like 2 or 3 fields after), instead of the entire
content/sentence, which
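For reference, an infix suggester built with AnalyzingInfixLookupFactory is typically queried like this (handler and dictionary names are illustrative, following the lucidworks post linked earlier in the thread):
$ curl "http://localhost:8983/solr/mycore/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.q=rich&wt=json"
Each suggestion returned is the whole stored field value used to build the dictionary, which is why the entire sentence comes back.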
Also, is there a way to overcome the long content problem?
I'm getting this error when I've indexed large rich-text documents and
tried to build the suggester.
{
  "responseHeader":{
    "status":500,
    "QTime":47},
  "error":{
    "msg":"Document contains at least one immense term i
Ah, OK. For very slowly changing indexes optimize can make sense.
Do note, though, that if you incrementally index after the full build, and
especially if you update documents, you're laying a trap for the future. Let's
say you optimize down to a single segment. The default TieredMergePolicy
trie
I think your advice on future incremental updates is very useful. I will
keep an eye on that.
Actually, I am currently interested in how to boost the merging/optimizing
performance of a single Solr instance.
Parallelism at the MapReduce level does not help merging/optimizing much,
unless Solr/Lucene internally