On 16.9.2015 at 16:16, Shawn Heisey wrote:
I agree here. I don't like the forceful termination unless it becomes
truly necessary.
I changed the timeout to 20 seconds in the script installed in
/etc/init.d ... a bit of a brute force approach. When I find some time,
I will think about how to make
Yes, of course, the only reason to have more shards is so that they
can reside on different machines (or use different disks, assuming you
have enough CPU/memory etc) so that you can scale your indexing
throughput. Move one of them to a different machine and measure the
performance.
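For example (an untested sketch; the collection, shard, node, and replica names are placeholders), you could move one shard's replica to another machine with the Collections API:
# add a copy of shard1 on the new machine and wait for it to become active
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=YOUR_COLLECTION&shard=shard1&node=newhost:8983_solr"
# then drop the old copy (the replica name, e.g. core_node1, comes from CLUSTERSTATUS)
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=YOUR_COLLECTION&shard=shard1&replica=core_node1"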
On Thu, Sep 17
Hi,
I would like to check: does creating more shards for a core improve the
overall performance? I'm using Solr 5.3.0.
I tried indexing with a core that has 1 shard and another core with 2
shards, but both take the same amount of time to index.
Currently, both my shards are in the
How about in Denver?
On Sun, Sep 13, 2015 at 7:53 PM, Otis Gospodnetić <
otis.gospodne...@gmail.com> wrote:
> Hi Tim,
>
> A slightly delayed reply ;)
> We are running Solr training in NYC next month -
> http://sematext.com/training/solr-training.html - 2nd seat is 50% off.
>
> Otis
> --
> Mon
Ravi:
Sameer is correct on how to get it done in one go.
Don't get too hung up on replicationFactor. You can always
ADDREPLICA after the collection is created if you need to.
Best,
Erick
On Wed, Sep 16, 2015 at 12:44 PM, Sameer Maggon
wrote:
> I just gave an example API call, but for your sce
I bet the terms component does not analyse the terms, so you will need
to hand in already analysed phonetic terms. You could use the
http://localhost:8983/solr/YOUR-CORE/analysis/field URL to have Solr
analyse the field for you before passing it back to the term component.
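Something along these lines (a rough sketch; the field type name is a placeholder for whatever your schema actually uses):
curl "http://localhost:8983/solr/YOUR-CORE/analysis/field?analysis.fieldtype=text_phonetic&analysis.fieldvalue=smith&wt=json"
The response shows the tokens produced by each stage of the analysis chain; the phonetic filter's output is what you would then feed to terms.prefix or terms.regex.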
Upayavira
On Wed, Sep 1
On Wed, Sep 16, 2015, at 10:47 PM, vetrik kumaran murugesan wrote:
> Hi Team,
>
> Can you please help me understand the following usage of below mentioned
> jar files, in apache Solr 5.3.0,
>
> 1. Tagsoup 1.2.1
> 2. Junit4-ant v 2.1.13
> 3. com.googlecode.juniversalchardet v1.0.3
>
>
> 2. I
This is a very strange error. I have another index (nearly identical
solrconfig.xml and similar schema) running Solr 4.9.1 on Oracle JDK 8u60
which works perfectly. Those systems are running Ubuntu 14.
Sending to a different index on different servers running under Oracle
JDK 7u72 on CentOS 6 (o
Hi Team,
Can you please help me understand the usage of the below-mentioned
jar files in Apache Solr 5.3.0:
1. Tagsoup 1.2.1
2. Junit4-ant v 2.1.13
3. com.googlecode.juniversalchardet v1.0.3
2. Is it right to ask: can we rebuild Solr 5.3.0 without the
above-mentioned files, or with replacements for them?
Many thanks for your suggestions.
It works well for querying the field with phonetic matching and returns a
list of docs tagged with the term.
However, is there any way that I can get a list of the matched terms? The
phonetic matching doesn't seem to work with the Terms Component (I'm using terms.regex
to fil
That is, use a TextField plus a KeywordTokenizerFactory, rather than a
StringField
On Wed, Sep 16, 2015, at 09:03 PM, Upayavira wrote:
> If you want to analyse a string field, use the KeywordTokenizer - it
> just passes the whole field through as a single tokenizer.
>
> Does that get you there?
>
If you want to analyse a string field, use the KeywordTokenizer - it
just passes the whole field through as a single tokenizer.
Does that get you there?
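For example, with a managed schema you could define such a type through the Schema API (a sketch only; the type name, encoder choice, and target core are assumptions):
curl -X POST -H 'Content-type:application/json' http://localhost:8983/solr/YOUR-CORE/schema -d '{
  "add-field-type": {
    "name": "text_phonetic",
    "class": "solr.TextField",
    "analyzer": {
      "tokenizer": { "class": "solr.KeywordTokenizerFactory" },
      "filters": [
        { "class": "solr.PhoneticFilterFactory", "encoder": "DoubleMetaphone", "inject": "false" }
      ]
    }
  }
}'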
On Wed, Sep 16, 2015, at 08:52 PM, Jie Gao wrote:
> I understand that i can configure "solr.PhoneticFilterFactory" for both
> indexing and query
I understand that I can configure "solr.PhoneticFilterFactory" for both
index and query time on a "solr.TextField". However, I want to query a
list of terms (indexed and stored) from a field ordered by phonetic
similarity, which can easily be done in most relational databases.
Term Component al
I just gave an example API call, but for your scenario, the
replicationFactor will be 4 (replicationFactor=4). In this way, all 4
machines will have the same copy of the data and you can put an LB in front
of those 4 machines.
On Wed, Sep 16, 2015 at 12:00 PM, Ravi Solr wrote:
> OK...I understoo
OK... I understood numShards=1, but when you say replicationFactor=2, what does
it mean? If I have 4 machines, then there are only 3 copies of the data (1 on the leader and 2
replicas)?? So am I not under-utilizing one machine?
I was thinking more along the lines of a mesh connectivity format, i.e.
everybody has the others' copy
Basic authentication (and the API support that you're trying to use) was
only released with 5.3.0, so it won't work with 5.2.
5.2 only had the authentication and authorization frameworks, and shipped
with the Kerberos authentication plugin out of the box.
There are a few known issues with that thou
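For reference, on 5.3 the setup looks roughly like this (a sketch, assuming SolrCloud with ZooKeeper on localhost:2181; the credentials value must be the salted SHA-256 hash described in the reference guide and is only a placeholder here):
cat > security.json <<'EOF'
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": { "solr": "<base64(sha256(password+salt))> <base64(salt)>" }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": { "solr": "admin" },
    "permissions": [ { "name": "security-edit", "role": "admin" } ]
  }
}
EOF
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /security.json security.json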
On Wed, Sep 16, 2015, at 06:37 PM, Jie Gao wrote:
> Hi,
>
>
> I want to query a list of terms indexed and stored in multivalued string
> field via Term Component. The term component can support exact matching
> and
> regex based fuzzy matching. However, Is any way i can configure scheme to
> do
You'll have to say numShards=1 and replicationFactor=2.
http://[hostname]:8983/solr/admin/collections?action=CREATE&name=test&configName=test&numShards=1&replicationFactor=2
On Wed, Sep 16, 2015 at 11:23 AM, Ravi Solr wrote:
> Thank you very much for responding Sameer so numShards=0 and
> repl
Thank you very much for responding, Sameer. So numShards=0 and
replicationFactor=4 if I have 4 machines??
Thanks
Ravi Kiran Bhaskar
On Wed, Sep 16, 2015 at 12:56 PM, Sameer Maggon
wrote:
> Absolutely. You can have a collection with just replicas and no shards for
> redundancy and have a load bal
You should be able to easily see where the task is hanging in ivy code.
- Mark
On Wed, Sep 16, 2015 at 1:36 PM Susheel Kumar wrote:
> Not really. There are no lock files & even after cleaning up lock files (to
> be sure) problem still persists. It works outside company network but
> inside it
I mention the same thing in
https://issues.apache.org/jira/browse/LUCENE-6743
They claim to have addressed this with Java delete on close stuff, but it
still happens even with 2.4.0.
Locally, I now use the nio strategy and never hit it.
- Mark
On Wed, Sep 16, 2015 at 12:17 PM Shawn Heisey wrot
Hi,
I want to query a list of terms indexed and stored in a multivalued string
field via the Terms Component. The Terms Component supports exact matching and
regex-based fuzzy matching. However, is there any way I can configure the schema to
do phonetic matching/querying?
Thanks,
Jerry
Not really. There are no lock files, and even after cleaning up lock files (to
be sure) the problem still persists. It works outside the company network, but
inside it gets stuck. Let me try to see if jconsole can show something
meaningful.
Thanks,
Susheel
On Wed, Sep 16, 2015 at 12:17 PM, Shawn Heisey wrote:
Hi,
I am trying to follow
https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin
to protect the Solr 5.2 Admin UI with a password, but I have not been able to secure it.
1) When I run the following command:
curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication
-H 'Cont
Absolutely. You can have a collection with a single shard and multiple replicas for
redundancy, and have a load balancer in front of it that removes the
dependency on a single node. One of them will assume the role of the leader,
and in case that leader goes down, one of the replicas will be elected as a
leade
Hello,
We are trying to move away from a master-slave configuration to a
SolrCloud environment. I have a couple of questions. Currently, in the
master-slave setup, we have 4 machines, 2 of which are indexers and 2 of which
are query servers. The query servers are fronted by a load balancer.
Ther
Hello all,
I have a SolrJ bean annotated with @Field. Is it possible to omit a
field prior to indexing? For instance:
*My bean*
@Field
public String f1;
@Field
public String f2;
@Field
public String f3;
*My indexing*
client.addBean(this);
I want the resulting document to omit f2, so it would lo
Hi Mark,
Solr allows you to provide more than one spellchecker, to be used together
to produce the corrections.
This means you can provide a set of suggestions from your index and one
from an external system.
In any case, you need to provide the external one as a file; Solr doesn't have an automatic set
of suggestions
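At query time you can then ask for corrections from both dictionaries in one request; a sketch, assuming a /spell handler that defines an index-based checker named "default" and a file-based one named "file":
curl "http://localhost:8983/solr/YOUR-CORE/spell?q=missspelled+wordz&spellcheck=true&spellcheck.dictionary=default&spellcheck.dictionary=file"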
On 9/16/2015 9:32 AM, Mark Miller wrote:
> Have you used jconsole or visualvm to see what it is actually hanging on to
> there? Perhaps it is lock files that are not cleaned up or something else?
>
> You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
If that does turn out to be the p
Solr grouping is unlikely to generate a response like that directly.
Solr's grouped response is organized by field value, while you want a flat
result.
Flattening the result yourself should be an easy task, but it may cause another problem:
how do you control navigation/pagination behavior? That depends on your
appli
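If a flat list straight from Solr is enough, the grouping parameters group.main=true (or group.format=simple) already return the top documents of each group as a single flat doclist; a sketch, with core and field names as placeholders:
curl "http://localhost:8983/solr/YOUR-CORE/select?q=*:*&group=true&group.field=YOUR_FIELD&group.limit=1&group.main=true&wt=json"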
Purely negative top-level queries need a *:* in front, something like
q=*:* -usrlatlong_0_coordinate:[* TO *]
I just did a quick check, and using just usrlatlong:[* TO *]
produces a parse error.
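For example, letting curl do the URL encoding avoids escaping mistakes (the core name is a placeholder):
curl "http://localhost:8983/solr/YOUR-CORE/select" --data-urlencode 'q=*:* -usrlatlong_0_coordinate:[* TO *]' --data-urlencode 'wt=json'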
P.S. It would help if you told us what you _did_ receive
when you tried your options. Parse errors? All docs?
Best,
Have you looked at the group by options? See:
https://cwiki.apache.org/confluence/display/solr/Result+Grouping
Best,
Erick
On Tue, Sep 15, 2015 at 5:00 AM, Sreekant Sreedharan
wrote:
> I have a requirement to group documents by count based on a particular field:
>
> So for example if you have th
The not-very-helpful answer is that you're using the core admin API in
a SolrCloud setup. Please do not do this as (you're well aware of this by now!)
it's far too easy to get "interesting" results.
Instead, use the Collections API, specifically the ADDREPLICA and DELETEREPLICA
commands. Under the
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar wrote:
> Hi,
>
> Sending
Greetings!
Mikhail Khludnev, in his post to the thread "Google didn't help on this
one!", has pointed out one bug in Solr-5.3.0, and I was able to uncover
another one (which I wrote about in the same thread). Therefore, and
thankfully, I've been able to get past my configuration issues.
So n
Raised https://issues.apache.org/jira/browse/SOLR-8063
On Wed, Sep 16, 2015 at 3:35 PM, Mark Fenbers wrote:
> Indeed! <float> should be changed to <str> in the "Spell Checking"
> document (https://cwiki.apache.org/confluence/display/solr/Spell+Checking)
> and in all the baseline solrconfig.xml files provi
Hi,
Sending this to the Solr group in addition to the Ivy group.
I have been building Solr trunk (
http://svn.apache.org/repos/asf/lucene/dev/trunk/) using "ant eclipse" for
quite some time, but this week I am on a job where things are behind a
firewall and a proxy is used.
Issue: When not in company n
On 9/16/2015 12:52 AM, Ere Maijala wrote:
> There's currently a five second delay in the bin/solr script when
> stopping a Solr instance before it's forcefully killed. In our
> experience this is not enough to allow a graceful shutdown of an active
> SolrCloud node and it seems a bit brutal to kill
On 9/16/2015 5:42 AM, Alessandro Benedetti wrote:
> Any update on this ?
I found two workarounds, and went with the second one -- removing the
PatternReplaceFilterFactory from fieldType definitions that also include
WDF. They are both documented in the issue:
https://issues.apache.org/jira/brows
Indeed! <float> should be changed to <str> in the "Spell Checking"
document
(https://cwiki.apache.org/confluence/display/solr/Spell+Checking) and in
all the baseline solrconfig.xml files provided in the distribution. In
addition, ' internal' should be
removed/changed in the same document and same solrco
Ah ha!! Exactly my point in the post I sent about the same time you did
(same Thread)!
Mark
On 9/16/2015 8:03 AM, Mikhail Khludnev wrote:
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java#L97
this means that
<float name="accuracy">0.5</float>
should
On 9/16/2015 5:24 AM, Alessandro Benedetti wrote:
As a reference I always suggest :
https://cwiki.apache.org/confluence/display/solr/Spell+Checking
I read this doc and have found it moderately helpful to my current
problem. But I have at least one question about it, especially given
that my
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java#L97
this means that
<float name="accuracy">0.5</float>
should be replaced with
<str name="accuracy">0.5</str>
On Wed, Sep 16, 2015 at 2:32 PM, Upayavira wrote:
> See this:
>
> Caused by: java.lang.ClassCastException: java.lang.
You should first run the query on the parent domain (give me all the books,
or a limited set of them).
Then you should do the faceting on the children domain.
From Yonik's blog:
"
$ curl http://localhost:8983/solr/demo/query -d '
q=cat_s:(sci-fi OR fantasy)&fl=id,title_t&
json.facet={
top_review
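For completeness, a rough sketch of what such a request can look like with the JSON Facet API's blockChildren domain (the parent filter and child field names are illustrative, not from the thread):
curl http://localhost:8983/solr/demo/query -d '
q=cat_s:(sci-fi OR fantasy)&fl=id,title_t&
json.facet={
  top_reviews: {
    type: query,
    q: "*:*",
    domain: { blockChildren: "type_s:book" },
    facet: {
      authors: { type: terms, field: author_s, limit: 5 }
    }
  }
}'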
Any update on this ?
Cheers
2015-08-21 0:22 GMT+01:00 Shawn Heisey :
> On 7/8/2015 6:13 PM, Yonik Seeley wrote:
> > On Wed, Jul 8, 2015 at 6:50 PM, Shawn Heisey
> wrote:
> >> After the fix (with luceneMatchVersion at 4.9), both "aaa" and "bbb" end
> >> up at position 2.
> > Yikes, that's defini
Can you post the schema definition of the fields you are using for
spellchecking?
Are you getting the suggestions from numeric fields? (That would
be quite odd.)
Cheers
2015-09-16 12:22 GMT+01:00 Mark Fenbers :
> On 9/15/2015 6:49 PM, Shawn Heisey wrote:
>
>>
>> From the informa
See this:
Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast
to java.lang.String
at
org.apache.solr.spelling.AbstractLuceneSpellChecker.init(AbstractLuceneSpellChecker.java:97)
AbstractLuceneSpellChecker is expecting a string, but getting a float.
Can you paste here th
I'm no expert, but one of the neat things about RankQuery is that you
can implement your own collector, and spit out documents that are *not*
in score order. One use case I came across for which we used this was
"do not display items from the same supplier within 4 results of each
other". This woul
On 9/15/2015 6:49 PM, Shawn Heisey wrote:
From the information we have, we cannot tell if this is a problem
request or not. Do you have a core/collection named "EventLog" on your
Solr server? It will be case sensitive. If you do, does that config
have a handler named "spellCheckCompRH" in it
Hi All,
I wanted to understand the difference between CustomScoreQuery and
RankQuery. From the outside, it seems they do the same thing with RankQuery
having more functionality.
Am I missing something?
Parvesh Garg
On Wed, Sep 16, 2015 at 12:15 PM, Florin Mandoc wrote:
> Is it possible to also add the "name_s:expensive" search term in q? I know I
> can add it to fq, but then I will have no score boost.
Sure you can. But beware of the query syntax trap. It's explained by David
Smiley in a comment at
http://blog.griddynamics.
Hi,
The wiki explains how to upload the security.json file to Zk (
https://cwiki.apache.org/confluence/display/solr/Authentication+and+Authorization+Plugins
).
However, is it possible to use the authentication and authorization plugins in a
non-SolrCloud environment? If yes, where has to be located t
Taking a look at your request:
GET
/solr/EventLog/spellCheckCompRH?qt=%2FspellCheckCompRH&q=Some+more+text+wit+some+missspelled+wordz.&spellcheck=on&spellcheck.build=true
You are calling:
*Solr Core (Collection)*: EventLog
*Request handler*: spellCheckCompRH
*Qt parameter* (legacy way to specify the r
Hi,
Sorry for letting this thread hanging, I was out of office last week.
I have managed to make it work using this query:
http://localhost:8983/solr/testscoring/select?q={!parent%20which=type_s:product%20score=max}+color_s:Red^=0%20AND%20{!func}price_i&wt=json&indent=true&fl=score,*,[docid]&deb
Thanks much Shalin for this.
I will take a look into this one.
Btw, I am using SolrCloud 4.7.0.
Thanks again.
Gauri
On Sep 16, 2015 12:44 PM, "Shalin Shekhar Mangar"
wrote:
> Okay, I found the problem, see
> https://issues.apache.org/jira/browse/SOLR-6547
>
> I have committed a fix which shou
Okay, I found the problem, see https://issues.apache.org/jira/browse/SOLR-6547
I have committed a fix which should be released with Solr 5.4. In the
meantime, if you need access to the qtime for such responses,
instead of using response.getQTime(), do the following:
int qtime = -1;
// (the original snippet was truncated here; one plausible completion:)
NamedList header = response.getResponseHeader();
if (header != null && header.get("QTime") != null) qtime = ((Number) header.get("QTime")).intValue();