Why can't you just use two fields? Both stored, one with an accent-folding
filter, the other without. Then just choose the one you want to search. Are
your fields so big that you are worried about content duplication? That could
be premature optimization.
As to Ok/Not-Ok, have you by any chance changed field defini
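A minimal sketch of the two-field setup (type and field names here are made
up for illustration):

  <fieldType name="text_plain" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <fieldType name="text_folded" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- strips accents, so "près" and "pres" index identically -->
      <filter class="solr.ASCIIFoldingFilterFactory"/>
    </analyzer>
  </fieldType>

  <field name="body" type="text_plain" indexed="true" stored="true"/>
  <field name="body_folded" type="text_folded" indexed="true" stored="true"/>
  <copyField source="body" dest="body_folded"/>

Query body for accent-sensitive search and body_folded for accent-insensitive
search; since both are stored, highlighting works on whichever one you query.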
"sort": "rint(product(sum($p_score,$s_score,$q_score),100)) desc,s_query asc
","tie": "1","q1": "$q","q_score": "query({!dismax qf=\"user_query_edge^1
user_query^0.5 user_query_fuzzy\" v=$q1})",
I also tried q1=cancer... It does not work unless I set v='cancer'
On Tue, Mar 25, 2014 at 9:12 PM, Wi
&q_score=cancer
http://hgsolr2testsl:8983/solr/autosuggest/select?omitHeader=false&q=cancer&pt1=39.740009,-104.992264&qt=joinautopraccond2&wt=json&rows=100&echoParams=all&fl=user_query,$p_score,$s_score,q_score:query({!dismax%20qf=%22user_query_edge^1%20user_query^0.5%20user_query_fuzzy%22%20v=$q
On 26 March 2014 02:44, Kiran J wrote:
>
> Hi
>
> Is it possible to set up the data import handler so that it keeps track of
> the last imported time in Zulu time and not local time?
[...]
Start your JVM with the desired timezone, e.g.,
java -Duser.timezone=UTC -jar start.jar
Regards,
Gora
Better to use '+A +B' rather than AND/OR, see:
http://searchhub.org/2011/12/28/why-not-and-or-and-not/
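For example, to require both terms (A and B standing in for real terms):

  q=+A +B

instead of q=A AND B, whose semantics get surprising once AND/OR are mixed.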
François
On Mar 25, 2014, at 10:21 PM, Koji Sekiguchi wrote:
> (2014/03/26 2:29), abhishek jain wrote:
>> hi friends,
>>
>> when I search for "A and B" it gives me results for A, B
(2014/03/26 2:29), abhishek jain wrote:
hi friends,
when I search for "A and B" it gives me results for A, B; I am not sure
why.
Please guide me: how can I do an exact match when it is within a phrase/quotes?
Generally speaking (w/ LuceneQParser), if you want phrase match results,
use quotes, i.e. q="A and B"
How big is your index? #documents, #size?
Thanks,
Susheel
-Original Message-
From: cmd.ares [mailto:cmd.a...@gmail.com]
Sent: Tuesday, March 25, 2014 4:50 AM
To: solr-user@lucene.apache.org
Subject: intersect query
my_index(one core):
id,dealer,productName,amount,region
1,A1,iphone4,400
What are the main contributing factors for Solr Cloud generating a lot
of disk IO?
A lot of reads? Writes? Insufficient RAM?
I would think if there was enough disk cache available for the whole
index there would be little to no disk IO.
I'm having a very similar issue to this currently on 4.6.0 (large
java.lang.ref.Finalizer usage, many open file handles to long-gone files) --
were you able to make any progress diagnosing this issue?
In reference to my prior thread:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201403.mbox/%3ccac-cpvrzbhizomcdhkrhygqizguerntkwtkxwwx3j1rqcxe...@mail.gmail.com%3E
I followed the advice to set unmap=false on my indexes with promising
results. Without performing any index updates I am
Hi
Is it possible to set up the data import handler so that it keeps track of
the last imported time in Zulu time and not local time?
It's not very clear from the documentation how to do it, or if it is even
possible.
Ref:
http://wiki.apache.org/solr/DataImportHandler#Configuring_The_Pr
On Mar 25, 2014 10:37 PM, "ku3ia" wrote:
>
> Hi all!
> Now I have a default search field, defined as
>
> <fieldType ... autoGeneratePhraseQueries="true">
>   <analyzer>
>     ...
>     <filter ... ignoreCase="true"/>
>     ...
>   </analyzer>
> </fieldType>
Same problem here:
http://lucene.472066.n3.nabble.com/Solr-4-x-EdgeNGramFilterFactory-and-highlighting-td4114748.html
On Tue, Mar 25, 2014 at 9:39 AM, Software Dev wrote:
> Bump
>
> On Mon, Mar 24, 2014 at 3:00 PM, Software Dev
> wrote:
>> In 3.5.0 we have the following.
>>
>> > positionInc
Sorry guys, I really apologize for wasting your time... bone-headed coding on
my part. I did not set the rows and start parameters to the correct values for
proper pagination, so it was getting the same 10 docs every single time.
Thanks
Ravi Kiran Bhaskar
On Tue, Mar 25, 2014 at 3:50 PM, Ravi Solr wrote:
> I just
I even tried reading from one core A and indexing it into core B, and the
same issue still persists.
On Tue, Mar 25, 2014 at 2:49 PM, Lan wrote:
> Ravi,
>
> It looks like you are re-indexing data by pulling data from your solr
> server
> and then indexing it back to the same server. I can th
Depending on requirements, another option for simple security is to
store the security info in the index and utilize a join. This really
only works when you have a single shard since joins aren't
distributed.
# the documents, with permissions
id:doc1, perms:public,...
id:doc2, perms:group1 group2
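A sketch of how the join part could look, with hypothetical field names: add
ACL documents mapping each user to their groups, then join from the ACL docs'
groups field onto the content docs' perms field:

  # the ACL documents
  id:acl1, type:acl, user:alice, groups:group1
  id:acl2, type:acl, user:bob, groups:group1 group2

  # filter: public docs plus docs matching one of the user's groups
  fq=perms:public OR _query_:"{!join from=groups to=perms}type:acl AND user:alice"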
I'm new to Solr and I'm looking for a document level security filter
solution. Anonymous users searching my application should be able to
find public data. Logged in users should be able to find public data
and private data they have access to.
Earlier today I wrote about shards as a possible solu
Yes, it is generally a bad idea to optimize.
The system continually does merges as needed. You generally do not need to
force a full merge.
wunder
On Mar 25, 2014, at 11:27 AM, Software Dev wrote:
> So it's generally a bad idea to optimize, I gather?
>
> - In older versions it might have done
Ravi,
It looks like you are re-indexing data by pulling data from your solr server
and then indexing it back to the same server. I can think of many things
that could go wrong with this setup. For example, are all your fields stored?
Since you are iterating through all documents on the solr server
"In older versions it might have done them all at once, but I believe
that newer versions only do one core at a time."
It looks like it did it all at once and I'm on the latest (4.7)
On Tue, Mar 25, 2014 at 11:27 AM, Software Dev
wrote:
> So it's generally a bad idea to optimize, I gather?
>
> - I
So it's generally a bad idea to optimize, I gather?
- In older versions it might have done them all at once, but I believe
that newer versions only do one core at a time.
On Tue, Mar 25, 2014 at 11:16 AM, Shawn Heisey wrote:
> On 3/25/2014 11:59 AM, Software Dev wrote:
>>
>> Ehh.. found out the ha
On 3/25/2014 11:59 AM, Software Dev wrote:
Ehh.. found out the hard way. I optimized the collection on 1 machine
and when it was completed it replicated to the others and took my
cluster down. Shitty
It doesn't get replicated -- each core in the collection will be
optimized. In older versions
Ehh.. found out the hard way. I optimized the collection on 1 machine
and when it was completed it replicated to the others and took my
cluster down. Shitty
On Tue, Mar 25, 2014 at 10:46 AM, Software Dev
wrote:
> One other question. If I optimize a collection on one node, does this
> get replicat
I am also seeing the following in the log. Is it really committing??? Now I
am totally confused about how Solr 4.x indexes. My relevant update config
is as shown below:
(update config values; the XML element names were lost in archiving: 1, 100,
12, false)
[#|2014-03-25T13:44:03.765-0400|INFO|glassfish3.1.2|javax.enterprise.s
One other question. If I optimize a collection on one node, does this
get replicated to all others when finished?
On Tue, Mar 25, 2014 at 10:13 AM, Software Dev
wrote:
> Thanks for the reply. I'll make sure NOT to disable it.
What does your field type analyzer look like?
I suspect that you have a stop filter which causes "and" to be removed.
-- Jack Krupansky
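You can confirm on the Analysis page of the admin UI, or by looking for
something like this in the fieldType (a typical definition; your file names
may differ):

  <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>

Removing that filter (or taking "and" out of stopwords.txt) and reindexing
makes "and" searchable again.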
-Original Message-
From: abhishek jain
Sent: Tuesday, March 25, 2014 1:29 PM
To: solr-user@lucene.apache.org
Subject: AND not as a boolean operator
hi friends,
when I search for "A and B" it gives me results for A, B; I am not sure
why.
Please guide me: how can I do an exact match when it is within a phrase/quotes?
--
Thanks and kind Regards,
Abhishek jain
What kind of load are the machines under when this happens? A lot of
writes? A lot of http connections?
Do your zookeeper logs mention anything about losing clients?
Have you tried turning on GC logging or profiling GC?
Have you tried running with a smaller max heap size, or
setting -XX:CMSIniti
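For reference, GC logging can be turned on with the usual HotSpot flags
(Java 7-era options, in line with Solr 4.x deployments):

  java -verbose:gc -Xloggc:gc.log \
       -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
       -jar start.jar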
Can anyone else chime in? Thanks
On Mon, Mar 24, 2014 at 10:10 AM, Software Dev
wrote:
> Shawn,
>
> Thanks for pointing me in the right direction. After consulting the
> above document I *think* that the problem may be too large of a heap,
> which may be affecting GC collection and hence causi
Thanks for the reply. I'll make sure NOT to disable it.
No, don't disable replication!
The way shards ordinarily keep up with updates is by sending every document
to each member of the shard. However, if a shard goes offline for a period
of time and comes back, replication is used to "catch up" that shard. So
you really need it on.
If you created your
I'm new to Solr and am exploring the idea of creating shards on the
fly. Once the shards have been created and populated, I am hoping to
use the "shards" query parameter to combine results from multiple
shards into a single results set.
By following the "Testing Index Sharding on Two Local Servers
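For what it's worth, such a distributed request looks roughly like this
(hosts and core names are placeholders):

  http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/shard1,localhost:7574/solr/shard2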
Hello Mikhail,
Thanks for the suggestions. It took some time to get to this -
1. Field collapsing cannot be done on multi-valued fields -
https://wiki.apache.org/solr/FieldCollapsing
2. Join acts on documents; how can I use it to join multi-valued fields in
the same document?
3. Block-join requir
Thank you very much for responding Mr. Høydahl. I removed the recursion
which eliminated the stack overflow exception. However, I am still
encountering my main problem with the docs not getting indexed in Solr 4.x,
as I mentioned in my original email. The reason I am reindexing is that
with solr 4.x En
On 3/25/2014 10:42 AM, Software Dev wrote:
I see that by default in SolrCloud my collections are
replicating. Should this be disabled in SolrCloud as this is already
handled by it?
From the documentation:
"The Replication screen shows you the current replication state for
the named core y
Hi all!
Now I have a default search field, defined as
...
In the future, I will need to search using my current field (with KStem
filte
I see that by default in SolrCloud my collections are
replicating. Should this be disabled in SolrCloud as this is already
handled by it?
From the documentation:
"The Replication screen shows you the current replication state for
the named core you have specified. In Solr, replication is fo
Bump
On Mon, Mar 24, 2014 at 3:00 PM, Software Dev wrote:
> In 3.5.0 we have the following.
>
> <fieldType ... positionIncrementGap="100">
>   <analyzer>
>     ...
>     <filter class="solr.EdgeNGramFilterFactory" ... maxGramSize="30"/>
>     ...
>   </analyzer>
> </fieldType>
>
> If we searched for "c" with highlighting enable
Hi Philip,
Comments inline:
On Tue, Mar 25, 2014 at 8:11 PM, Philip Durbin
wrote:
> I'm new to Solr and am exploring the idea of creating shards on the
> fly. Once the shards have been created and populated, I am hoping to
> use the "shards" query parameter to combine results from multiple
> sha
Last night the problem occurred again, and I have more data. This time the
problem happened on only one Solr server, and it successfully recovered.
The Solr server which had all the leaders:
[06:38:58.205 - 06:38:58.222] Stopping recovery for
zkNodeName=core_node2core=** *- for all collections*
[06:38:5
Hello,
I have the following problem to resolve using Solr:
search WITH or WITHOUT accents (selected at runtime) + highlighting.
How can I configure the schema to achieve this?
For example:
input string "aaa près bbb pres"
A) accent-sensitive
1. search for *près*, highlight ="a
Gora! It works now!
You are amazing! thank you so much!
I dropped the atom: from the xpath and everything is working.
I did have a typo that might have been causing issues too.
thanks again!
Thanks for Solr! It's a great product. I've been hanging out in
#lucene-dev for a while but I thought I'd join the mailing list.
ezmlm seems to pick up an alternate email address of mine in the
"Return-Path" header so I tried to override the default subscription
address by emailing
solr-user-subsc
Right. If you have cfs files in the index directory, there is a thread
discussing a method of regenerating the segment files:
http://www.gossamer-threads.com/lists/lucene/java-user/39744
Back up before making changes!
source on SO:
http://stackoverflow.com/questions/9935177/how-to-repair-corrupt
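The command-line CheckIndex mentioned in those threads is run roughly like
this (jar version is only an example; -fix drops unreadable segments, hence
the backup advice above):

  java -cp lucene-core-4.6.0.jar org.apache.lucene.index.CheckIndex /path/to/index
  java -cp lucene-core-4.6.0.jar org.apache.lucene.index.CheckIndex /path/to/index -fix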
Hi Ares,
How about using field collapsing? https://wiki.apache.org/solr/FieldCollapsing
&q=+region:(east OR west) +productName:iPhone
&group=true
&group.field=dealer
If the number of distinct groups is high, CollapsingQueryParser could be used
too.
https://cwiki.apache.org/confluence/display
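With the CollapsingQParser the equivalent collapse to one document per dealer
would be along the lines of:

  fq={!collapse field=dealer}

It runs as a post filter, which is why it copes better with a high number of
distinct groups.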
Hi,
First of all, the wiki page you refer to is *not* the official ref guide. The
official one can be found here:
https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide
The wiki you found is a community-edited wiki, and may talk about ideas or
patches.
The authentication y
If using stopwords with edismax, please make sure that ALL fields referred to
in "qf" have stopwords defined in the fieldType, and also that the stopword
dictionary is the SAME for all these. This way you will not encounter the
infamous edismax+stopwords bug mentioned in
https://issues.apache.org/j
There is no Solr feature that would break up your HTML file - you will have
to do that yourself, either before you send the file to Solr or by
developing a custom update processor that extracts the sections and directs
each to a specific field for the language. The former is probably easier
sin
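If you go the custom-update-processor route, the skeleton would look roughly
like this (a sketch only: the field names and the HTML-splitting helper are
hypothetical, and a matching UpdateRequestProcessorFactory plus an
updateRequestProcessorChain entry in solrconfig.xml would still be needed):

  import java.io.IOException;
  import org.apache.solr.common.SolrInputDocument;
  import org.apache.solr.update.AddUpdateCommand;
  import org.apache.solr.update.processor.UpdateRequestProcessor;

  // Splits a multi-country HTML field into per-country fields before indexing.
  public class CountrySectionProcessor extends UpdateRequestProcessor {

    public CountrySectionProcessor(UpdateRequestProcessor next) {
      super(next);
    }

    @Override
    public void processAdd(AddUpdateCommand cmd) throws IOException {
      SolrInputDocument doc = cmd.getSolrInputDocument();
      Object html = doc.getFieldValue("content_html"); // hypothetical source field
      if (html != null) {
        // extractSection would parse the HTML (e.g. with jsoup) and return
        // only the markup for the given country; its body is elided here.
        doc.setField("content_fr", extractSection(html.toString(), "fr"));
        doc.setField("content_ch", extractSection(html.toString(), "ch"));
      }
      super.processAdd(cmd); // hand the document on to the rest of the chain
    }

    private String extractSection(String html, String country) {
      return html; // placeholder for the real HTML parsing
    }
  }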
> I think that we should add which version includes which parameters to the
> Collections API wiki page. A new 'migrate' collection API to split all
> documents with a route key into another collection was introduced in Solr
> 4.7.0
Should not be necessary, since the top of every Confluence page read
Hi,
It seems you are trying to reindex from one server to the other.
Be aware that it could be easier for you to simply copy the whole index folder
over to your 4.6.1 server and start Solr as it will be able to read your 3.x
index. This is unless you also want to do major upgrades of your schema or
upda
Hi;
I think that we should add which version includes which parameters to the
Collections API wiki page. A new 'migrate' collection API, to split all
documents with a route key into another collection, was introduced in Solr
4.7.0.
Thanks;
Furkan KAMACI
2014-03-25 11:51 GMT+02:00 Cihat güzel :
> hi
Migrate is new in Solr 4.7.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
25. mars 2014 kl. 10:51 skrev Cihat güzel :
> hi all,
>
> I have a test for document migrate. I followed this url:
> https://cwiki.apache.org/confluence/display/solr/Collections+API#Collection
On 25 March 2014 15:59, Michael Clivot wrote:
> Hello,
>
> I have the following issue and need help:
>
> One HTML file has different parts for different countries.
> For example:
>
> Address for France and Benelux
> Address for Switzerland
>
> Depending on a
Hello,
I have the following issue and need help:
One HTML file has different parts for different countries.
For example:
Address for France and Benelux
Address for Switzerland
Depending on a parameter, I show or hide the parts on the website
Logically, all parts are in t
1. No; if the IndexReader is on, I get the same error message from CheckIndex.
2. It doesn't do anything but give the error message I posted before, then
quit. The full print of the error trace is:
Opening index @ E:\...\zookeeper\solr\collection1\data\index
ERROR: could not read any segments file
I searched for a way to index only the content/text part of a PDF (without all
the other fields Tika creates), and I found the "solution" with "uprefix" =
ignored_ and an ignored_* dynamic field.
The problem is that uprefix only works on fields that are not specified in the
schema. In my schema I specified two fields (id and
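For context, the combination usually meant by that "solution" (as in the stock
example config; handler path and mappings may differ in your setup) is:

  <!-- solrconfig.xml: prefix unknown Tika fields with ignored_ -->
  <requestHandler name="/update/extract"
                  class="solr.extraction.ExtractingRequestHandler">
    <lst name="defaults">
      <str name="uprefix">ignored_</str>
      <str name="fmap.content">text</str>
    </lst>
  </requestHandler>

  <!-- schema.xml: swallow everything prefixed ignored_ -->
  <fieldType name="ignored" class="solr.StrField" indexed="false" stored="false"/>
  <dynamicField name="ignored_*" type="ignored" multiValued="true"/>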
hi all,
I have a test for document migrate. I followed this url:
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api12Migratedocumentstoanothercollection
I am trying this on Solr 4.6.1. I have two collections (collection1 and
collection2) and two shards. My collection1
1. Luke: if you leave the IndexReader on, does the index even open? Can you
access the CheckIndex?
2. The command line CheckIndex: what does the CheckIndex -fix do?
On Tue, Mar 25, 2014 at 10:54 AM, zqzuk wrote:
> Thank you.
>
> I tried Luke with IndexReader disabled, however it seems the index
Thank you.
I tried Luke with the IndexReader disabled; however, it seems the index is
completely broken, as it complains "ERROR: java.lang.Exception: there is
no valid Lucene index in this directory."
Sounds like I am out of luck, is it so?
my_index(one core):
id,dealer,productName,amount,region
1,A1,iphone4,400,east
2,A1,iphone4s,450,east
3,A2,iphone5s,550,east
..
4,A1,iphone4,400,west
5,A1,iphone4s,450,west
6,A3,iphone5s,550,west
..
- I'd like to get which dealers sell the 'iphone' in both the 'east' and
the 'west'
pl/sql
Oh, somehow missed that in your original e-mail. How do you run the
CheckIndex? Do you pass the -fix option? [1]
You may want to try Luke [2] to open the index without opening the IndexReader
and run the Tools->Check Index tool from Luke.
[1] http://java.dzone.com/news/lucene-and-solrs-checkindex