If we search only "black*" it works, but when we use the search text "black
cat*", "(black cat)*", or "(black cat*)*" it comes back blank.
Thank you in advance
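A hedged sketch of one common fix: the stock lucene parser does not expand
wildcards inside a quoted phrase, which would explain the blank results
above. The complexphrase parser shipped with Solr 4.8+ does support them.
Assuming SolrJ and a field named "text":

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class StartsWithPhrase {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
            // "black" followed by any term starting with "cat"; the stock
            // parser treats the * inside quotes literally and usually
            // finds nothing.
            SolrQuery q = new SolrQuery("{!complexphrase inOrder=true}text:\"black cat*\"");
            System.out.println(solr.query(q).getResults().getNumFound() + " matches");
        }
    }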
Dear Erick,
I remember that some time ago somebody asked what the point is of modifying
Solr to use HDFS for storing indexes. As far as I remember, somebody told
him that integrating Solr with HDFS has two advantages: 1) getting Hadoop
replication and HA, and 2) using the indexes and Solr documents for other purposes.
Dear Erick,
Hi,
Thank you for your reply. Yes, I am aware that SolrJ is my last option. I
was thinking about raw I/O operations, so according to your reply that is
probably not applicable. What about the Lily project that Michael
mentioned? Is that considered SolrJ too? Are you aware of Cloudera
On 8/5/14, 8:36 AM, Rich Cariens wrote:
Of course this is extremely primitive and basic, but I think it would be
possible to write a CharFilter or TokenFilter that inspects the entire
TokenStream to guess the language(s), perhaps even noting where languages
change. Language and position information
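A minimal sketch of that TokenFilter idea, where a crude Unicode-block
check stands in for a real per-token language detector (the class and the
heuristic are hypothetical):

    import java.io.IOException;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

    /** Tags each token's type with a guessed language code. */
    public final class LanguageTaggingFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
        private final TypeAttribute typeAtt = addAttribute(TypeAttribute.class);

        public LanguageTaggingFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!input.incrementToken()) {
                return false;
            }
            // A real detector would use more context (n-grams, neighboring
            // tokens); script ranges at least separate CJK/Cyrillic/Latin.
            typeAtt.setType(guessLanguage(termAtt.buffer(), termAtt.length()));
            return true;
        }

        private static String guessLanguage(char[] buf, int len) {
            for (int i = 0; i < len; i++) {
                Character.UnicodeBlock b = Character.UnicodeBlock.of(buf[i]);
                if (b == Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS) return "zh";
                if (b == Character.UnicodeBlock.CYRILLIC) return "ru";
            }
            return "en";
        }
    }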
Hi Peter,
It seems like a bug to me, too. Please file a JIRA ticket if you can
so that someone can take it.
Koji
--
http://soleami.com/blog/comparing-document-classification-functions-of-lucene-and-mahout.html
(2014/08/05 22:34), Peter Keegan wrote:
When there are multiple 'external file field
What you haven't told us is what you mean by "modify the
index outside Solr". SolrJ? Using raw Lucene? Trying to modify
things by writing your own codec? Standard Java I/O operations?
Other?
You could use SolrJ to connect to an existing Solr server and
both read and modify at will from your M/R job
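For instance, a rough SolrJ 4.x sketch of that read-then-modify loop (the
field names are hypothetical, and atomic "set" updates assume <updateLog/>
is enabled in solrconfig.xml):

    import java.util.Collections;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrInputDocument;

    public class TouchDocs {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
            QueryResponse rsp = solr.query(new SolrQuery("category:book").setRows(100));
            for (SolrDocument doc : rsp.getResults()) {
                SolrInputDocument update = new SolrInputDocument();
                update.addField("id", doc.getFieldValue("id"));
                // atomic "set": only this field changes, server-side
                update.addField("reviewed_b", Collections.singletonMap("set", true));
                solr.add(update);
            }
            solr.commit();
        }
    }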
Thanks, Joel. I created SOLR-6323.
On Tue, Aug 5, 2014 at 10:38 AM, Joel Bernstein wrote:
> I updated the docs for now. But I agree this paging issue needs to be
> handled transparently. Feel free to create a jira issue for this or I can
> create one when I have time to start looking into it.
>
Hey Erick, I think you were right: there was a mix-up in the schemas and
that was generating the error on some of the documents.
Thanks for the help, guys!
2014-08-05 1:28 GMT-03:00 Erick Erickson :
> Hmmm, I just tried this with a 4.x build and I can update the document
> multiple times without a problem
Actually I am going to do some analysis on the Solr data using MapReduce.
For this purpose it might be necessary to change some parts of the data or
add new fields from outside Solr.
On Tue, Aug 5, 2014 at 5:51 PM, Shawn Heisey wrote:
> On 8/5/2014 7:04 AM, Ali Nazemian wrote:
> > I changed Solr 4.9 to write the index and data on HDFS.
You can also have a sliding re-ranking horizon. That is how we did it in
Ultraseek.
http://observer.wunderwood.org/2007/04/04/progressive-reranking/
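A tiny sketch of the idea (the helper and horizon size are hypothetical):
rather than re-ranking a fixed global top-N, re-rank a window that slides
with the requested page:

    // Re-rank whatever covers the current page plus a fixed margin, so
    // late pages still fall inside the re-ranked window instead of
    // dropping off a fixed top-N horizon.
    static int slidingReRankDocs(int start, int rows, int horizon) {
        return start + rows + horizon;
    }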
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
On Aug 5, 2014, at 9:38 AM, Joel Bernstein wrote:
> I updated the
I updated the docs for now. But I agree this paging issue needs to be
handled transparently. Feel free to create a jira issue for this or I can
create one when I have time to start looking into it.
Joel Bernstein
Search Engineer at Heliosearch
On Tue, Aug 5, 2014 at 12:04 PM, Adair Kovac wrote:
Probably the "most correct" way to modify the index would be to use the
Solr REST API to push your changes out.
Another thing you might want to look at is Lilly. Basically it's a way to
set up a Solr collection as an HBase replication target, so changes to your
HBase table would automatically prop
Thanks, great explanation! Yeah, if it keeps the current behavior, added
documentation would be great.
Are there any other features that expect parameters to change as one pages?
If not, I'm concerned that it might be hard to support for clients that
assume only the index params will change. It also
In this case, I recommend the approach this tutorial uses:
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
Basically the idea is that you index the data a few different ways and then
use edismax to query them all with different boosts. You'd use the stored
version
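A hedged SolrJ sketch of the query side of that pattern (the three field
names are illustrative stand-ins for the tutorial's several-ways-indexed
fields, not your schema):

    // the user's typed prefix, e.g. from the search box
    SolrQuery q = new SolrQuery("black c");
    q.set("defType", "edismax");
    // same source text indexed three ways; exact matches outrank
    // edge-ngram prefix matches, which outrank phonetic matches
    q.set("qf", "title_exact^10 title_edge^5 title_phonetic^1");
    q.setRows(10);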
I found this solution, but when I test it nothing shows up in the suggestions:
    fuzzySuggest
    org.apache.solr.spelling.suggest.Suggester
    org.apache.solr.spelling.suggest.fst.FuzzyLookupFactory
    suggestField
    suggestFolders
    true
    true
    texts
    false
    2
    sug
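One hedged guess about the empty suggestions (the XML markup of the config
above was stripped in the archive, so the parameter names are inferred): a
Suggester-backed dictionary returns nothing until its lookup has been
built, either via buildOnCommit or an explicit build request. With SolrJ,
assuming a request handler registered at /suggest:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class SuggestCheck {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
            SolrQuery q = new SolrQuery();
            q.setRequestHandler("/suggest");              // assumed handler path
            q.set("spellcheck", "true");
            q.set("spellcheck.dictionary", "fuzzySuggest");
            q.set("spellcheck.q", "sug");                 // prefix to complete
            // One-time build: without this (or buildOnCommit=true in the
            // config) a Suggester lookup stays empty and returns nothing.
            q.set("spellcheck.build", "true");
            System.out.println(solr.query(q).getSpellCheckResponse().getSuggestions());
        }
    }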
I've started a GitHub project to try out some cross-lingual analysis ideas (
https://github.com/whateverdood/cross-lingual-search). I haven't played
over there for about 3 months, but plan on restarting work there shortly.
In a nutshell, the interesting component
("SimplePolyGlotStemmingTokenFilter
On 8/5/2014 7:31 AM, Jako de Wet wrote:
> Thanks for the insight. Why the size increase when not specifying the clean
> parameter then? The PK for the documents remains the same throughout the
> whole import process.
>
> Should a full optimize combine all the results into one and decrease the
> physical size of the core?
Solution found:
I was using the SimplePostTool utility to crawl and post documents to Solr
with the default example settings (except for having added a few file types
to be indexed).
Instead of finding a field that exactly matched the name of the document, I
used the resourcename text field, which was
When there are multiple 'external file field' files available, Solr will
reload the last one (lexicographically) with a commit, but only if changes
were made to the index. Otherwise, it skips the reload and logs: "No
uncommitted changes. Skipping IW.commit." Has anyone else noticed this? It
seems
Yeah, that's true, I created this index just for autocomplete.
Here is my schema:
Then I use "suggestField" for autocomplete like I mentioned above.
Do you have any other configuration that can do what I need?
2014-08-05 15:19 GMT+02:00 Michael Della Bitta-2 [via Lucene] <
ml-node+s472066n415
On 8/5/2014 6:06 AM, rockstar007 wrote:
> Is there any way to get the request count per hour or per day in
> Solr? Thanks, RR
There is no information about requests per hour or per day, but the
cumulative number of requests is available; if you track it yourself on an
hourly basis, you can calculate it.
It's in t
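A hedged SolrJ sketch of that self-tracking approach (handler and stat
names vary by version and config): read the cumulative "requests" counter
from the per-core mbeans endpoint every hour and diff consecutive readings:

    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.request.QueryRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    public class RequestCounter {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
            ModifiableSolrParams p = new ModifiableSolrParams();
            p.set("stats", "true");
            p.set("cat", "QUERYHANDLER");
            QueryRequest req = new QueryRequest(p);
            req.setPath("/admin/mbeans");   // per-core stats, same data as the admin UI
            NamedList<Object> rsp = solr.request(req);
            // Drill into rsp for your handler (e.g. "/select") and read its
            // cumulative "requests" stat; requests per hour is the difference
            // between this reading and the one stored an hour earlier.
            System.out.println(rsp);
        }
    }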
Hi Shawn,
Thanks for the insight. Why the size increase when not specifying the clean
parameter then? The PK for the documents remains the same throughout the
whole import process.
Should a full optimize combine all the results into one and decrease the
physical size of the core?
On Tue, Aug 5, 2
On 8/5/2014 7:20 AM, Jako de Wet wrote:
> I have a Solr index that has 20+ million products; the core is about 70GB.
>
> What I would like to do is a weekly delta-import, but it seems to be
> growing in size each week. (Currently it's running a full-import +
> clean=false)
>
> Shouldn't the Delta
On 8/5/2014 7:04 AM, Ali Nazemian wrote:
> I changed Solr 4.9 to write the index and data on HDFS. Now I am going to
> connect to that data from outside Solr to change some of the
> values. Could somebody please tell me how that is possible? Suppose I am
> using HBase over HDFS to make these
Hi everyone
I have a Solr index that has 20+ million products; the core is about 70GB.
What I would like to do is a weekly delta-import, but it seems to be
growing in size each week. (Currently it's running a full-import +
clean=false)
Shouldn't the Delta-Import with the Clean=True option import
Unless I'm mistaken, it seems like you've created this index specifically
for autocomplete? Or is this index used for general search also?
The easy way to understand this question: Is there one entry in your index
for each term you want to autocomplete? Or are there multiple entries that
might con
Hello,
Did you find any solution to this problem?
Regards
2014-08-04 16:16 GMT+02:00 Michael Della Bitta-2 [via Lucene] <
ml-node+s472066n4150990...@n3.nabble.com>:
> How are you implementing autosuggest? I'm assuming you're querying an
> indexed field and getting a stored value back. But the
Dear all,
Hi,
I changed Solr 4.9 to write the index and data on HDFS. Now I am going to
connect to that data from outside Solr to change some of the
values. Could somebody please tell me how that is possible? Suppose I am
using HBase over HDFS to make these changes.
Best regards.
--
A.Nazem
Is there any way to get the request count per hour or per day in
Solr? Thanks, RR
The comment in the code reads slightly differently:
// This enusres that reRankDocs >= docs needed to satisfy the result set.
reRankDocs = Math.max(start+rows, reRankDocs);
I think you're right, though, that this is confusing. The way the
ReRankingQParserPlugin works is that it grabs the top X documents
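Concretely, a hedged example of how that line interacts with paging (the
query and rerank values are hypothetical):

    SolrQuery q = new SolrQuery("black cat");
    q.setStart(20);
    q.setRows(10);
    // reRankDocs=25 is silently widened to max(start + rows, reRankDocs)
    // = max(30, 25) = 30, so paging deeper changes which documents fall
    // inside the re-ranked window, which is the surprise discussed above.
    q.set("rq", "{!rerank reRankQuery=$rqq reRankDocs=25 reRankWeight=3}");
    q.set("rqq", "category:fiction");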