Hi, all.
I am trying to use Solr to search over two Lucene indexes. I followed
the Solr tutorial and tested the distributed search example. It works.
Then I used my own Lucene indexes. Search in each Solr instance works
and returns the expected result. But when I do a distributed search usi
Lance Norskog-2 wrote:
>
> The PatternReplace and HTMLStrip tokenizers might be the right bet.
> The easiest way to go about this is to make a bunch of text fields
> with different analysis stacks and investigate them in the Schema
> Browser. You can paste an HTML document into the text box and s
The PatternReplace and HTMLStrip tokenizers might be the right bet.
The easiest way to go about this is to make a bunch of text fields
with different analysis stacks and investigate them in the Schema
Browser. You can paste an HTML document into the text box and see
exactly how the words & markup ge
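For reference, one such text field with an HTML-stripping analysis stack might be declared in schema.xml roughly like this (a sketch only; the type name `text_html` is made up, and the factory classes are Solr's stock analyzers):

```xml
<fieldType name="text_html" class="solr.TextField">
  <analyzer>
    <!-- Remove HTML/XML markup before tokenizing -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Declaring a few variants of this type and pasting the same HTML fragment into the Schema Browser / analysis page shows exactly which tokens each stack produces.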
If you are unhappy with the performance overhead of a function boost,
you can push it into a field query by boosting date ranges.
You would group in date ranges: documents in September would be
boosted 1.0, October 2.0, November 3.0 etc.
On 6/5/10, Asif Rahman wrote:
> Thanks everyone for your
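A sketch of what such range boosts might look like as dismax boost queries (the field name `pubdate` and the boost values are illustrative, not from the original message):

```
bq=pubdate:[2010-09-01T00:00:00Z TO 2010-10-01T00:00:00Z]^1.0
bq=pubdate:[2010-10-01T00:00:00Z TO 2010-11-01T00:00:00Z]^2.0
bq=pubdate:[2010-11-01T00:00:00Z TO 2100-01-01T00:00:00Z]^3.0
```

Each `bq` parameter is an ordinary range query, so the boost is paid at query time as a field query rather than as a per-document function evaluation.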
And for 'present', you would pick some time far in the future:
2100-01-01T00:00:00Z
On 6/5/10, Israel Ekpo wrote:
> You need to make each document added to the index a 1 to 1 mapping for each
> company and consultant combo
>
>
> stored="true" required="true"/>
> stored="tru
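As a purely hypothetical sketch of such a denormalized schema (the field definitions above are truncated, so these field names are invented):

```xml
<field name="company"    type="string" indexed="true" stored="true" required="true"/>
<field name="consultant" type="string" indexed="true" stored="true" required="true"/>
```

Each indexed document would then represent exactly one company/consultant pair, which is the 1-to-1 mapping described above.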
Yonik,
Is there any documentation where I can read more about the big core + small
core setup?
One issue for me is that I don't just add new documents. Many of the changes are
updates to existing documents, such as updating the popularity score of a
document. Would the big core + small core s
Frank, w.r.t. features, you may draw a lot of inspiration from these two sites:
1. http://mumbai.burrp.com/
2. http://askme.in/
Both these products are Indian local search applications. #1 primarily
focuses on the eating out domain. All the search/suggest related features on
these sites are po
Hi,
I'm playing with SOLR as the search engine for my local search site. I'm
primarily focused on restaurants right now. I'm working with the following
data attributes:
Name - Restaurant name
Cuisine - a list of 1 or more cuisines, e.g. Italian, Pizza
Features - a list of 1 or more features - Op
Ok, I will have a look at a distributed search, multi-core Solr solution.
Thank you, Yonik.
On Sun, Jun 6, 2010 at 8:54 PM, Yonik Seeley wrote:
> On Sun, Jun 6, 2010 at 1:12 PM, Furkan Kuru wrote:
> > We try to provide real-time search. So the index is changing almost in
> every
> > minute.
> >
>
Hi Solr gurus,
I'm wondering if there is an easy way to keep the targets of hyperlinks from
a field which may contain HTML fragments, while stripping the HTML.
e.g. if I had a field that looked like this:
"This is the entire content of my field, but http://example.com/ some of
the words are a
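One possible (untested) sketch: use a PatternReplaceCharFilterFactory to copy each href URL out as plain text before HTMLStrip removes the markup. The regex below is illustrative only and would need tuning for real-world HTML:

```xml
<fieldType name="text_keep_links" class="solr.TextField">
  <analyzer>
    <!-- Replace each <a ...> open tag with its href URL as plain text -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="&lt;a\s[^&gt;]*href=&quot;([^&quot;]*)&quot;[^&gt;]*&gt;"
                replacement=" $1 "/>
    <!-- Then strip the remaining markup, including the closing </a> -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```

This keeps both the link target and the anchor text as searchable tokens while everything else is stripped as usual.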
Using the Zoie/Bobo combination gives you realtime faceting. (Lucene based)
http://sna-projects.com/zoie/
http://sna-projects.com/bobo/
wiki write-up:
http://snaprojects.jira.com/wiki/display/BOBO/Realtime+Faceting+with+Zoie
We can take this over to the zoie/bobo mailing list if you have questio
On Sun, Jun 6, 2010 at 1:12 PM, Furkan Kuru wrote:
> We try to provide real-time search. So the index is changing almost in every
> minute.
>
> We commit for every 100 documents received.
>
> The facet search is executed every 5 mins.
OK, that's the problem - pretty much every facet search is reb
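One common mitigation (a sketch; the listener goes in solrconfig.xml, and the field name `category` is made up) is to warm the facet caches whenever a new searcher opens, so user-facing queries don't pay the rebuild cost after each commit:

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="facet">true</str>
      <str name="facet.field">category</str>
    </lst>
  </arr>
</listener>
```

With commits every 100 documents this still rebuilds often, so lengthening the commit interval may matter as much as warming.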
We try to provide real-time search, so the index changes almost every
minute.
We commit for every 100 documents received.
The facet search is executed every 5 minutes.
Here is the stats result after a facet search with the normal facet.method=fc (it
took 95 seconds):
*name:* fieldValueCache *cl
On Sun, Jun 6, 2010 at 7:38 AM, Furkan Kuru wrote:
> facet.limit = default value 100
> facet.mincount is 1
>
> The document count that matches the query is 8-10K on average. I did not
> count the terms (maybe using facet.limit=-1 and facet.mincount=1)
>
> My index entirely fits into memo
facet.limit = default value 100
facet.mincount is 1
The document count that matches the query is 8-10K on average. I did not
count the terms (maybe using facet.limit=-1 and facet.mincount=1)
My index entirely fits into memory.
On Sun, Jun 6, 2010 at 5:10 AM, Andy wrote:
> This is s
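For what it's worth, counting the distinct terms that way might look like this (the core URL and the field name `tag` are made up for illustration):

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=tag&facet.limit=-1&facet.mincount=1
```

The number of entries returned under facet_counts would then equal the number of distinct terms among the matching documents.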