Hi,
I have a requirement where a user enters the acronym of a word, and then the
search results should come back for the expanded word. Let us say the user
enters 'TV'; the search results should come back for 'Television'.
Is the synonym filter the way to achieve this?
Any inputs?
Regards,
Siva
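For what it's worth, this kind of acronym expansion is usually done with
solr.SynonymFilterFactory in the field's index-time analyzer, backed by a
synonyms file. A minimal sketch (the field type name and the synonyms.txt
entry are only examples, not taken from your schema):

    <fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- expand acronyms/synonyms into the index so either form matches -->
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
                ignoreCase="true" expand="true"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

with a line like this in synonyms.txt:

    TV, Television

Documents containing "Television" then also get the token "TV" (and vice
versa) at index time, so a search for 'TV' finds them.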
Apologies for starting a new thread again; my mailing list subscription didn't
get finalized until after Yonik's response.
Using "Field1:Val1 AND (*:* NOT Field2:Val2)" works, thanks.
Does my original query "Field1:Value1 AND (NOT Field2:Val2)" fall into the
"need the *:* trick if all of the clauses of a boolean query are negative" case?
In addition, my index has only two stored fields, id and price; the other
fields are only indexed. I increased the document and query caches. The EC2
m2.4xlarge instance has 8 cores and 68GB of memory, and the total index size
is about 100GB.
I split my docs into 100 indexes and deployed those 100 indexes on 10 EC2
m2.4xlarge instances as Solr shards, which means each instance has 10 Solr
cores. Search alone takes 4 to 10 seconds when I test with a hundred concurrent
threads, and now I have 1000 online users per second, so the users must wait
for more…
On Nov 14, 2010, at 3:02pm, Lance Norskog wrote:
Yes, the ExtractingRequestHandler uses Tika to parse many file
formats.
Solr 1.4.1 uses a previous version of Tika (0.6 or 0.7).
Here's the problem with Tika and extraction utilities in general:
they are not perfect. They will fail on some files…
Nowhere (unless I overlooked it) do you ever populate city_search
in the first place; it's simply defined.
Also, I don't think (but check it) that <copyField> is chainable.
I don't *think* that copying from city_search
will populate citytext_search. Ahmet's suggestion to do two
<copyField>s with source="city" is spot-on.
Best
Erick
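For reference, the two directives Erick and Ahmet are describing would look
something like this in schema.xml (field names taken from this thread):

    <copyField source="city" dest="city_search"/>
    <copyField source="city" dest="citytext_search"/>

Both copies pull from the original city field; there is no need to chain
city -> city_search -> citytext_search.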
Yes, the ExtractingRequestHandler uses Tika to parse many file formats.
Solr 1.4.1 uses a previous version of Tika (0.6 or 0.7).
Here's the problem with Tika and extraction utilities in general: they
are not perfect. They will fail on some files. In the
ExtractingRequestHandler's case, there i
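For anyone following along, the usual way to push a file through the
ExtractingRequestHandler is roughly the following (URL, id and file path are
placeholders for your own setup):

    curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" \
         -F "myfile=@/path/to/document.pdf"

Tika detects the file format and extracts the text, which Solr then indexes
into whatever content field the handler is configured to use.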
Here is a separate configuration: use separate Solr instances for
indexing and querying, both pointing to the same data directory. A 'commit'
to the query Solr reloads the index. This works in read-only mode; for
production use I would run the indexer and the querier under different
permissions so that…
This feature would make the ReplicationHandler more robust in its own
practice of "reserving" previous commit points, by pushing that code out
into Solr proper.
Jason Rutherglen wrote:
The timed deletion policy is a bit too abstract, as is keeping a
numbered limit of commit points. How would
> but I don't understand why it's not indexed.
Probably something wrong with data-config.xml.
> So you can see that the city field DOES index some data,
> whereas the city_search and citytext_search have NO data at all...
Then populate these two fields from city via copyField. It is 100% legal
to have more than one copyField with the same source…
The timed deletion policy is a bit too abstract, as is keeping a
numbered limit of commit points. How would one know what they're
rolling back to when a numbered limit is defined?
I think committing to a name and being able to roll back to it in Solr
is a good feature to add.
On Fri, Nov 12, 2010 at 2:
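For context, these are the commit-retention knobs the stock solrconfig.xml
already exposes via solr.SolrDeletionPolicy; the values below are only
illustrative:

    <deletionPolicy class="solr.SolrDeletionPolicy">
      <!-- keep this many commit points around -->
      <str name="maxCommitsToKeep">1</str>
      <str name="maxOptimizedCommitsToKeep">0</str>
      <!-- or expire commit points older than this (the "timed" policy) -->
      <str name="maxCommitAge">30MINUTES</str>
    </deletionPolicy>

Committing to a name and rolling back to it, as suggested above, would be
something layered on top of (or instead of) these settings.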
In addition, I had tried and since backed away from (on Solr) indexing
heavily while also searching on the same server. This would lock up
segments and searchers longer than the disk space would allow. I
think that part of Solr can be rewritten to better handle this N/RT
use case as there is no r
Ok, that makes sense ;)
but I don't understand why it's not indexed.
As far as I can tell, I've defined the "city_search" field exactly the same as
"city" in the schema.xml:
So I checked the schema.jsp page you suggested.
When I click on the respective fields under Fields, I get this output:
Field: city
Field Type
Hi,
Thank you! I got it working after you jarred my brain. Of course, the
location of the solr instance is arbitrary/logical to tomcat. Sheesh, I feel
kind of small, now. Anyway, I was able to clearly see my mistake from your
information.
As with all the help I get from here, I posted my fix/walkthrough…
Ok, more detail: I was testing using the NoMergePolicy in Solr. As
Hoss pointed out in another thread, NoMergePolicy has no 0-argument
constructor, and so throws an exception while loading the core.
When there is no existing data/index/ directory, Solr creates a new
index/ directory at the beginning…
> both queries give me 0 results...
Then your field(s) are not populated. You can debug on /admin/dataimport.jsp
or /admin/schema.jsp
both queries give me 0 results...
--- On Sun, 11/14/10, PeterKerk wrote:
> From: PeterKerk
> Subject: Re: full text search in multiple fields
> To: solr-user@lucene.apache.org
> Date: Sunday, November 14, 2010, 8:52 PM
>
> Ok, thanks. it works now for title and description fields.
> :)
>
> But now I also need it for the city
Ok, thanks. It works now for the title and description fields. :)
But now I also need it for the city, and I can't get that to work, even
though I'm doing the exact same thing (or so I think).
I now have the code below for the city field.
(I'm defining the city field twice in my data-config and schema.xml, but t…
On Sun, Nov 14, 2010 at 4:17 AM, Leonardo Menezes
wrote:
> try
> Field1:Val1 AND (*:* NOT Field2:Val2), that should work ok
That should be equivalent to Field1:Val1 -Field2:Val2.
You only need the *:* trick if all of the clauses of a boolean query
are negative.
-Yonik
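To spell that out with this thread's field names: in
"Field1:Val1 AND (NOT Field2:Val2)" the parenthesised sub-query contains only a
negative clause, and with Solr's lucene query parser a purely negative
sub-query matches nothing on its own, so the whole conjunction returns nothing.
Any of these forms should behave as intended:

    Field1:Val1 -Field2:Val2
    Field1:Val1 AND NOT Field2:Val2
    Field1:Val1 AND (*:* NOT Field2:Val2)

The *:* is only needed when all of the clauses of a (sub-)query are negative,
which is exactly what happens inside the parentheses above.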
Alphanumeric + "_" + "%" + "."
So to say: "John_Smith", "John Smith", "John_B._Smith" and "John 44 Smith"
are all possible values.
On Sun, Nov 14, 2010 at 11:46 PM, Ahmet Arslan wrote:
>
> --- On Sun, 11/14/10, Parsa Ghaffari wrote:
>
> > From: Parsa Ghaffari
> > Subject: Re: Solr TermsCompon
--- On Sun, 11/14/10, Parsa Ghaffari wrote:
> From: Parsa Ghaffari
> Subject: Re: Solr TermsComponent: space in term
> To: solr-user@lucene.apache.org
> Date: Sunday, November 14, 2010, 5:06 PM
> Hi Ahmet,
>
> This is the fieldType for "name":
>
> <fieldType ... class="solr.TextField"
> positionIncrementGap=…
Hi Ahmet,
This is the fieldType for "name":
and:
there's no ShingleFilterFactory. Also, after changing the analysis parameters
in the schema, should one re-index the table?
On Sun, N
Hi,
I have up to now focussed on Jetty as it's already bundled with solr.
The main issue there seems to be the way it's unbundled by Debian; I
figure things might be similar with Tomcat, depending on how entangled
configuration is there.
Before I dig deeper into the Tomcat option: would you mind
> I'm using Solr 1.4.1 and I'm willing to use TermsComponent
> for AutoComplete.
> The problem is, I can't get it to match strings with spaces
> in them. So to
> say,
>
> terms.fl=name&terms.lower=david&terms.prefix=david&terms.lower.incl=false&indent=true&wt=json
>
> matches all strings starting
Hi,
and thanks for your hints. I've done some additional research and found
that there doesn't really seem to be any possibility of an embedded solr
server in solrpy.
Jetty, then. It'd probably all be kinda easy if it weren't for the way
things are unbundled in debian. I've recently posted to the
> terms.fl=name&terms.lower=david%20&terms.prefix=david%20&terms.lower.incl=false&indent=true&wt=json
>
> it doesn't match all strings starting with "david ". Is it
> meant to be that
> way?
This is about the fieldType of the name field. What is it? If it does have
ShingleFilterFactory in it, then this…
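For anyone hitting the same thing: terms.prefix can only match a prefix of a
single indexed term, so a prefix containing a space can only match if the index
actually contains multi-word terms. One way to get those is a shingle-producing
analysis chain; a rough sketch (the type name and parameters are illustrative,
not Parsa's actual schema):

    <fieldType name="name_shingle" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- for "David Smith" this emits "david", "smith" and the shingle "david smith" -->
        <filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true"/>
      </analyzer>
    </fieldType>

After re-indexing into such a field, terms.prefix=david%20 has shingled terms
like "david smith" to match against.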
Hi folks,
I'm using Solr 1.4.1 and I'm willing to use TermsComponent for AutoComplete.
The problem is, I can't get it to match strings with spaces in them. That is
to say,
terms.fl=name&terms.lower=david&terms.prefix=david&terms.lower.incl=false&indent=true&wt=json
matches all strings starting with "
Move the solr.war file and the solrhome directory somewhere outside the Tomcat
webapps directory, e.g. /home/foo. Tomcat will generate webapps/solr
automatically. This is what I use, under
catalinaHome/conf/Catalina/localhost/solr.xml:
I also delete the dataDir entry from solrconfig.xml, so that the data dir is…
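A typical conf/Catalina/localhost/solr.xml context fragment for this layout
looks roughly like the following (the /home/foo paths are placeholders):

    <Context docBase="/home/foo/solr.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String"
                   value="/home/foo/solrhome" override="true"/>
    </Context>

Tomcat deploys the war from docBase, and the solr/home JNDI entry tells Solr
where its conf/ and data/ directories live.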
Thanks for all the responses.
Govind: To answer your question, yes, all I want to search is plain text
files. They are located in NFS directories across multiple Solaris/Linux
storage boxes. The total storage is in hundreds of terabytes.
I have just got started with Solr and my understanding is t
try
Field1:Val1 AND (*:* NOT Field2:Val2), that should work ok
On Sun, Nov 14, 2010 at 9:02 AM, Viswa S wrote:
>
> Dear Solr/Lucene gurus,
> I have run into a weird issue trying use a negative condition in my query.
> Parser: StandardQueryParser. My query: Field1:Val1 NOT Field2:Val2. Resolved as:
> Field1:Val1 -Field2:Val2…
Dear Solr/Lucene gurus,
I have run into a weird issue trying to use a negative condition in my query.
Parser: StandardQueryParser. My query: Field1:Val1 NOT Field2:Val2. Resolved as:
Field1:Val1 -Field2:Val2
The above query never returns any document, no matter how we use parentheses.
I did see some su…
Hi,
I have been using Jetty on my linux/apache webserver for about 3 weeks now.
I decided that I should change to Tomcat after realizing I will be indexing
a lot of URLs, and Jetty is good for small production sites, as noted in the
wiki. I am running into this error:
org.apache.solr.common.