Yeah, that was weird. Removing the line "forever,for ever" from my synonyms
file fixed the problem. In fact, I was having the same problem with every
doubled word like that. I decided I didn't really need the synonym filter for
that field, so I just took it out, but I'd really like to know what the
p
Hi to all,
While I am working with facets using SolrJ, I am using a string field in the schema
to avoid splitting the words (i.e., "Rekha dharshana"; previously I was getting
"rekha" as a separate word and "dharshana" as a separate word). In order to avoid this,
in the schema I use two fields to index. My schema.xml will look li
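A sketch of what such a two-field setup usually looks like (the field and type names here are assumptions, not the poster's actual schema): a string field keeps the whole name as one token for faceting, while a copyField feeds a tokenized field for searching.

```xml
<!-- hypothetical schema.xml fragment -->
<field name="author" type="string" indexed="true" stored="true"/>
<field name="author_text" type="text" indexed="true" stored="false"/>
<!-- duplicate the value into the tokenized field at index time -->
<copyField source="author" dest="author_text"/>
```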
That's pretty strange... perhaps something to do with your synonyms
file mapping "for" to a zero length token?
-Yonik
http://www.lucidimagination.com
On Mon, Sep 14, 2009 at 12:13 AM, mike anderson wrote:
> I'm kind of stumped by this one.. is it something obvious?
> I'm running the latest trunk
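For context, the kind of entry suspected here looks like the line mentioned elsewhere in the thread (a hypothetical reconstruction of the synonyms.txt rule):

```
# synonyms.txt (hypothetical): an equivalence between one token and a
# two-token phrase; if a stop filter later removes "for", the expansion
# can leave an empty token position behind.
forever,for ever
```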
The XPathRecordReader has a limit of one mapping per XPath, so copying is
the best solution.
On Mon, Sep 14, 2009 at 2:54 AM, Fergus McMenemie wrote:
>>I'm trying to import several RSS feeds using DIH and running into a
>>bit of a problem. Some feeds define a GUID value that I map to my
>>Solr ID, w
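A sketch of the copying approach (the field and path names are assumptions): map the XPath once in data-config.xml, then duplicate the value in schema.xml.

```xml
<!-- data-config.xml (hypothetical): only one mapping per XPath -->
<field column="guid" xpath="/rss/channel/item/guid"/>

<!-- schema.xml (hypothetical): copy the single mapped value into the id field -->
<copyField source="guid" dest="id"/>
```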
Replication uses HttpClient for connections. It is likely that you
will notice some CLOSE_WAIT sockets. But how many do you see?
On Mon, Sep 14, 2009 at 6:37 AM, liugang8440265 wrote:
> Hi, I have a problem with solr-replication.
>
> Every time I use the replication API to replicate the index, a TCP connecti
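To get a rough count (a sketch; netstat flags vary by platform, and this counts all CLOSE_WAIT sockets on the box, not just Solr's):

```shell
# Count TCP sockets currently in CLOSE_WAIT; prints 0 when there are none
netstat -an 2>/dev/null | grep -c CLOSE_WAIT || true
```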
I'm kind of stumped by this one.. is it something obvious?
I'm running the latest trunk. In some cases the StopFilterFactory isn't
removing the field name.
Thanks in advance,
-mike
From debugQuery (both words are in the stopwords file):
http://localhost:8983/solr/select?q=citations:for&debugQu
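For comparison, a field type using StopFilterFactory would normally look something like this (a hypothetical schema.xml fragment, not the poster's actual config):

```xml
<fieldType name="text_stop" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- removes any token listed in stopwords.txt, e.g. "for" -->
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```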
I would say once a day is a pretty good rule of thumb. If you think
this is a bit much and you have few updates, you can probably back
that off to once every couple of days, or even once a week. However, if you
have a large batch update or your query performance starts to degrade,
you will need
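A once-a-day schedule like that is often just a cron job hitting the update handler (a sketch; the host, port, and handler path are assumptions):

```
# crontab fragment (hypothetical): optimize the index at 3 a.m. daily
0 3 * * * curl -s 'http://localhost:8983/solr/update?optimize=true'
```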
Folks:
Are there good rules of thumb for when to optimize? We have a large index
consisting of approx 7M documents and we currently have it set to optimize
once a day. But sometimes there are very few changes that have been
committed during a day and it seems like a waste to optimize (esp. s
Hi,
I'd like to set up Eclipse to run Solr (in Tomcat, for example), but I am
struggling with the issue that I can't get index.jsp and other files
to be properly executed, for debugging and working on a plugin.
I've checked out Solr via the Subclipse plugin and created a Dynamic Web
Project. It seems tha
>I'm trying to import several RSS feeds using DIH and running into a
>bit of a problem. Some feeds define a GUID value that I map to my
>Solr ID, while others don't. I also have a link field which I fill in
>with the RSS link field. For the feeds that don't have the GUID value
>set, I wa
I'm trying to import several RSS feeds using DIH and running into a
bit of a problem. Some feeds define a GUID value that I map to my
Solr ID, while others don't. I also have a link field which I fill in
with the RSS link field. For the feeds that don't have the GUID value
set, I want to
Using http://localhost:8983/solr/update/csv?stream.file, is there any
way to map one of the CSV fields to the schema's unique id?
e.g. A file with 3 fields (sku, product,price):
http://localhost:8983/solr/update/csv?stream.file=products.csv&stream.contentType=text/plain;charset=utf-8&header=true
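If the CSV header names don't match the schema, one option may be the CSV handler's fieldnames parameter, which, if I remember right, overrides the header line when header=true (here assuming sku should be loaded as the unique id field):

```
http://localhost:8983/solr/update/csv?stream.file=products.csv&stream.contentType=text/plain;charset=utf-8&header=true&fieldnames=id,product,price
```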
Thanks to Jay, I have my code doing what I need it to do. If anybody
cares, this is my code:
SolrQuery query = new SolrQuery();
query.setQuery(searchTerm);
query.addFilterQuery(Chunk.SOLR_KEY_CONCEPT + ":" + concept);
query.addFilterQuery(Chunk.SOLR_KEY_CATEGORY +