Try using fieldtype "string" instead of "text" for the UserName
field. Then it will not be tokenized so it should only give exact
matches.
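A minimal sketch of what that schema.xml change might look like (the type definition uses the standard solr.StrField class; the field attributes are assumptions, not the poster's actual schema):

```xml
<!-- schema.xml: "string" fields are indexed verbatim (not tokenized),
     so queries only match the complete stored value -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>

<!-- before: <field name="UserName" type="text" indexed="true" stored="true"/> -->
<field name="UserName" type="string" indexed="true" stored="true"/>
```

Note that existing documents must be reindexed after the field type change for it to take effect.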
--
Steve
On Feb 2, 2009, at 2:27 AM, mahendra mahendra wrote:
Hi,
I have indexed my data as "custom123, customer, custom" for the
"UserName" fiel
Hi
I started rsyncd from one core's bin folder,
but it doesn't work for the other cores. How can I do this without creating
a separate port for each of the others?
Can I use the same port for my three cores?
If yes, how can I do it?
Thanks a lot,
Sunny
Wishing everybody a very good week.
Hi,
Sorry I know this exists ...
"If an API supports chunking (when the dataset is too large) multiple calls
need to be made to complete the process. XPathEntityprocessor supports this
with a transformer. If transformer returns a row which contains a field *
$hasMore* with a the value "true" the
Hello
As per several postings I noted that I can define variables
inside an invariants list section of the DIH handler of
solrconfig.xml:-
data-config.xml
/Volumes/spare/ts
I can also reference these variables within data-config.xml. This
works,
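For reference, a hedged sketch of that setup (the handler name, variable name, and path are illustrative, not Fergus's actual config):

```xml
<!-- solrconfig.xml: DIH request handler with a variable in the invariants list -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
  <lst name="invariants">
    <str name="xmlDataDir">/some/example/dir</str>
  </lst>
</requestHandler>
```

Inside data-config.xml the variable is then referenced as `${dataimporter.request.xmlDataDir}`.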
RegexTransformer does not replace the placeholders before processing the regex.
it has to be enhanced
On Mon, Feb 2, 2009 at 10:34 PM, Fergus McMenemie wrote:
> Hello
>
> As per several postings I noted that I can define variables
> inside an invariants list section of the DIH handler of
> solr
On Mon, Feb 2, 2009 at 10:34 PM, Fergus McMenemie wrote:
>
> Is there some simple escape or other syntax to be used or is
> this an enhancement?
>
I guess the problem is that we are creating the regex Pattern without first
resolving the variable. So we need to call VariableResolver.resolve on th
Yes I think what Jared mentions in the JIRA is what I was thinking about
when it is recommended to always return true for $hasMore ...
"The transformer must know somehow when $hasMore should be true. If the
transformer always give $hasMore a value "true", will there be infinite
requests made or wi
On Mon, Feb 2, 2009 at 11:01 PM, Jon Baer wrote:
> Yes I think what Jared mentions in the JIRA is what I was thinking about
> when it is recommended to always return true for $hasMore ...
>
> "The transformer must know somehow when $hasMore should be true. If the
> transformer always give $hasMore
See I think I'm just misunderstanding how this entity is supposed to be set up
... for example, using the patch on 1.3 I ended up in a loop where .n is
never set ...
Feb 2, 2009 1:31:02 PM org.apache.solr.handler.dataimport.HttpDataSource
getData
INFO: Created URL to: http://subdomain.site.com/feed.r
On Mon, Feb 2, 2009 at 9:20 PM, Jon Baer wrote:
> Hi,
>
> Sorry I know this exists ...
>
> "If an API supports chunking (when the dataset is too large) multiple calls
> need to be made to complete the process. XPathEntityprocessor supports this
> with a transformer. If transformer returns a row w
Mark,
Use a GUI (maybe a custom-built one) to read the files which are present on the Solr
server. These files can be read using a webservice/RMI call.
Do all the manipulation on the synonyms.txt contents and then make a webservice/RMI
call to save that information. After saving the information, just call RELOAD.
Chec
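The RELOAD step can be triggered over HTTP via the CoreAdmin API. This is a command fragment that needs a running multicore Solr instance; the host, port, and core name below are assumptions:

```shell
# Reload a core so the edited synonyms.txt is picked up without a restart
# (assumes CoreAdmin is enabled and the core is named "core0")
curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0'
```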
Hi all,
I recently created a Solr index to track some news articles that I follow
and I've noticed that I occasionally receive 500 errors when posting an
update. It doesn't happen every time and I can't seem to reproduce the
error. I should mention that I have another Solr index setup under the sam
Could you also provide us with the error you were getting?
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com - 702-943-7833
On Feb 2, 2009, at 1:46 PM, Derek Springer wrote:
Hi all,
I recently created a Solr index to track some news articles that I
follow
and
Der, certainly!
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
SingleInstanceLock: write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
at
org.apache.lucene.index.IndexWriter.
I apologize in advance for what's probably a foolish question, but I'm
trying to get a feel for how much memory a properly-configured Solr
instance should be using.
I have an index with 2.5 million documents. The documents aren't all that
large. Our index is 25GB, and optimized fairly often.
We'r
You shouldn't need, and don't want, to give Tomcat anywhere near 14 GB
of RAM. You also should certainly not be running out of memory with
that much RAM and that few documents. Not even close.
You want to leave plenty of RAM for the filesystem cache - so that a lot
of that 25 gig can be cac
: 1 support a query language, "songname + artist " or "artist + album" or "
: artist + album + songname", some guys would like to query like "because of
: you ne-yo". So I need to cut words in the proper way. How to modify the way
: of cutting words in solr ( recognize the song name or album or ar
: i have a doc which has more than one datefield. they are start and end. now
: i need the user to specify a date range, and i need to find all docs which
: user range is between the docs start and end date fields.
Assuming i'm understanding the question...
http://www.lucidimagination.com/search/
this patch must help
On Mon, Feb 2, 2009 at 10:49 PM, Shalin Shekhar Mangar
wrote:
> On Mon, Feb 2, 2009 at 10:34 PM, Fergus McMenemie wrote:
>
>>
>> Is there some simple escape or other syntax to be used or is
>> this an enhancement?
>>
>
> I guess the problem is that we are creating the regex
Shalin,
OK!
I got myself a JIRA account and opened solr-1000 and followed the
wiki instructions on creating a patch which I have now uploaded! Only
problem is that while the fix seems fine the test case I added to
TestFileListEntityProcessor.java fails. I need somebody who knows
what they are do
: I am trying to do just a commit via url:
: http://localhost:8084/nightly_web/es_jobs_core/update
: I have tryeid also:
: http://localhost:8084/nightly_web/es_jobs_core/update?commit=true
: And I am getting this error:
:
: 2009-01-20 11:27:50,424 [http-8084-Processor25] ERROR
: org.apache.solr.
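For what it's worth, the usual way to issue a standalone commit is to POST a `<commit/>` message to the update handler rather than a bare GET. A sketch (host and core path copied from the quoted message; this requires the server to be running):

```shell
# Issue an explicit commit by POSTing a <commit/> message to the update handler
curl 'http://localhost:8084/nightly_web/es_jobs_core/update' \
     -H 'Content-Type: text/xml' --data-binary '<commit/>'
```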
On Mon, Feb 2, 2009 at 9:20 PM, Jon Baer wrote:
> Hi,
>
> Sorry I know this exists ...
>
> "If an API supports chunking (when the dataset is too large) multiple calls
> need to be made to complete the process. XPathEntityprocessor supports this
> with a transformer. If transformer returns a row wh
Hi Sagar,
Change the dynamic field attributes (C, D and E) to stored="true" and validate.
If the above suggestion is not working, can you share your schema and
solrconfig.xml contents?
~Vikrant
Sagar Khetkade-2 wrote:
>
>
>
> Hi,
>
> I am trying out the dynamic field in schema.xml with its attribut
On Mon, Feb 2, 2009 at 10:08 PM, Fergus McMenemie wrote:
> Shalin,
>
> OK!
>
> I got myself a JIRA account and opened solr-1000 and followed the
> wiki instructions on creating a patch which I have now uploaded! Only
> problem is that while the fix seems fine the test case I added to
> TestFileLi
: Hey there, I would like to understand why distributed search doesn't suport
: facet dates. As I understand it would have problems because if the time of
: the servers is not syncronized, the results would not be exact but... In
: case I wouldn't mind if results are completley exacts... would be
Have you checked the archive for other discussions about implementing
auto-complete functionality? it's not something i deal with much, but i
know it's been discussed.
your specific requirement that things starting with an exact match be
ordered alphabetically seems odd to me ... i suspect sortin
Hi,
I am writing my first application using Solr and I was wondering if there is
any best practice or how are users implementing their JUnit or integration
tests.
Thanks!
Bruno
: I'm getting confused about the method Map
: toMultiMap(NamedList params) in SolrParams class.
toMultiMap probably shouldn't have ever been made public -- it's really
only meant to be used by toSolrParams (it's refactored out to make the code
easier to read)
: When some of your parameter is inst
: is there no other way then to use the patch?
the patch was commited a while back, but it will require experimenting
with the trunk.
: > If I understand correctly,
: > 1. You want to query for tagList:a AND tagList:b AND tagList:c
: > 2. At the same time, you want to request facets for tagList
: seems to be i cant do this. so my question is transforming to following:
:
: can i join multiple dismax queries into one? for instance if i'm looking for
: +WORD1 +(WORD2 WORD3)
: it can be translated into +WORD1 +WORD2 and +WORD1 +WORD3 query
can it be done? sure. you could do that in your c
: But we do not have an inbuilt TokenFilter which does that. Nor does
: DIH support it now . I have opened an issue for DIH
: (https://issues.apache.org/jira/browse/SOLR-980)
: Is it desirable to have TokenFilter which offers similar functionality?
Probably not (you would have to have a way of c
I've tried multiple times to unsubscribe from this list using the proper method
(mailto:solr-user-unsubscr...@lucene.apache.org), but it's not working! Can
anyone help with that?
: We use Solr1.3 and indexed some of our date fields in the format
: '1995-12-31T23:59:59Z' and as we know this is a UTC date. But we do want to
: index the date in IST which is +05:30hours so that extra conversion from
: UTC to IST across all our application is avoided.
There's no way to do thi
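If the goal is just to avoid repeated conversions in application code, one workaround (my suggestion, not from this truncated reply) is to convert IST timestamps to UTC once at index time:

```python
from datetime import datetime, timedelta, timezone

# IST is a fixed offset of UTC+05:30
IST = timezone(timedelta(hours=5, minutes=30))

def ist_to_solr_utc(dt_ist):
    """Convert a naive IST datetime to the UTC string format Solr expects."""
    utc = dt_ist.replace(tzinfo=IST).astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%dT%H:%M:%SZ")

# 1996-01-01 05:29:59 IST is 1995-12-31 23:59:59 UTC
print(ist_to_solr_utc(datetime(1996, 1, 1, 5, 29, 59)))  # -> 1995-12-31T23:59:59Z
```

The reverse conversion for display is then done in one place at query time instead of scattered across the application.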
deja vu...
http://www.nabble.com/SOLR---indexing-Date-Time-in-local-format--to21663464.html
: We use Solr1.3 and indexed some of our date fields in the format
: '1995-12-31T23:59:59Z' and as we know this is a UTC date. But we do want to
: index the date in IST which is +05:30hours so that extr
: I need to configure solr, such that it doesn't do any fancy stuff like
: adding adding wildcard characters to normal query, check for existing
: fields, etc.
:
: I've modified lucene code for Term queries(can be multiple terms) and I need
: to process only term queries. But solr modifies queries
A separate problem: when I used the DIH in December, the xpath
implementation had few features. '[...@qualifier='Date']' may not be
supported.
On Mon, Feb 2, 2009 at 9:24 AM, Noble Paul നോബിള് नोब्ळ् <
noble.p...@gmail.com> wrote:
> this patch must help
>
> On Mon, Feb 2, 2009 at 10:49 PM,
How many total values are in the faceted fields? Not just in the faceted
query, but the entire index? A facet query builds a counter array for the
entire space of field values. This can take much more RAM than normal
queries. Sorting is also a memory-eater.
On Mon, Feb 2, 2009 at 2:19 PM, Mark Mi
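As a rough back-of-envelope (the counts and counter size here are hypothetical, not measured from this index), the counter-array memory scales with the number of unique values in the faceted field:

```python
# Estimate memory for faceting: roughly one int counter per unique field value
def facet_counter_bytes(unique_values, bytes_per_counter=4):
    return unique_values * bytes_per_counter

# e.g. 10 million unique terms -> about 38 MB per concurrent facet request
mb = facet_counter_bytes(10_000_000) / (1024 * 1024)
print(round(mb, 1))  # -> 38.1
```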
I am having the same issue; I can't get unsubscribed!
On Mon, Feb 2, 2009 at 8:45 PM, Ross MacKinnon wrote:
> I've tried multiple times to unsubscribe from this list using the proper
> method (mailto:solr-user-unsubscr...@lucene.apache.org), but it's not
> working! Can anyone help with that
this syntax is supported: /record/metadata/da...@qualifier='Date']. if
I am not wrong, there is also a testcase for that
On Tue, Feb 3, 2009 at 7:20 AM, Lance Norskog wrote:
> A separate problem: when I used the DIH in December, the xpath
> implementation had few features. '[...@qualifier='Dat
Hi, no the data_added field was one per document.
2009/2/1 Erik Hatcher
> Is your date_added field multiValued and you've assigned multiple to some
> documents?
>
>Erik
>
>
> On Jan 31, 2009, at 4:12 PM, James Brady wrote:
>
> Hi,I'm following the recipe here:
>>
>> http://wiki.apache.or
http://wiki.apache.org/solr/SchemaDesign
http://wiki.apache.org/solr/LargeIndexes
http://wiki.apache.org/solr/UniqueKey
These pages are based on my recent experience and some generalizations. They
are intended for new users who want to use Solr for a major project. Please
review them and send me
The solr data field is populated properly. So I guess that bit works.
I really wish I could use xpath="//para"
>A separate problem: when I used the DIH in December, the xpath
>implementation had few features. '[...@qualifier='Date']' may not be
>supported.
>
> dateTimeFormat="MMdd" />
>
>
On Tue, Feb 3, 2009 at 11:59 AM, Fergus McMenemie wrote:
> The solr data field is populated properly. So I guess that bit works.
> I really wish I could use xpath="//para"
>
>
The limitation comes from streaming the XML instead of creating a DOM.
XPathRecordReader is a custom streaming XPath pars
Hi Solr users,
Is there a method of retrieving a field range, i.e. the min and max
values of that field's term enum?
For example I would like to know the first and last date entry of N
documents.
Regards,
-Ben
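One simple approach (my sketch, not a confirmed answer from the list): issue two rows=1 queries sorted ascending and descending on the field. The host and field name below are assumptions, and the commands need a running Solr instance:

```shell
# Min value: first document when sorted ascending on the field
curl 'http://localhost:8983/solr/select?q=*:*&rows=1&fl=date_field&sort=date_field+asc'
# Max value: first document when sorted descending
curl 'http://localhost:8983/solr/select?q=*:*&rows=1&fl=date_field&sort=date_field+desc'
```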