I thought of using the exists() function in my qf, but that is not working.
I am using the edismax query parser.
Thanks
--
Rahul Ranjan
dynamic fields in Solr and
search them later based on those fields.
--
Rahul Ranjan
Hi,
I am a newbie to Apache Solr.
We are using ContentStreamUpdateRequest to insert into Solr. For eg,
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
req.addContentStream(stream);
req.addContentStream(literal.name, na
Nevermind.. got the details from here..
http://wiki.apache.org/solr/ExtractingRequestHandler
Thanks..
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-inserting-Multivalued-filelds-tp2406612p2411248.html
Sent from the Solr - User mailing list archive at Nabble.com.
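For anyone landing on this thread later: the ExtractingRequestHandler call that ContentStreamUpdateRequest issues can also be written as a plain request URL. The sketch below uses placeholder field values (doc1, the name values), not values from this thread; each literal.<field> parameter becomes a field on the indexed document, and repeating a literal parameter supplies multiple values for a multiValued field:

```
/solr/update/extract?literal.id=doc1
    &literal.name=first+value
    &literal.name=second+value
    &commit=true
```

The content stream (the file body) is sent as the POST body of the same request.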
Hi,
I am using Solr (1.4.1) AutoSuggest feature using termsComponent.
Currently, if I type 'goo', Solr suggests words like 'google'.
But I would like to receive suggestions like 'google, google alerts, ...',
i.e., suggestions with both single and multiple terms.
Not sure, whether I need to use e
Hi
I have added the following line to both the index analyzer section and the
query analyzer section in schema.xml:
<filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true" outputUnigramIfNoNgram="true"/>
And reindexed my content. However, if I query Solr for multi-word search
term suggestions, it
hi..
thanks for your replies..
It seems I mistakenly put the ShingleFilterFactory on another field. When I put
the factory on the correct field, it works fine now.
Thanks.
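A minimal field type for shingle-based suggestions, matching the filter line discussed above, might look like the following sketch (the type name and tokenizer choice are illustrative, not taken from the thread):

```xml
<fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emits both single terms and two-word phrases -->
    <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
            outputUnigrams="true" outputUnigramIfNoNgram="true"/>
  </analyzer>
</fieldType>
```

Note that the filter must sit on the same field that terms.fl points at, which was exactly the fix in this thread.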
Hi,
I have received the following error when I try to insert a document into
Solr:
SEVERE: org.apache.solr.common.SolrException: ERROR: multiple values
encountered for non multiValued copy field id: 272327_1
In my schema.xml, I have specified the uniqueKey field as id.
In the query, I have passed literal.uni
Hi,
thanks for your reply.
I have posted that value only once.
The following are the list of values that I have posted,
literal.uniqueid=272327_1&literal.urlid=272327&literal.url=http%3A%2F%2Fblogs.edweek.org%2Fteachers%2Fbook_whisperer%2F2009%2F03%2Fa_book_in_every_backpack_1.html&literal.ti
hi,
It seems I have identified the issue.
In the code I am using
ContentStreamBase.StringStream stream = new
ContentStreamBase.StringStream(streamData);
If the streamData contains name="ID" (i.e., an ID value), then since I have
already set a copyField from uniqueid to id, it throws the error.
It seems it check
<uniqueKey>id</uniqueKey>
<copyField source="uniqueid" dest="id"/>
Here, id and uniqueid are both declared as string. I only pass uniqueid
in the query.
Please note,
Hi,
I am using Solrj as a Solr client in my project.
While searching for a few particular words, Solrj seems to take more time to
return a response (e.g., 8-12 sec), while for most other words it takes
much less time.
For eg, if I post a search url in browser, it shows t
Hi,
Thanks for your information.
One simple question; please clarify.
In our setup, we have the Solr index on one machine and the Solrj client
(Java code) on another machine. As you suggest, if the cause may be
'not enough free RAM for the OS to cache', do I need to increase
thanks for all your info.
I will try increasing the RAM and check.
thanks,
Hi,
One more query.
Currently in the autosuggestion Solr returns words like below:
googl
googl _
googl search
googl chrome
googl map
The last letters seem to be missing in the autosuggestions. I have sent the
query as
"?qt=/terms&terms=true&terms.fl=mydata&terms.lower=goog&terms.prefix=goog".
The f
hi,
We have found that the 'EnglishPorterFilterFactory' causes that issue. I believe
it is used for stemming words. Once we commented out that factory, it works
fine.
And another thing: currently I am checking how the word 'sci/tech'
will be indexed in Solr. As mentioned in my previous email, if
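A common way to keep suggestions unstemmed, shown as a sketch with illustrative field and type names, is to copyField the data into a separate suggest field whose analyzer simply omits the stemming filter:

```xml
<field name="mydata_suggest" type="text_suggest" indexed="true" stored="false"/>
<copyField source="mydata" dest="mydata_suggest"/>

<fieldType name="text_suggest" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- no EnglishPorterFilterFactory here, so terms keep their full spelling -->
  </analyzer>
</fieldType>
```

The main field keeps stemming for recall; the suggest field is used only by the TermsComponent.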
Hi,
I have indexed some terms in our Solr, and I use Solr only for searching.
Currently, I don't use highlighting, request handlers
like '/spell', or queryResponseWriters (since I use Solrj, which uses the
default javabin response writer).
I am a little concerned about, if I c
thanks for your info.
On Sat, Apr 2, 2011 at 9:51 AM, Chris Hostetter-3 [via Lucene] <
ml-node+2766175-1482636196-340...@n3.nabble.com> wrote:
>
> : I have little bit concerned about, if I comment those unused modules from
>
> : solrconfig.xml, whether it will used to increase search performance
Hi All,
I just want to share some findings which clearly identified the reason
for our performance bottleneck. We had looked into several areas for
optimization, mostly directed at Solr configuration, stored fields,
highlighting, the JVM, OS cache, etc. But it turned out that the "main" culprit
was
ected results.
Thanks Victor, I appreciate the link to the Jquery example and we will look
into it as a reference.
Regards,
Rahul.
hi,
I believe you are facing this 'Internal Server Error' while trying to index
PDF files.
Try adding the pdfbox and fontbox jars to your Solr lib folder and
restart Tomcat.
I hope it will solve the issue.
Thanks.
Hi,
I am using Solr 1.4.1 in my environment. I have not used special Solr
features such as replication/distributed searches. From the Solr 3.1 release
notes, I learned that for an upgrade we first need to upgrade our slaves and
then update the master Solr server. Since I have not used this, I
Hi,
Is there a way to find out the Solr index size for a particular document? I
am using Solrj to index the documents.
Assume, I am indexing multiple fields like title, description, content, and
few integer fields in schema.xml, then once I index the content, is there a
way to identify the index
thanks for all your inputs.
On Fri, Apr 22, 2011 at 8:36 PM, Otis Gospodnetic-2 [via Lucene] <
ml-node+2851624-1936255218-340...@n3.nabble.com> wrote:
> Rahul,
>
> Here's a suggestion:
> Write a simple app that uses *Lucene* to create N indices, one for each of
> t
Hi,
Currently I am indexing documents by directly adding files as
'req.addFile(fi);' or by sending the content of the file like
'req.addContentStream(stream);' using solrj.
Assume the Solrj client and Solr server are on different networks (i.e., the
Solr server is in a remote location); then I need to transf
observation. Is this a correct observation?
If we optimize it every day, the indexes will not be skewed, right?
Please let me know if my understanding is correct.
Regards,
Rahul
On Mon, Dec 21, 2015 at 9:54 AM, Erick Erickson
wrote:
> You'll probably have to shard before you get to the TB r
Thanks Erick!
Rahul
On Mon, Dec 21, 2015 at 10:07 AM, Erick Erickson
wrote:
> Rahul:
>
> bq: we dont want the index sizes to grow too large and auto optimzie to
> kick in
>
> Not what quite what's going on. There is no "auto optimize". What
> there is
about the performance / high availability.
Please check the post; a detailed analysis and comparison between the two
is given there.
-Rahul
On Mon, Jan 11, 2016 at 4:58 PM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> Hi guys,
>
>
>
> a customer need
commit happen on all the
nodes at the same time?
2. Can we say that the search result will always contain the documents from
before the commit, or after the commit? Or can it happen that we get new
documents from N1 and N2 but old documents (i.e., from before the commit) from N3?
Thank you,
Rahul
We are using Solr Cloud with replication factor of 2 and no of shards as
either 2 or 3.
Thanks,
Rahul
On Mon, Jan 25, 2016 at 4:43 PM, Alessandro Benedetti wrote:
> Let me answer in line :
>
> On 25 January 2016 at 11:02, Rahul Ramesh wrote:
>
> > We are facing so
soft commit is not enabled.
-Rahul
On Mon, Jan 25, 2016 at 6:00 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
> Hi Rahul,
> It is good that you commit only once, but not sure how external commits
> can do something auto commit cannot.
> Can you give us bit more detai
Thank you Emir, Allesandro for the inputs. We use sematext for monitoring.
We understand that Solr needs more memory but unfortunately we have to move
towards an altogether new range of servers.
As you say eventually, we will have to upgrade our servers.
Thanks,
Rahul
On Mon, Jan 25, 2016 at 6
Hi Debraj,
I don't think increasing the timeout will help. Are you sure Solr or another
program is not running on port 8789? Please check the output of lsof -i :8789.
Regards,
Rahul
On Tue, Dec 8, 2015 at 11:58 PM, Debraj Manna
wrote:
> Can someone help me on this?
> On Dec 7, 2015 7:55
We recently moved data from a magnetic drive to SSD. We run Solr in cloud
mode. Only the data is stored on the drive; the configuration is stored in
ZK. We start Solr using the -s option, specifying the data dir.
Command to start Solr:
./bin/solr start -c -h -p -z -s
We followed the following steps to migr
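For reference, a fully spelled-out version of that start command might look like the sketch below; the hostname, ports, ZooKeeper ensemble, and data directory are placeholders, not values from this thread:

```
./bin/solr start -c -h solr-node1 -p 8983 \
    -z zk1:2181,zk2:2181,zk3:2181 -s /ssd/solr/data
```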
collections. We have configured 8GB as heap, and
the rest of the memory we leave to the OS to manage. We do around 1000
(searches + inserts)/second on the data.
I hope this helps.
Regards,
Rahul
On Tue, Dec 15, 2015 at 4:33 PM, zhenglingyun wrote:
> Hi, list
>
> I’m new to solr. R
fields to stored="true" if I want to
use atomic updates, right?
Will it affect the performance of Solr? If yes, what is the best
practice to reduce performance degradation as much as possible? Thanks in
advance.
Thanks and Regards,
Rahul Bhooteshwar
Enterprise Software Engineer
HotWax Syst
Hi Yago Riveiro,
Thanks for your quick reply. I am using Solr for faceted search via Solrj.
I am using facet queries and filter queries. I am new to Solr, so I would
like to know the best practice to handle such scenarios.
Thanks and Regards,
Rahul Bhooteshwar
Enterprise Software
Hi,
I have tried to deploy solr.war built from 4.7.2, but it is
showing the error below. Has anyone faced the same? Any lead
would be appreciated.
Error Message:
{
"responseHeader": {
"status": 500,
"QTime": 33
},
"error": {
"msg": "parsing error",
response inline.
On Thu, May 7, 2015 at 7:01 PM, Shawn Heisey wrote:
> On 5/7/2015 3:43 AM, Rahul Singh wrote:
> > I have tried to deploy solr.war from building it from 4.7.2 but it is
> > showing the below mentioned error. Has anyone faced the same? any lead
> > woul
Hi everyone,
While tracing a bug in one of our systems we noticed some interesting
behavior from Solr.
These two queries return different results. I fail to understand why the
second query returns empty results just by adding brackets. Can you please
help us understand this behavior?
*1. Without
Hi Myron,
Can you give me an example of this?
http://grokbase.com/t/lucene/solr-user/105jjpxa2x/minimum-should-match-on-subquery-level
Regards,
Rahul
one of the measurement criteria is DCG.
http://en.wikipedia.org/wiki/Discounted_cumulative_gain
On Tue, Apr 1, 2014 at 11:44 AM, Floyd Wu wrote:
> Usually IR system is measured using Precision & Recall.
> But depends on what kind of system you are developing to fit what scenario.
>
> Take a lo
d2.
My expectation is that flist in above program should only return [ISC
(1077)].
Appreciate any pointers on this. Thank you
- Rahul
ching with that facet
is just going to give the same result set again. So when facet.missing does
not work with facet.mincount, it is a bit of a hassle for us. We will work
on handling it in our program. Thank you for the clarification.
- Rahul
On Wed, Jun 5, 2013 at 12:32 AM, Chris Hoste
there a better
way to achieve this ?
I use solrJ in Solr 3.4.
Thank you.
- Rahul
Fri, Jun 7, 2013 at 12:07 AM, Shawn Heisey wrote:
> On 6/6/2013 12:28 PM, Rahul R wrote:
>
>> I have recently enabled facet.missing=true in solrconfig.xml which gives
>> null facet values also. As I understand it, the syntax to do a faceted
>> search on a null value is
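For reference, the standard Solr range syntax for the missing/present cases (with a placeholder field name) is:

```
fq=-price:[* TO *]    restricts results to documents where price is missing
fq=price:[* TO *]     restricts results to documents where price has a value
```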
Thank you for the clarification, Shawn.
On Fri, Jun 7, 2013 at 7:34 PM, Jack Krupansky wrote:
> Yes, it SHOULD! And in the LucidWorks Search query parser it does. Why
> doesn't it in Solr? Ask Yonik to explain that!
>
> -- Jack Krupansky
>
> -Original Message---
FastVectorHighlighter ?
--
Thanks and Regards
Rahul A. Warawdekar
Hi Koji,
Thanks for the information !
I will try the patches provided by you.
On 9/8/11, Koji Sekiguchi wrote:
> (11/09/09 6:16), Rahul Warawdekar wrote:
>> Hi,
>>
>> I am currently evaluating the FastVectorHighlighter in a Solr search based
>> project and have a coup
h response times are very slow.
I would highly appreciate it if someone could suggest other efficient ways to
address this kind of requirement.
--
Thanks and Regards
Rahul A. Warawdekar
chment" docs that match on one of the main docIds in a
> special field, and use the results to note which attachment of each doc
> (if any) caused the match.
>
> -Hoss
>
--
Thanks and Regards
Rahul A. Warawdekar
imes are not in
> sync
> > so I might lose some changes.
> > Can we use something else other than last_index_time? Maybe something
> like
> > last_pk or something.
>
> One possible way is to edit dataimport.properties, manually or through
> a script, to put the last_index_time back to a "safe" value.
>
> Regards,
> Gora
>
--
Thanks and Regards
Rahul A. Warawdekar
hat might be the issue. is there any cache related problem
> at SOLR level
>
> thanks
> pawan
>
--
Thanks and Regards
Rahul A. Warawdekar
tion in my solR query's response.
>>>
>>> i've got a simple input text which allows me to query several fields in
>>> the
>>> same query.
>>>
>>> So my query looks like this
>>> "q=email:martyn+OR+name:****martynn+OR+commercial:martyn ..."
>>>
>>> Is it possible in the response to know the fields where "martynn" has
>>> been
>>> found ?
>>>
>>> Thanks a Lot :-)
>>>
>>>
>>>
>>
>>
>
>
--
Thanks and Regards
Rahul A. Warawdekar
omplex SQL queries and it takes a long time to index.
>
> I'm migrating from Lucene to Solr and the Lucene code uses threads so it
> takes little time to index, now in Solr if I add threads=xx to my
> rootEntity I get lots of errors about connections being closed.
>
>
>
> Thanks a lot,
>
> Maria
>
>
--
Thanks and Regards
Rahul A. Warawdekar
I am using Solr 3.1.
But you can surely try the patch with 3.3.
On Fri, Sep 23, 2011 at 1:35 PM, Vazquez, Maria (STM) <
maria.vazq...@dexone.com> wrote:
> Thanks Rahul.
> Are you using 3.3 or 3.4? I'm on 3.3 right now
> I will try the patch today
> Thanks again,
>
rQuery:"coke studio *? *mtv"
>
> Why the query did not matched any document even when there is a document
> with value of textForQuery as *Coke Studio at MTV*?
> Is this because of the stopword *at* present in stopwordList?
>
>
>
> --
> Thanks & Regards,
> Isan Fulia.
>
--
Thanks and Regards
Rahul A. Warawdekar
at 1:12 AM, Isan Fulia wrote:
> Hi Rahul,
>
> I also tried searching "Coke Studio MTV" but no documents were returned.
>
> Here is the snippet of my schema file.
>
> positionIncrementGap="100" autoGeneratePhraseQueries="true">
x27;m looking to solve/clarify here is the admin page - should
> that remain available and usable when using the multicore configuration or
> am I doing something wrong? Do I need to use the CoreAdminHandler type
> requests to manage multicore instead?
>
> Thanks,
> --
> Josh Miller
> Open Source Solutions Architect
> (425) 737-2590
> http://itsecureadmin.com/
>
>
--
Thanks and Regards
Rahul A. Warawdekar
Production as well as Archive indexes ?
What would be the best CPU/RAM/Disk configuration ?
How can I implement a failover mechanism for sharded searches?
Please let me know in case I need to share more information.
--
Thanks and Regards
Rahul A. Warawdekar
, I get search results
only from the first shard and not from the second one.
Am I missing any configuration ?
Also, can the urls with the shard parameter be load balanced for a failover
mechanism ?
--
Thanks and Regards
Rahul A. Warawdekar
ore term2 in the document"
>
> Thanks
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Ordered-proximity-search-tp3477946p3477946.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks and Regards
Rahul A. Warawdekar
them independent of Solr, will the same License apply ? Some
of the jar files - slf4j-api-1.6.1.jar, jcl-over-slf4j-1.6.1.jar etc - do
not have any License file inside the jar.
Regards
Rahul
masters and 6 slaves
(load balanced)
2. Master configuration
will be 4 CPU
On Tue, Oct 11, 2011 at 2:05 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> Hi Rahul,
>
> This is unfortunately not enough information for anyone to give you very
> precise answers, so I'
ing to use SAN instead of local storage to store Solr index.
And my questions are as follows:
Will 3 shards serve the purpose here ?
Is SAN a good option for storing the Solr index, given the high index volume?
On Mon, Nov 21, 2011 at 3:05 PM, Rahul Warawdekar <
rahul.warawde...@gmail.com&
Yes, please suggest how to install surround.
Currently we are using Solr 3.1.
Thanks & Regards
Rahul Mehta
:
org.apache.solr.common.SolrException: Error Instantiating
QParserPlugin, org.apache.lucene.queryParser.surround.parser.QueryParser
is not a org.apache.solr.search.QParserPlugin
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:425)
--
Thanks & Regards
Rahul Mehta
is being installed.
That is, what query can I run?
--
Thanks & Regards
Rahul Mehta
urround plugin is being installed
> > .
> > Means what query i can run.
> >
>
> Rahul, you need to switch to solr-trunk, it is already there
> http://wiki.apache.org/solr/SurroundQueryParser
>
--
Thanks & Regards
Rahul Mehta
rowse/SOLR-1604
>
--
Thanks & Regards
Rahul Mehta
what i think that i didnt get the right plugin, can any body guide me
> from where
> > to get right plugin for surround query parser or how to accurately
> integrate
> > this plugin with solr.
> >
> >
> > thanx
> > Ahsan
> >
> >
> >
>
>
--
Thanks & Regards
Rahul Mehta
.
- Tried sudo find / -name TestSurroundQueryParser.java, which finds
nothing in the directory.
- And when I do svn up, it gives me Skipped '.'
Please suggest what I should do now.
On Wed, Nov 23, 2011 at 10:39 AM, Rahul Mehta wrote:
> How to apply thi
at
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
-
Please suggest what I should do.
On Wed, Nov 23, 2011 at 11:19 AM, Rahul Mehta wrote:
> This what i tried:
>
>
>- Gone to the solr 3.1 directory which is downloaded from here.
>http://www.trieuvan.com/
repos/asf/lucene/dev/trunk
>
--
Thanks & Regards
Rahul Mehta
INFO]
Please suggest how to solve this error.
--
Thanks & Regards
Rahul Mehta
How can I get the result?
--
Thanks & Regards
Rahul Mehta
e weeks yet. The SurroundQParserPlugin is really all you need to make
> this work, just need to get the compilation bit fixed (as things changed
> from 3.x to trunk with contrib/modules).
>
> Rahul - if you'd like to see this done, feel free to take a stab at it.
> I'll tinker with it as I have time.
>
>Erik
>
>
--
Thanks & Regards
Rahul Mehta
am using Solr 3.1.
Do I need to install the patch, or is there anything else I need to do?
On Thu, Nov 24, 2011 at 3:36 PM, Ahmet Arslan wrote:
> > I want to have result of a range query with highlighted
> > Result.
>
> http://wiki.apache.org/solr/HighlightingParameters#hl.highlightMultiTerm
>
--
Thanks & Regards
Rahul Mehta
tlighting.*
> >
> >
> http://localhsot:8983/solr/select?q=field1:[5000%20TO%206000]&fl=field2&hl=on&rows=5&wt=json&indent=on&hl.fl=field3&hl.highlightMultiTerm=true
> >
>
> As wiki says "If the SpanScorer is also being used..." which
you specify field1 in hl.fl parameter?
>
> Plus you need you mark field1 as indexed="true" and stored="true" to
> enable highlighting.
>
> http://wiki.apache.org/solr/FieldOptionsByUseCase
>
>
--
Thanks & Regards
Rahul Mehta
Any other suggestions?
On Thu, Nov 24, 2011 at 5:30 PM, Rahul Mehta wrote:
> Yes, I tried with specifiying hl.fl=field1, and field1 is indexed and
> stored.
>
>
> On Thu, Nov 24, 2011 at 5:23 PM, Ahmet Arslan wrote:
>
>> > oh sorry forgot to tell you that i
>>
Any other suggestions? These suggestions are not working.
On Thu, Nov 24, 2011 at 5:44 PM, Rahul Mehta wrote:
> Any other Suggestion.
>
>
> On Thu, Nov 24, 2011 at 5:30 PM, Rahul Mehta wrote:
>
>> Yes, I tried with specifiying hl.fl=field1, and field1 is indexed and
>&
{
"lily.id":"UUID.102adde5-cbff-4ca6-acb1-426bb14fb579",
"rangefld":5753}]
},
"highlighting":{
"UUID.c5f00cd3-343a-47c1-ab16-ace104b2540f":{},
"UUID.ed69ece0-1b24-4829-afb6-22eb242939f2":{},
"UUID.afa0c654-2f26-4c5b-9fda-8b51c5ec080d":{},
"UUID.d92b405d-f41e-4c85-9014-1b89a986ec42":{},
"UUID.102adde5-cbff-4ca6-acb1-426bb14fb579":{}}}
Why is rangefld not coming in the highlight result?
On Mon, Nov 28, 2011 at 12:47 PM, Ahmet Arslan wrote:
> > Any other Suggestion. as these
> > suggestions are not working.
>
> Could it be that you are using FastVectorHighlighter? What happens when
> you add &hl.useFastVectorHighlighter=false to your search URL?
>
--
Thanks & Regards
Rahul Mehta
t;hl.highlightMultiTerm":"true",
> > "fl":"lily.id,rangefld",
> > "indent":"on",
> >
> > "hl.useFastVectorHighlighter":"false",
> >"q":"rangefld:[5000 TO
> > 6000]",
> > "hl.fl":"*,rangefld",
>
> I don't think hl.fl parameter accepts * value. Please try &hl.fl=rangefld
>
>
>
--
Thanks & Regards
Rahul Mehta
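Putting the advice from this thread together, a complete highlighting request for the range query would look roughly like this (host and core omitted; parameter values are taken from the thread):

```
/select?q=rangefld:[5000 TO 6000]
    &fl=lily.id,rangefld
    &hl=on&hl.fl=rangefld
    &hl.highlightMultiTerm=true
    &hl.useFastVectorHighlighter=false
```

Note that hl.highlightMultiTerm only takes effect with the span-based standard highlighter, which is presumably why the thread suggests disabling the FastVectorHighlighter.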
>
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > />
> >
> >
> >
> > > query="select raw_tag, freetags.id,
> > freetagged_objects.object_id as siteId
> >from freetags
> >inner join freetagged_objects
> >on freetags.id=freetagged_objects.tag_id
> > where freetagged_objects.object_id='${site.siteId}'">
> >
> >
> >
> >
> >
>
--
Thanks and Regards
Rahul A. Warawdekar
Thanks for the quick reply and for addressing each point queried.
The additional requested information is below:
OS = Ubuntu 12.04 (64 bit)
Sun Java 7 (64 bit)
Total RAM = 8GB
SolrConfig.xml is available at http://pastebin.com/SEFxkw2R
populated while querying.
Correct me if I am wrong. Are there any caches that are populated while
indexing?
Thanks,
Rahul
On Sat, Jan 26, 2013 at 11:46 PM, Shawn Heisey wrote:
> On 1/26/2013 12:55 AM, Rahul Bishnoi wrote:
>
>> Thanks for quick reply and addressing each p
- Forwarded Message -
From: Rahul Mandaliya
To: "solr-user@lucene.apache.org"
Sent: Thursday, March 29, 2012 9:38 AM
Subject: Fw: confirm subscribe to solr-user@lucene.apache.org
hi,
I am confirming my subscription to solr-user@lucene.apache.org.
regards,
Rahul
file called text.docx using the following
> command:
>
> curl
> "
> http://localhost:8983/solr/update/extract?literal.id=doc1&uprefix=attr_&fmap.content=attr_content&commit=true
> "
> -F "myfile=@UIMA_sample_test.docx"
>
> When I searched the file I am not able to see the additional UIMA fields.
>
> Can you please help if you been able to solve the problem.
>
>
> With Regds & Thanks
> Divakar
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-with-UIMA-tp3863324p3923443.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks and Regards
Rahul A. Warawdekar
deployment on the master.
>
> this has me stumped - not sure what to check next.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/solr-replication-failing-with-error-Master-at-is-not-available-Index-fetch-failed-tp3932921p3935699.html
> S
master.
> 2012-04-24 13:02:59,991 INFO [org.a
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/solr-replication-failing-with-error-Master-at-is-not-available-Index-fetch-failed-tp3932921p3936107.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks and Regards
Rahul A. Warawdekar
. Thank you.
regards
Rahul
olaris
jdk : 1.5.0_14 32 bit
Solr : 1.3
App Server : Weblogic 10MP1
Thank you.
- Rahul
On Tue, Nov 15, 2011 at 10:49 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> I'm assuming the question was about how MANY documents have been indexed
> across all shards.
>
&
facet.field=F_P1946367030&facet.field=S_P1406453569&facet.field=S_P2017662626&facet.field=S_P1406389978&facet.field=F_P1946367024
My primary question here is: can Solr handle this kind of query with so
many facet fields? I have tried using both enum and fc for facet.method, and
there i
e web app. All solr jar files are
present in my webapp's WEB-INF\lib directory. I use EmbeddedSolrServer. So
is there a way I can get this information that the admin would show ?
Thank you for your time.
-Rahul
On Wed, May 2, 2012 at 5:19 PM, Jack Krupansky wrote:
> The FieldCache gets
;
> Thanks in Advance
> Srini
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/how-to-limit-solr-indexing-to-specific-number-of-rows-tp3960344.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Thanks and Regards
Rahul A. Warawdekar
above have been done after 2 forced GCs to
identify the free memory.
The progression of memory usage looks quite high with the above numbers. As
the number of searches widens, the speed of memory consumption decreases.
But at some point it does hit OOM.
- Rahul
On Thu, May 3, 2012 at 8
y I can improve faceting performance with all my fields as
multiValued fields ?
Appreciate any help on this. Thank you.
- Rahul
On Mon, May 7, 2012 at 7:23 PM, Rahul R wrote:
> Jack,
> Sorry for the delayed response:
> Total memory allocated : 3GB
> Free Memory on startup of appl