7.doInSolr(Unknown Source)
	at org.springframework.data.solr.core.SolrTemplate.execute(SolrTemplate.java:167)
Regards
Ashish Athavale | Architect
ashish_athav...@persistent.com<mailto:ashish_athav...@persistent.com>| Cell:
+91-9881137580| Tel: +91-02067034708
stopword, but then we won't be able to achieve both the phrase and AND in a
single request.
Is there any way it can be removed from the phrase, or any other suggestion
for our requirement?
Please suggest.
Regards
Ashish
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
","smart",
"connected","connected",
"fator","faster"]},
"collation",{
"collationQuery":"sparc connected factory",
"hits":14,
"misspellingsAndCorrections":[
Hi,
We are trying to remove stopwords from analysis using an edismax parser
parameter. The documentation says:
*stopwords*
A Boolean parameter indicating if the StopFilterFactory configured in the
query analyzer should be respected when parsing the query. If this is set to
false, then the StopFilterFactory in the query analyzer is ignored.
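For reference, a minimal sketch of the kind of request this implies, using the SCSpell collection and spellcontent field from the examples in this thread; the key part is stopwords=false, which (per the docs quoted above) tells edismax to ignore the query analyzer's StopFilterFactory so a word like "of" survives into the parsed phrase:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EdismaxStopwordsRequest {

    // Builds a /select URL with stopwords=false so edismax keeps stopwords
    // like "of" in the phrase. Host name is hypothetical; collection and
    // field names follow the examples in this thread.
    public static String buildUrl(String phrase) {
        String q = URLEncoder.encode("\"" + phrase + "\"", StandardCharsets.UTF_8);
        return "http://localhost:8983/solr/SCSpell/select"
                + "?q=" + q
                + "&defType=edismax"
                + "&qf=spellcontent"
                + "&q.op=AND"
                + "&stopwords=false";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("internet of things"));
    }
}
```

Whether this also preserves AND semantics alongside the phrase is exactly the open question in this thread; the sketch only shows where the parameter goes.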
Spellcheck configuration is the default one:
solr.FileBasedSpellChecker
file
spellings.txt
UTF-8
./spellcheckerFile
default
jkdefault
file
on
true
10
5
5
true
10
true
10
5
Also the words are p
Please see the below requests and response
http://Sol:8983/solr/SCSpell/select?q="internet of things"&defType=edismax&qf=spellcontent&wt=json&rows=1&fl=score,internet_of_things:query({!edismax v='"internet of things"'}),instant_of_things:query({!edismax v='"instant of things"'})
Response con
Can someone please explain the behavior below. For a different q parameter
the function query response differs, although the function queries are the
same.
http://:8983/solr/SCSpell/select?q="market place"&defType=edismax&qf=spellcontent&wt=json&rows=1&fl=internet_of_things:if(exists(query({!edismax
v='"internet
Hi,
I am seeing a difference between the file-based spellcheck and index-based
spellcheck implementations.
Using index based
http://:8983/solr/SCSpell/spell?q=intnet of things&defType=edismax&qf=spellcontent&wt=json&rows=0&spellcheck=true&spellcheck.dictionary=default&q.op=AND
"suggestions":[
Thanks. I agree.
Regards
Ashish
Hi,
Currently the score is calculated based on "Max Doc" instead of "Num Docs".
Is it possible to change it to "Num Docs" (i.e. without deleted docs)? Will
it require a code change or some config change?
Regards
Ashish
large segment. What impact is this large segment going to have?
Our index is ~30k documents, i.e. files with content (segment size < 1GB as
of now).
1. Do you recommend going for optimize in these situations? Probably it will
be done only when the stats skew. Is it safe?
Regards
Ashish
using pagination but an infinite scroll, which makes
it more noticeable.
Please suggest.
Regards
Ashish
Hi Erick,
To test this scenario I added the replica again, and for a few days I have
been monitoring metrics like Num Docs, Max Doc, and Deleted Docs from the
*Overview* section of the core. I checked the *Segments Info* section too.
Everything looks in sync.
http://:8983/solr/#/MyTestCollection_*shard1_replica_n7*/
http://
ed. Do you think it is possible or a good
idea? If yes, is there a way in Solr to know which replica served the request?
Regards
Ashish
I have tried the exact cache while debugging the score difference during
sharding. It didn't help much. Anyhow, that's a different topic.
Thanks again,
Regards
Ashish Bisht
still the discrepancy was coming (no indexing either). So
I went ahead and deleted the follower node (thinking the leader replica
should be in the correct state). After adding the new replica again, the
issue is not appearing.
We will monitor the same if it appears in the future.
Regards
Ashish
docs while doing an all-docs query. It's strange
that in the query explain, docCount and docFreq differ.
Regards
Ashish
Hello Everyone,
I am trying the new Solr 6.6 and using SolrPhpClient to create index having
info in an array.
$parts = array( '0' => array( 'id' => '0060248025', 'name' => 'Falling Up',
'author' => 'Shel Silverstein', 'inStock' => true, ), '1' => array( 'id' =>
'0679805273', 'name' => 'Oh, The
	at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1435)
... 7 more
Usage: java -jar start.jar [options] [properties] [configs]
java -jar start.jar --help # for more information
[root@sys-77402 bin]#
Thanks
Ashish
above which I use the
documents from the result.
Has anyone done something like this before or would like to critique my
approach?
Regards,
Ashish
Thanks, everyone. Arcadius, that ticket is interesting.
I was wondering if an implementation of SolrClient could be based on
HttpAsyncClient
instead of HttpSolrClient. Just a thought right now, which needs to be
explored deeper.
- Ashish
On Mon, Aug 24, 2015 at 1:46 AM, Arcadius Ahouansou
Hello,
I want to run a few Solr queries in parallel, which is being done in a
multi-threaded model now. I was wondering if there are any client libraries
to query Solr through a non-blocking I/O mechanism instead of a threaded
model. Has anyone attempted something like this?
Regards,
Ashish
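The composition side of a non-blocking approach can be sketched with plain CompletableFuture; the query itself is simulated here, but in practice the same shape would pair with an async HTTP call (e.g. Java 11's java.net.http.HttpClient.sendAsync) against a Solr /select URL:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ParallelQueries {

    // Simulated "query" — a stand-in for an async HTTP request to Solr.
    // Real code would return the future of a parsed /select response.
    static CompletableFuture<String> query(String q) {
        return CompletableFuture.supplyAsync(() -> "results-for:" + q);
    }

    // Fire all queries without blocking, then wait for and collect the
    // results in their original order.
    public static List<String> runAll(List<String> queries) {
        List<CompletableFuture<String>> futures =
                queries.stream().map(ParallelQueries::query).collect(Collectors.toList());
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(runAll(List.of("chat", "video chat", "audio chat")));
    }
}
```

The point of the pattern is that the requests are all in flight before the first join; only the final collection step blocks.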
Hello,
I am using a stemmer on an NGram field. I am getting better results with
the stemmer factory after NGram, but I was wondering what the recommended
practice is when using a stemmer on an NGram field?
Regards,
Ashish
Would like to do it during querying.
Thanks,
Ashish
On Tue, Mar 10, 2015 at 11:07 PM, Alexandre Rafalovitch
wrote:
> Is that during indexing or during query phase?
>
> Indexing has UpdateRequestProcessors (e.g.
> http://www.solr-start.com/info/update-request-processors/ )
ay be out of the box.
Is there any tutorial which describes how to wire together components like
this in a single handler?
Regards,
Ashish
.
Regards,
Ashish
video chat', 'audio chat' etc. without making another search request for
'chat'.
Can this be accomplished?
Regards,
Ashish
On Mon, Mar 9, 2015 at 2:50 AM, Aman Tandon wrote:
> Hi,
>
> AFAIK Solr does not currently provide this feature.
>
> Suppose a scenario
Hello,
I have enabled the Spellcheck component in Solr, which gives me spelling
suggestions. However, I would like those suggestions to be applied in the
same select request handler to retrieve additional results based on the
suggestions. How can this be achieved with Solr?
Regards,
Ashish
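As far as I know, Solr does not re-run the corrected query inside the same request; the usual approach is to enable collation, so the response carries a ready-to-send corrected query that the client issues as a follow-up request. A sketch of the handler wiring (component and parameter names assumed from the standard spellcheck setup):

```xml
<!-- Sketch only: attach the spellcheck component to the select handler and
     ask for collations, which the client can re-query with. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">on</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.maxCollations">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

The collation appears in the spellcheck section of the response (as in the collationQuery fragment quoted earlier in this digest); retrieving additional results still takes a second request.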
uot;myFile.txt"));
request.addContentStream(readFile);
SolrResponseBase response = null;
try {
response = (SolrResponseBase) request.process(server);
}catch(Exception e){
e.printStackTrace();
}
Regards,
Ashish
> Do I need to configure MorphineSolrSink?
>
Yes
>
> What is the mechanism's to do this or send this data over to Solr.
>
More details here
http://flume.apache.org/FlumeUserGuide.html#morphlinesolrsink
As suggested, please move further related questions to the Flume User ML.
>
> T
without using too much space.
Thanks
Ashish
Toby Cole-2 wrote:
>
> Hi Anish,
> Have you optimized your index?
> When you delete documents in lucene they are simply marked as
> 'deleted', they aren't physically removed from the disk.
> To get the disk sp
straatweg 215c Fax. 050-3118124
> http://www.buyways.nl 9743 AD GroningenKvK 01074105
>
>
> On Tue, 2009-08-04 at 06:26 -0700, Ashish Kumar Srivastava wrote:
>
>> I am facing a problem in deleting solr data form disk space.
>> I had 80Gb of of solr data.
Sorry!! But this solution will not work, because I deleted data by a certain
query.
Then how can I know which files should be deleted? I can't delete the whole
data.
--
View this message in context:
http://www.nabble.com/Delete-solr-data-from-disk-space-tp24808676p24808868.html
Sent from the Solr - User
I am facing a problem in deleting Solr data from disk space.
I had 80GB of Solr data. I deleted 30% of this data by using a query in the
solr-php client and committed.
Now the deleted data is not visible from the Solr UI, but the used disk space
is still 80GB for Solr data.
Please reply if you have any solut
D:g*) gives results correctly.
But after adding OR the results are not correct; they seem to be the results
of (spacegroupID:g*) minus the results of userID:g*.
Any idea on how to achieve the goal?
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/complex-
I don't want to set it in solrconfig.xml. I want Solr to take it from my
config file or from a system property.
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> set the value in solrconfig.xml to what you like
>
> On Fri, Jun 12, 2009 at 10:38 AM, Ashish P
> wrote:
>>
d the same???
Please share...
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/change-data-dir-location-tp23992946p23992946.html
Sent from the Solr - User mailing list archive at Nabble.com.
embedded solr giving the same
data dir. Just trying to index.
Thanks,
Ashish
Shalin Shekhar Mangar wrote:
>
> On Thu, May 28, 2009 at 2:54 PM, Ashish P
> wrote:
>
>>
>> Hi,
>> I am committing to same index from two different embedded servers.
>> My lockty
Hi,
I am committing to the same index from two different embedded servers.
My lockType is simple, and writeLockTimeout and commitLockTimeout are 10.
I read in a post, "Update from multiple JVMs", where Hoss said this case is
supported, but I am getting the following error. I tried the single lock
also, but agai
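For context, the settings involved would live in solrconfig.xml roughly like this (a sketch only, with element names per the 1.x-era index section; timeout values are in milliseconds, so a bare "10" would be a very short timeout):

```xml
<!-- Sketch: lock settings for an index shared between writers. -->
<mainIndex>
  <lockType>simple</lockType>
  <writeLockTimeout>10000</writeLockTimeout>
  <commitLockTimeout>10000</commitLockTimeout>
</mainIndex>
```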
Hi,
I have two instances of the embedded server (no HTTP) running on a network
with two separate indexes.
I want to replicate changes from one index to the other.
Is there any way?
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/Index-replication-without-HTTP-tp23739156p23739156
Hi,
Any idea if documents from solr server are cleared even if commit fails or I
can still again try commit after some time??
Thanks,
Ashish
Ashish P wrote:
>
> If I add 10 document to solrServer as in solrServer.addIndex(docs) ( Using
> Embedded ) and then I commit and commit fail
If I add 10 documents to solrServer as in solrServer.addIndex(docs) (using
Embedded) and then I commit and the commit fails for some reason, can I
retry this commit, say, after some time, or are the added documents lost?
--
View this message in context:
http://www.nabble.com/commit-que
OK. And the replication available with Solr 1.3 is only for Unix, right?
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> On Fri, May 22, 2009 at 3:12 PM, Ashish P
> wrote:
>>
>> I want to add master slave configuration for solr. I have following solr
>> c
using
Java.
Still, is it possible to do Solr replication using EmbeddedSolrServer on
Windows?
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/solr-replication-1.3-tp23667360p23667360.html
Sent from the Solr - User mailing list archive at Nabble.com.
	at org.apache.lucene.index.StoredFieldsWriter.initFieldsWriter(StoredFieldsWriter.java:73)
I tried the simple lockType also, but it shows a timeout exception when
writing to the index.
Please help me out...
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/lock-problem-tp23663558p23663558.html
Sent from the Solr - User ma
What is the difference between a query clause and a filter query?
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/query-clause-and-filter-query-tp23629715p23629715.html
Sent from the Solr - User mailing list archive at Nabble.com.
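In short: q is the main query and contributes to relevance scoring, while fq restricts the result set without affecting scores and is cached separately in the filterCache. A tiny sketch (host, core, and field names are hypothetical):

```java
public class FilterQueryExample {

    // q matches AND scores; fq only filters, never scores, and Solr caches
    // each fq entry independently so repeated filters are cheap.
    public static String buildUrl(String q, String fq) {
        return "http://localhost:8983/solr/select?q=" + q + "&fq=" + fq;
    }

    public static void main(String[] args) {
        // Same documents match either way; only scoring and caching differ.
        System.out.println(buildUrl("name:solr", "inStock:true"));
    }
}
```

A common rule of thumb: put the user's search terms in q and structural restrictions (category, stock status, date windows) in fq.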
Koji-san,
Using CharStreamAwareCJKTokenizerFactory is giving me the following error:
SEVERE: java.lang.ClassCastException: java.io.StringReader cannot be cast to
org.apache.solr.analysis.CharStream
Maybe you are typecasting the Reader to a subclass.
Thanks,
Ashish
Koji Sekiguchi-2 wrote:
>
>
After this, should I be using the same cjkAnalyzer, or use a charFilter?
Thanks,
Ashish
Koji Sekiguchi-2 wrote:
>
> Ashish P wrote:
>> I want to convert half width katakana to full width katakana. I tried
>> using
>> cjk analyzer but not working.
>> Does cjkAnalyzer d
I want to convert half-width katakana to full-width katakana. I tried using
the CJK analyzer, but it's not working.
Does cjkAnalyzer do it, or is there any other way?
--
View this message in context:
http://www.nabble.com/half-width-katakana-tp23270186p23270186.html
Sent from the Solr - User mailing list
Right. But is there a way to track file updates and diffs?
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ् wrote:
>
> If you can check it out into a directory using SVN command then you
> may use DIH to index the content.
>
> a combination of FileListEntityProcessor and PlainTextEnti
Is there any way to index the contents of an SVN repository in Solr?
--
View this message in context:
http://www.nabble.com/How-to-index-the-contents-from-SVN-repository-tp23240110p23240110.html
Sent from the Solr - User mailing list archive at Nabble.com.
I want to analyze text based on the pattern ";" and separate on whitespace,
and it is Japanese text, so I use CJKAnalyzer + tokenizer as well.
In short, I want to do:
Thanks, Shalin.
Another question: what is the meaning of this syntax?
[* TO *]
Thanks,
Ashish
Shalin Shekhar Mangar wrote:
>
> On Fri, Apr 3, 2009 at 1:32 PM, Ashish P wrote:
>
>>
>> I want to query all documents where name:somevalue and actionuser value
>> is
&
Consider that I have the following 3 fields.
I want to query all documents where name:somevalue and the actionuser value
is not equal to the creationuser value.
Can we do this?
--
View this message in context:
http://www.nabble.com/filter-query-question-tp22863789p22863789.html
Sent from the Solr - User m
Actually, what I meant was: if there are 100 indexed fields, then there are
100 facet fields, right?
So whenever I create a SolrQuery, I have to do addFacetField("fieldName").
Can I avoid this and just get all the facet fields?
Sorry for the confusion.
Thanks again,
Ashish
Shalin Shekhar Ma
Similar to getting range facets for dates, where we specify start, end, and
gap: can we do the same thing for numeric facets, where we specify start,
end, and gap?
--
View this message in context:
http://www.nabble.com/numeric-range-facets-tp22698330p22698330.html
Sent from the Solr - User mailing li
Can I get all the facets in the QueryResponse?
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/get-all-facets-tp22693809p22693809.html
Sent from the Solr - User mailing list archive at Nabble.com.
also.
Is there a way to achieve this?
Thanks in advance,
Ashish
--
View this message in context:
http://www.nabble.com/search-individual-words-but-facet-on-delimiter-tp22676007p22676007.html
Sent from the Solr - User mailing list archive at Nabble.com.
My documents (products) have a price field, and I want to have
a "dynamically" calculated range facet for that in the response.
E.g. I want to have this in the response
price:[* TO 20] -> 23
price:[20 TO 40] -> 42
price:[40 TO *] -> 33
if prices are between 0 and 60,
but
price:[* TO 100]
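Solr's range faceting needs an explicit start/end/gap up front, so one workaround is computing the buckets client-side. A sketch of the bucket arithmetic, producing the same shape that facet.range=price&facet.range.start=0&facet.range.end=60&facet.range.gap=20 would return (the sample prices are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PriceRangeFacets {

    // Count prices into half-open buckets [start, start+gap), [start+gap, ...).
    public static Map<String, Integer> bucketCounts(double[] prices, double start,
                                                    double end, double gap) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (double lo = start; lo < end; lo += gap) {
            counts.put("[" + lo + " TO " + (lo + gap) + "]", 0);
        }
        for (double p : prices) {
            for (double lo = start; lo < end; lo += gap) {
                if (p >= lo && p < lo + gap) {
                    counts.merge("[" + lo + " TO " + (lo + gap) + "]", 1, Integer::sum);
                    break;
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        double[] prices = {5, 15, 25, 45, 55};
        System.out.println(bucketCounts(prices, 0, 60, 20));
    }
}
```

To make the ranges adapt to the data, a first request could fetch min/max of the price field (e.g. via stats) and then a second request could set start/end/gap from those.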
Hey, it works. Can you please tell me the reason?
Thanks,
Ashish
Koji Sekiguchi-2 wrote:
>
> Ashish P wrote:
>> I have created a field,
> Set c
I have created a field,
The pattern is "_" (Underscore)
When I do field analysis using the Solr admin, it shows it correctly. Have a
look at the attached image, e.g. cric_info
http://www.nabble.com/file/p22594575/field%2Banal
Is there any API in SolrJ that calls the DataImportHandler to execute
commands like full-import and delta-import?
Please help...
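As far as I know, 1.3-era SolrJ has no dedicated DIH API; a common workaround is to call the /dataimport handler directly over HTTP. A sketch of building the command URLs (host and path are hypothetical):

```java
public class DataImportCommand {

    // The DataImportHandler is driven entirely by request parameters, so a
    // plain GET to the handler with command=full-import or delta-import works.
    public static String commandUrl(String solrBase, String command) {
        return solrBase + "/dataimport?command=" + command;
    }

    public static void main(String[] args) {
        System.out.println(commandUrl("http://localhost:8080/solr", "full-import"));
        System.out.println(commandUrl("http://localhost:8080/solr", "delta-import"));
    }
}
```

With an embedded server there is no HTTP endpoint, so the same parameters would have to be routed through the core's request mechanism, which is version-dependent.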
Ashish P wrote:
>
> Is it possible to index DB data directly to solr using EmbeddedSolrServer.
> I tried using data-Config File and Full-import commad, it
Is it possible to index DB data directly to Solr using EmbeddedSolrServer? I
tried using a data-config file and the full-import command, and it works. So
I'm assuming that using CommonsHttpSolrServer will also work. But can I do it
with EmbeddedSolrServer?
Thanks in advance...
Ashish
--
View this message in context
Yes, cleaning up works...
But I'm not sure how to avoid this happening again.
-Ashish
jonbaer wrote:
>
> I'd suggest, as someone else mentioned, just doing a full cleanup of
> the index. Sounds like you might have kill -9'd or stopped the process
> manually while indexing (would be
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:440)
	at org.apache.lucene.index.FieldsWriter.<init>(FieldsWriter.java:62)
	at org.apache.lucene.index.StoredFieldsWriter.initFieldsWriter(StoredFieldsWriter.java:65)
Please help..
Ashish P wrote:
>
> Thanks man.
> I just tried what u sugges
	at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:938)
	at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:116)
Any ideas???
-Ashish
Noble Paul നോബിള് नोब्ळ् wrote:
>
> String xml = null;//load the
o use DirectXmlRequest? Any example?
Thanks in advance...
Ashish
--
View this message in context:
http://www.nabble.com/SolrJ-XML-indexing-tp22450845p22450845.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Shalin,
Got the answer. I had a uniqueKey defined in schema.xml, but it was not
present in any of the columns, hence the indexing problem.
Thanks a lot for your help, buddy.
Cheers,
Ashish
Ashish P wrote:
>
> yes I did full import. so previous docs are gone as you said.
> But when I
:
>
> On Tue, Mar 10, 2009 at 11:01 AM, Ashish P
> wrote:
>
>>
>> Oh, looks like some other big problem. Now I am not able to see the other
>> text data I indexed before adding the DB data to the index...
>> Cannot search any data... But I am sure I was able to search be
:48 AM, Ashish P
> wrote:
>
>>
>>
>> In schema xml, I have defined following...
>>
>>> stored="true" />
>>> stored="true" />
>> Thanks,
>> Ashish
>>
>>
> If you search for *:* from
In the schema.xml, I have defined the following...
Thanks,
Ashish
Shalin Shekhar Mangar wrote:
>
> On Tue, Mar 10, 2009 at 10:31 AM, Ashish P
> wrote:
>
>> now I am able to view data that is indexed using URL
>> http://localhost:8080/solr/admin/dataimpor
/admin/dataimport.jsp to see the data as
-
user1
-
0
-
CN=user1,OU=R&D
But when I search user_name:user1, the result is not returned at all.
Am I missing something here? Please help.
Thanks,
Ashish
--
View this message in context:
http://www.nabble.com/Queryin
Hi
Following is the data-config.xml
Not sure what the error is here; really stuck...
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ् wrote:
>
> can you paste the data-config.xml
> looks like there
I am getting the following exception when configuring the DataImportHandler
in solrconfig.xml:
INFO: Processing configuration from solrconfig.xml: {config=data-config.xml}
[Fatal Error] :1:1: Content is not allowed in prolog.
Mar 9, 2009 12:01:37 PM org.apache.solr.handler.dataimport.DataImportHandler
inf
Hmm, I think I will just do that.
Thanks for clearing up my doubt...
-Ashish
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 10:53 AM, Ashish P
> wrote:
>
>>
>> OK. so basically what you are saying is when you use copyField, it will
>> copy
>> th
tent field and part of the data to go into the
tsdatetime field. But it looks like that's not possible.
The field "condition" is actually a mix of multiple data values.
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 7:40 AM, Ashish P wrote:
>
>>
>> I have a multi valued
It works
thanks
Ashish
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 7:03 AM, Ashish P wrote:
>
>>
>> I want to search on single date field
>> e.g. q=creationDate:2009-01-24T15:00:00.000Z&rows=10
>>
>> But I think the query gets terminate
What document types (MIME types) are supported for indexing and searching in
Solr?
--
View this message in context:
http://www.nabble.com/supported-document-types-tp22366114p22366114.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have a multi-valued field as follows:
I want to index the data from this field into the following fields.
How can this be done? Any ideas...
--
View this message in context:
http://www.nabble.com/index-multi-valued-field-into-multiple-fields-tp22364915p22364915.html
Sent from the Solr - User
I want to search on a single date field,
e.g. q=creationDate:2009-01-24T15:00:00.000Z&rows=10
But I think the query gets terminated after T15, as ':' (colon) is taken as
a termination character.
Any ideas on how to search on a single date, or, for that matter, if the
query data contains a colon, how to sea
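One way around this is to backslash-escape the special characters inside the value part, which is what SolrJ's ClientUtils.escapeQueryChars does; here is a standalone sketch (the escape set is my approximation of the Lucene query-syntax characters):

```java
public class DateQueryEscape {

    // Backslash-escape Lucene query-syntax characters in a field value so
    // the colons in an ISO timestamp aren't parsed as field separators.
    public static String escapeValue(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("creationDate:" + escapeValue("2009-01-24T15:00:00.000Z"));
        // An often easier alternative is a degenerate range query:
        // creationDate:[2009-01-24T15:00:00.000Z TO 2009-01-24T15:00:00.000Z]
    }
}
```

Hyphens get escaped too, which is harmless to the query parser; either the escaped form or the single-point range query avoids the colon problem.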