solr server.
I have also tested it with this code:
server.add(solrDoc);
server.commit(false, false);
I tracked the commit method in the DirectUpdateHandler2 class; it is called
and works correctly.
Regards,
Parisa
P.S. I use Apache Solr 1.3.0
I should mention that I have already added this tag in my solrconfig.xml
for all cores.
It works in single core but unfortunately doesn't work in multi-core.
Everything is all right with the first URL, but the immediately committed doc
does not show up in the search results.
Parisa
--
View this message in context:
http://www.nabble.com/immediatley-commit-of-docs-doesnt-work-in-multiCore-case-tp20072378p20172973.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have a problem with solrj deleteById. If I search a keyword and it has
more than 1 result (for example 7), then I delete one of the resulting docs
with solrj (server.deleteById) and search this keyword again: the result
count is zero. That's not correct, because it should be 6.
Shalin Shekhar Mangar wrote:
>
> Did you call commit after the delete?
>
> Of course I call commit. I tested both commit(false,false) and
> commit(true,true); in both cases the result is the same.
>
> On Tue, Jan 13, 2009 at 4:12 PM, Parisa wrote:
>
>>
>>
Is there any solution for fixing this bug ?
--
View this message in context:
http://www.nabble.com/solrj-delete-by-Id-problem-tp21433056p21661131.html
I found how the issue is created: when Solr warms up the new searcher with
the cache lists, if the queryResultCache is enabled the issue appears.
Note: as I mentioned before, I commit with waitFlush=false and
waitSearcher=false.
So it has a problem in case the queryResultCache is on,
but I don't know
I should say that we also have this problem when we commit with waitFlush =
true and waitSearcher = true,
because it again closes the old searcher and opens a new one, so the
warming process with the queryResultCache still runs.
Besides, I need to commit with waitFlush=false and waitSearcher=false to
I know that I can see the search result after the commit, and that is OK.
I could disable the queryResultCache and the problem would be fixed, but I
need the queryResultCache because my index size is big and I need good
performance.
So I am trying to find how to fix the bug, or maybe the Solr guys
You may try this:
q=({!join from=inner_id to=outer_id fromIndex=othercore v=$joinQuery})
and pass another parameter: joinQuery=(city:"Stara Zagora" AND prod:214)
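Assembled into a full request, the two parameters look like this (a minimal sketch; the host, port, and core name `maincore` are placeholders, and both parameter values must be URL-encoded):

```java
import java.net.URLEncoder;

public class JoinQueryUrl {
    public static void main(String[] args) throws Exception {
        // Field and core names (inner_id, outer_id, othercore) are from the mail.
        String q = "({!join from=inner_id to=outer_id fromIndex=othercore v=$joinQuery})";
        String joinQuery = "(city:\"Stara Zagora\" AND prod:214)";
        // Both values are URL-encoded before being put on the request URL.
        String url = "http://localhost:8983/solr/maincore/select"
                + "?q=" + URLEncoder.encode(q, "UTF-8")
                + "&joinQuery=" + URLEncoder.encode(joinQuery, "UTF-8");
        System.out.println(url);
    }
}
```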
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Fri, Mar 21, 2014 at 4:47 AM, Marcin Rzewucki wr
My example should also work, am I missing something?
&q=({!join from=inner_id to=outer_id fromIndex=othercore
v=$joinQuery})&joinQuery=(city:"Stara Zagora" AND prod:214)
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Fri, Mar 21, 2014 at 2:11 PM, Y
glad the suggestions are working for you!
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Mon, Mar 24, 2014 at 4:10 AM, Marcin Rzewucki wrote:
> Hi,
>
> Yonik, thank you for explaining me the reason of the issue. The workarounds
> you suggested are working fi
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Apr 17, 2014 at 5:25 PM, Matt Kuiper wrote:
> Ok, that makes sense.
>
> Thanks again,
> Matt
>
> Matt Kuiper - Software Engineer
> Intelligent Software Solutions
> p. 719.452.7721 | matt.kui...@issin
cool, thanks.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Apr 17, 2014 at 11:37 PM, Erick Erickson wrote:
> No, the 5 most recently used in a query will be used to autowarm.
>
> If you have things you _know_ are going to be popular fqs, you could
>
That doesn't work
on the fly; you will need to write custom code.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Apr 24, 2014 at 6:11 AM, hungctk33 wrote:
> Pls! Help me.
>
>
>
Use the FQs so that you
could hit the cache (and hence the second call will be fast), fetch the
documents,
and use them for building the response.
Out of the box, Solr won't do this for you.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Mon, May 12, 2014 at 7:05 AM
To use field-level boosting with the above query, for example:
exactMatch:"160 Associates LP"^10 OR text:"160 Associates LP"^5
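A tiny helper for building that boosted clause (just string assembly; the field names and boost values follow the example above):

```java
public class BoostedQuery {
    // Builds a two-field boosted OR query like the one in the mail.
    // Field names (exactMatch, text) are from the thread; boosts are illustrative.
    static String boosted(String phrase, int exactBoost, int textBoost) {
        String quoted = "\"" + phrase + "\"";
        return "exactMatch:" + quoted + "^" + exactBoost
             + " OR text:" + quoted + "^" + textBoost;
    }

    public static void main(String[] args) {
        // prints: exactMatch:"160 Associates LP"^10 OR text:"160 Associates LP"^5
        System.out.println(boosted("160 Associates LP", 10, 5));
    }
}
```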
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Oct 31, 2013 at 4:00 PM, Susheel Kumar <
susheel.ku...@thedigitalgroup.net
Ray,
FYI: there are more sophisticated joins available via
https://issues.apache.org/jira/browse/SOLR-4787
not on trunk yet, but worth taking a look.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Jan 2, 2014 at 8:05 PM, Ray Cheng wrote:
> Hi Chris,
>
> &
please point me to the jira link
otherwise I can open an issue if this needs some analysis?
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
I was trying with [* TO *] as an example; the real use case is an OR
query between 2 or more range queries on timestamp fields (saved in
milliseconds). So I can't use FQs, as they are ANDed by definition.
Am I missing something here?
Thanks,
Kranti K. Parisa
http://www.linkedin.c
Yes, that's the key: these time ranges change frequently, and hitting the
filterCache then is a problem. I will try a few more samples and probably
debug through it. Thanks.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Wed, Jan 8, 2014 at 12:11 PM, Erick Erickson wrote:
> W
Did you try this?
q={!func}customfunc($v1)&v1=somevalue&qf=fieldname
More info:
http://wiki.apache.org/solr/FunctionQuery
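For reference, the same request assembled parameter by parameter (a sketch; `customfunc` is the poster's own function parser and is assumed to be registered in solrconfig.xml):

```java
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FuncQueryParams {
    public static void main(String[] args) throws Exception {
        // Parameter names ($v1, qf) are from the mail above.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("q", "{!func}customfunc($v1)");
        params.put("v1", "somevalue");
        params.put("qf", "fieldname");
        // Build the query string, URL-encoding each value.
        StringBuilder qs = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (qs.length() > 0) qs.append('&');
            qs.append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        // prints: q=%7B%21func%7Dcustomfunc%28%24v1%29&v1=somevalue&qf=fieldname
        System.out.println(qs);
    }
}
```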
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Wed, Jan 8, 2014 at 2:22 AM, Mukundaraman valakumaresan <
muk...@8kmiles.com> wrot
Thank you, will take a look at it.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Jan 9, 2014 at 10:25 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello,
>
> Here is workaround for caching separate clauses in OR filters.
> http://blo
which will support Nested Joins and obviously hit filter cache.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Tue, Jan 14, 2014 at 2:20 PM, heaven wrote:
> Can someone shed some light on this?
>
>
>
> --
> View this message in context:
> http://lucene.47
cool, np.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Jan 16, 2014 at 11:30 AM, heaven wrote:
> Nvm, figured it out.
>
> To match profiles that have "test entry" in own attributes or in related
> rss
> entries it is possible to use ({!jo
ed by disabling the replication. If threshold checks are passed,
enable the replication.
This way you can configure the caches, other settings that you need for
indexing and configure something else on your Query Engine.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Fri, Jan 17,
Can you post your complete solrconfig.xml and schema.xml files, so we can
review all of the settings that would impact your indexing performance?
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Sat, Jan 25, 2014 at 12:56 AM, Susheel Kumar <
susheel.ku...@thedigitalgroup.
Why don't you do parallel indexing and then merge everything into one and
replicate that from the master to the slaves in SolrCloud?
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Wed, Feb 19, 2014 at 3:04 PM, Susheel Kumar <
susheel.ku...@thedigitalgroup.net> w
That's what I do: pre-create JSONs following the schema and save them in
MongoDB; this is part of the ETL process. After that, just dump the JSONs
into Solr using batching etc. With this you can do full and incremental
indexing as well.
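The batching step can be as simple as slicing the doc list before each POST (a sketch; the batch size and JSON strings are illustrative, not from the thread):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BatchDocs {
    // Splits a list of pre-built JSON docs into fixed-size batches,
    // the shape you would send to Solr's update handler one request at a time.
    static <T> List<List<T>> batches(List<T> docs, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += size) {
            out.add(docs.subList(i, Math.min(i + size, docs.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList("{\"id\":1}", "{\"id\":2}", "{\"id\":3}");
        // prints: [[{"id":1}, {"id":2}], [{"id":3}]]
        System.out.println(batches(docs, 2));
    }
}
```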
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in
Hi All,
I am trying out Solr highlighting. I have a problem highlighting phrases
consisting of special characters.
For example, if I search for a phrase like "united. states. usa", then the
results are displayed matching the exact phrase and also without special
characters, meaning "united states us
;);
also I was reading something about (I didn't use it yet)
true
On Wed, Aug 18, 2010 at 2:06 PM, Kranti K K Parisa
wrote:
> Hi All,
>
> I am trying with Solr Highlighting. I have problem in highlighting phrases
> consists of special characters
>
> for example,
e PDF as I
am storing the field along with indexing.
Are there any such limitations with SOLR indexing? Please let me know at the
earliest.
Thanks in advance!
Best Regards,
Kranti K K Parisa
not getting generated completely, because of which my search is not
working for the full content.
Please suggest.
Best Regards,
Kranti K K Parisa
On Tue, Jan 19, 2010 at 8:27 PM, Mark Miller wrote:
> Kranti™ K K Parisa wrote:
> > Hi All,
> >
> > I have a problem using SOLR
Hi Mark,
As you see, my config file contains the value 10,000.
But when I check through the Lukeall jar file, I can see the term count is
around 3,000.
Please suggest.
Best Regards,
Kranti K K Parisa
2010/1/19 Mark Miller
> It limits the number of tokens that will be indexed.
>
> Kr
Hi Mark,
I changed the value to 1,000,000,000 just to test my luck,
but unfortunately I am still not getting the index for all tokens.
Please suggest.
Best Regards,
Kranti K K Parisa
2010/1/19 Kranti™ K K Parisa
> Hi Mark,
>
> As you see my config file contains the value as 10,00
Can anyone suggest/guide me on this?
Best Regards,
Kranti K K Parisa
2010/1/19 Kranti™ K K Parisa
> Hi Mark,
>
> I changed the value to 1,000,000,000 to just test my luck.
>
> But unfortunately I am still not getting the index for all Token.
>
> Please suggest.
>
>
the index to perform the search.
What would be the suggested analyzers and filters that I should check?
Currently I am using the following:
Please suggest
Best Regards,
Kranti K K Parisa
On Tue, Jan 19, 2010 at 9:03 PM, Erick Erickson
rick. I appreciate your help.
<http://search.lucidimagination.com/search/document/30616a061f8c4bf6/solr_ignoring_maxfieldlength>
Best Regards,
Kranti K K Parisa
2010/1/19 Kranti™ K K Parisa
> Hi Erik,
>
> Yes, i deleted the index and re-indexed after increasing the value (i have
/document types
Best Regards,
Kranti K K Parisa
Yes, Tika indexes all formats,
but I am specifically looking for OCR (through Java), at least for PDF or
JPEG images.
Any clues?
Best Regards,
Kranti K K Parisa
On Thu, Feb 4, 2010 at 8:29 PM, mike anderson wrote:
> There might be an OCR plugin for Apache Tika (which does exactly this out
&
Parisa
nd of wrapper would help all others for these kind of
requirements.
Best Regards,
Kranti K K Parisa
On Tue, Feb 16, 2010 at 9:02 PM, Erick Erickson wrote:
> Unless you have *evidence* that the indexing each pdf with
> the form data as a single SOLR document is a problem,
> I would
actions, as there would be some delay; users can't see the data
they saved in their repository until it is indexed.
That is the reason I am planning to use a SOLR XML post request to update the
index silently. But how about multiple users writing to the same index?
Best Regards,
Kranti K K Parisa
Hi Ron,
Thanks for the reply. So does this mean that the writer lock has nothing to
do with concurrent writes?
Best Regards,
Kranti K K Parisa
On Tue, Mar 2, 2010 at 4:19 PM, Ron Chan wrote:
> as long as the document id is unique, concurrent writes is fine
>
> if for same reason the sa
And also about the time when two update requests come at the same time:
whichever request comes first will be updating the index, while the other
requests wait until the lock timeout that we have configured?
Best Regards,
Kranti K K Parisa
2010/3/2 Kranti™ K K Parisa
> Hi Ron,
>
&g
specified for F2?
Best Regards,
Kranti K K Parisa
Maybe this is one very important feature to be considered for the next
releases of SOLR; sometimes these kinds of cases come up.
Best Regards,
Kranti K K Parisa
On Thu, Mar 4, 2010 at 3:01 PM, Andrzej Bialecki wrote:
> On 2010-03-04 07:41, Walter Underwood wrote:
>
>> No. --wund
again, we have to display the results
along with the tags attached by that user (previously), and also display
some facets for the tags.
Please give some ideas/suggestions.
Best Regards,
Kranti K K Parisa
2010/2/23 André Maldonado
> Hi all.
>
> I have 2 indexes with some similar fields
java:171)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:416)
... 5 more
=
Best Regards,
Kranti K K Parisa
Hi,
Is it possible to execute multiple SOLR queries (basically the same
structure/fields, but due to the header-size limitations for long query
URLs, I am thinking of having multiple SOLR queries) on a single index,
like a batch or so?
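One way to do the split client-side is to cap each query string at a maximum length (a sketch of the batching idea; the clause list and the limit are made up, and this is plain string handling, not a Solr API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitQueries {
    // Splits a long list of OR-ed clauses into several query strings,
    // each kept under maxLen characters, to stay below header-size limits.
    static List<String> split(List<String> clauses, int maxLen) {
        List<String> queries = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (String c : clauses) {
            // +4 accounts for the " OR " separator that would be added.
            if (cur.length() > 0 && cur.length() + c.length() + 4 > maxLen) {
                queries.add(cur.toString());
                cur.setLength(0);
            }
            if (cur.length() > 0) cur.append(" OR ");
            cur.append(c);
        }
        if (cur.length() > 0) queries.add(cur.toString());
        return queries;
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("id:1", "id:2", "id:3", "id:4");
        // prints: [id:1 OR id:2, id:3 OR id:4]
        System.out.println(split(ids, 12));
    }
}
```

Each resulting query is then executed separately and the results merged client-side.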
Best Regards,
Kranti K K Parisa
Parisa
Thanks Paul, I shall continue doing some more R&D with your inputs.
Best Regards,
Kranti K K Parisa
On Tue, May 25, 2010 at 12:54 PM, Paul Dhaliwal wrote:
> It depends on what kind of load you are talking about and what your
> expertise is.
>
> NGINX does perform better than