Hi,
My requirement is to write the index data into S3; we have Solr installed
on AWS instances. Please let me know if there is any documentation on how
to write the index data to S3.
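For reference, recent Solr releases (8.10 and later) ship an S3 backup repository module; older versions have no built-in S3 support, so there you would back up to local disk and copy to S3 yourself. A minimal configuration sketch for the newer module (bucket name and region are placeholders):

```xml
<!-- Sketch: S3 backup repository (Solr 8.10+, solr-s3-repository module).
     Bucket name and region below are assumptions, not defaults. -->
<backup>
  <repository name="s3" class="org.apache.solr.s3.S3BackupRepository" default="false">
    <str name="s3.bucket.name">my-solr-backups</str>
    <str name="s3.region">us-east-1</str>
  </repository>
</backup>
```

With this in place, the Collections API backup command can target the "s3" repository by name.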
Thanks
Hi,
Is there a way to hide the JVM properties from the Solr admin UI?
It shows some information which we don't want to expose. Any pointers would
be helpful.
Thanks
Hi,
Is there any way I can exclude stop words from the collations and
suggestions returned by the spellcheck component?
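One common approach, sketched below with assumed field-type and file names: build the spellchecker on a field whose analyzer includes a StopFilter, so stop words never enter the spellcheck dictionary and therefore cannot appear in suggestions or collations.

```xml
<!-- Hypothetical field type for the spellcheck source field.
     Stop words listed in stopwords.txt are removed at index time,
     so they never reach the spellcheck dictionary. -->
<fieldType name="text_spell" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
</fieldType>
```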
Regards,
Naveen Pajjuri.
Here, sample is the name of my collection.
Thanks
On Sun, Aug 7, 2016 at 3:10 PM, Naveen Pajjuri
wrote:
> Hi,
> I'm trying to move to solr-6.1.0. It was working fine, and I cleaned up the ZooKeeper
> data (version folder) and restarted Solr and ZooKeeper. I started getting the exception:
Specified config does not exist in ZooKeeper: sample.
Please let me know what I'm missing.
Regards,
Naveen Reddy.
Hi,
I'm trying to move from 4.10.4 to 6.1.0.
I want to define and use custom field types, but I read that it's not
advisable to modify the managed-schema file directly. How do I create custom
field types?
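One supported route, instead of hand-editing managed-schema, is the Schema API. A sketch, assuming a collection named mycollection and a hypothetical field type name:

```shell
# Add a custom field type through the Schema API rather than editing
# managed-schema by hand. Collection and type names are placeholders.
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycollection/schema' --data-binary '{
  "add-field-type": {
    "name": "my_custom_text",
    "class": "solr.TextField",
    "analyzer": {
      "tokenizer": { "class": "solr.StandardTokenizerFactory" },
      "filters": [ { "class": "solr.LowerCaseFilterFactory" } ]
    }
  }
}'
```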
Thanks in advance,
Naveen Reddy
Hi,
While sending updates to SolrCloud, I currently send updates to one of the
nodes in my cloud directly using HttpSolrServer. If I use CloudSolrServer
(by passing the ZooKeeper IPs) instead of HttpSolrServer, can I expect any
improvement in performance?
My basic question is how updates propagate when
Hi,
If I apply a sort order in Solr, when are the documents sorted?
1. Are documents sorted after the results are fetched?
2. Or do we get the documents already sorted?
Regards,
Naveen
Right now I'm instantiating CloudSolrServer with one ZooKeeper
machine's IP from the cluster, but if ZooKeeper on this machine dies my
production systems may break.
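A common mitigation, assuming a three-node ZooKeeper ensemble (host names below are hypothetical): pass the whole ensemble as a comma-separated zkHost string, so the client keeps working as long as a ZooKeeper quorum survives.

```java
// Sketch (SolrJ 4.x): connect CloudSolrServer to the full ZooKeeper
// ensemble rather than a single node. Host names are placeholders.
CloudSolrServer server =
    new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
server.setDefaultCollection("mycollection");
```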
Thanks,
Naveen.
earphones => ear phones in my synonyms.txt, and the datatype definition for
that field keywords is
Regards,
Naveen
Thanks, Shawn.
I was using an older version of SolrJ; upgrading it to a newer version worked.
Thank you.
On Thu, Jun 9, 2016 at 11:41 AM, Shawn Heisey wrote:
> On 6/8/2016 11:44 PM, Naveen Pajjuri wrote:
> > Trying to migrate from HttpSolrServer to CloudSolrServer. getting the
>
er.java:46)
whereas my clusterstate.json says --
"maxShardsPerNode":"1",
"router":{"name":"compositeId"},
"replicationFactor":"1".
Please advise.
PS: I'm using Solr 4.10.4.
Thanks,
Naveen.
Hi,
I am writing a Solr application; can anyone please let me know how to unit
test it?
I see the MiniSolrCloudCluster class available in Solr, but I am confused
about how to use it for unit testing.
How should I create an embedded server for unit testing?
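One way in, sketched under the assumption that you depend on the solr-test-framework artifact: extend SolrCloudTestCase, which manages an embedded MiniSolrCloudCluster for you. Collection, configset name, and path below are hypothetical.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;
import org.junit.Test;
import java.nio.file.Paths;

public class MySolrAppTest extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    // Start a 1-node embedded cluster and upload a configset from disk.
    configureCluster(1)
        .addConfig("conf", Paths.get("src/test/resources/configsets/conf"))
        .configure();
  }

  @Test
  public void testQuery() throws Exception {
    // Create a collection against the embedded cluster and run a query.
    CollectionAdminRequest.createCollection("test", "conf", 1, 1)
        .process(cluster.getSolrClient());
    cluster.getSolrClient().query("test", new SolrQuery("*:*"));
  }
}
```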
Thanks,
Naveen
Hi,
I am using the MiniSolrCloudCluster class to write unit test cases for
the Solr application.
It looks like there is an HttpClient library version mismatch with the Solr
version, and I am getting the error below:
java.lang.VerifyError: Bad return type
Exception Details:
Location:
org/apache
Hi Nagendra,
Thanks a lot. I will start working on NRT today; meanwhile, the old settings
(increased warmSearcher on the master) have not given me trouble till now,
but NRT will be more suitable for us. I will work on it and will
analyze the performance and share the results with you.
Thanks
Naveen
Nagendra
You wrote,
Naveen:
*NRT with Apache Solr 3.3 and RankingAlgorithm does not need a commit for a
document to become searchable*. Any document that you add through an update
becomes immediately searchable, so there is no need to commit from within your
update client code. Since there is no commit, the
options which we can
apply in order to optimize?
Thanks
Naveen
On Sun, Aug 14, 2011 at 9:42 PM, Erick Erickson wrote:
> Ah, thanks, Mark... I must have been looking at the wrong JIRAs.
>
> Erick
>
> On Sun, Aug 14, 2011 at 10:02 AM, Mark Miller
> wrote:
> >
> > On A
Hi,
Most of the settings are default.
We have a single node (memory 1 GB, index size 4 GB).
We have a requirement where we are committing very frequently. This is a kind of
real-time requirement where we are polling many threads from a third party and
indexing into our system.
We want these results to be av
, we were using
commitWithin of 10 secs, which was the root cause of documents taking so long
to index, because of the many segments to be committed.
A separate commit command using curl solved the issue.
The performance improved from 3 mins to 1.5 secs :)
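For reference, the separate commit described above can be issued like this (host, port, and update path assumed):

```shell
# Index documents without commitWithin, then issue one explicit commit
# at the end of the batch instead of committing per segment.
curl "http://localhost:8983/solr/update?commit=true"
```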
Thanks a lot
Naveen
On Thu, Aug
the index data, it is taking 9 secs.
What would be the approach to get better indexing performance while also
keeping the index size down at the same time?
The index size was around 4.5 GB
Thanks
Naveen
On Thu, Aug 11, 2011 at 3:47 PM, Peter Sturge wrote:
> Hi,
>
> When you get this exceptio
prior to finalize(), indicates a bug
-- POSSIBLE RESOURCE LEAK!!!
Kindly tell me where it is failing.
We have increased the lock timeout, but it still gives the same problem.
Thanks
Naveen
it should take 40 secs
to index 100,000 docs (if you have 10-12 fields defined). I forgot the link;
they talked about increasing the merge factor.
Thanks
Naveen
On Thu, Aug 4, 2011 at 7:05 AM, Erick Erickson wrote:
> What version of Solr are you using? If it's a recent versi
more thing, we have CPU utilization (20-25 % in all 4 cores) (using
htop)
Thanks
Naveen
On Thu, Aug 4, 2011 at 7:05 AM, Erick Erickson wrote:
> What version of Solr are you using? If it's a recent version, then
> optimizing is not that essential, you can do it during off hours, perhap
Sorry, I meant that for 15k docs it is taking 3 mins.
On Thu, Aug 4, 2011 at 10:07 PM, Naveen Gupta wrote:
> Hi,
>
> We have a requirement where we have almost 100,000 documents to
> be indexed (at least 20 fields each). None of these fields is longer
> than 10 KB.
>
>
factors we need to consider?
When should we consider an optimize?
Any other deviation from the defaults that would help us achieve the target?
We are allocating a JVM max heap of 512 MB; the default concurrent
mark-sweep collector is used for garbage collection.
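The JVM settings described above correspond roughly to flags like these (a sketch; the exact start command depends on how Solr is launched):

```shell
# 512 MB max heap with the concurrent mark-sweep collector,
# for a Jetty-based Solr start as an example.
java -Xmx512m -XX:+UseConcMarkSweepGC -jar start.jar
```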
Thanks
Naveen
Can somebody answer this?
What should be the best strategy for optimize (when we are indexing millions
of messages for a newly registered user)?
Thanks
Naveen
On Tue, Aug 2, 2011 at 5:36 PM, Naveen Gupta wrote:
> Hi
>
> We have a requirement where we are indexing all the messages of a
10k
threads, commit is called). We are not calling commit after every doc.
Secondly, how can we use multithreading, from a Solr perspective, to
improve JVM and resource utilization?
Thanks
Naveen
& field3
but it is not working.
Can you help in this regard?
What other config should I consider in the given context?
Thanks
Naveen
mand
Can we do it using POST?
Regards
Naveen
--
View this message in context:
http://lucene.472066.n3.nabble.com/ERROR-on-posting-update-request-using-CURL-in-php-tp3047312p3047372.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
curl http://localhost:8983/solr/update?commit=true -H "Content-Type:
text/xml" --data-binary 'testdoc'
Regards
Naveen
On Fri, Jun 10, 2011 at 10:18 AM, Naveen Gupta wrote:
> Hi
>
> This is my document
>
> in php
>
> $xmldoc = 'F_146 name=&
olog; expected '<'
at [row,col {unknown-source}]: [1,1]
description: The request sent by the client was syntactically incorrect (Unexpected
character ''' (code 39) in prolog; expected '<'
at [row,col {unknown-source}]: [1,1]).
Apache Tomcat/6.0.18
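For what it's worth, that Tomcat error usually means the posted body did not begin with '<', i.e. the XML never reached Solr intact (often a shell-quoting problem in the curl call). A minimal well-formed update sketch, with a hypothetical field value:

```shell
# The --data-binary payload must start with '<' or Solr's XML parser
# rejects it with "Unexpected character ... in prolog".
curl "http://localhost:8983/solr/update?commit=true" \
  -H "Content-Type: text/xml" \
  --data-binary '<add><doc><field name="id">doc1</field></doc></add>'
```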
Thanks
Naveen
. Is this working for
you? Because we are always getting a NumberFormatException. I posted in
the community as well, but till now no response has come.
Thanks
Naveen
On Thu, Jun 9, 2011 at 6:43 PM, Gary Taylor wrote:
> Naveen,
>
> Not sure our requirement matches yours, but one of the
this ... the concept of snippet kind of thing
...
Thanks
Naveen
On Wed, Jun 8, 2011 at 1:45 PM, Gary Taylor wrote:
> Naveen,
>
> For indexing Zip files with Tika, take a look at the following thread :
>
>
> http://lucene.472066.n3.nabble.com/Extracting-contents-of-zipped-files-w
at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
> at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
> at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
>
er.process(Http11Protocol.java:583)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:619)
Thanks
Naveen
using API)
I am basically a Java guy, so I can feel the problem.
Thanks
Naveen
2011/6/6 Tomás Fernández Löbbe
> 1. About the commit strategy, all the ExtractingRequestHandler (request
> handler that uses Tika to extract content from the input file) will do is
> extract the content of you
fields which are already defined in the schema, and a few
of them were required earlier, but for this purpose we don't want them required.
How can we have the two requirements together in the same schema?
3. Since commits are frequent, how can we use Solr multicore to separate write
and read operations?
Thanks
Naveen
Yes,
that is the one I used and it is working fine. Thanks to Nabble.
Thanks
Naveen
On Fri, Jun 3, 2011 at 4:02 PM, Gora Mohanty wrote:
> On Fri, Jun 3, 2011 at 3:55 PM, Naveen Gupta wrote:
> > Hi
> >
> > We want to post to solr server with some of the files (rtf,doc,etc) usi
Hi Pravesh
We don't have that setup right now, but we are thinking of doing it:
for writes we are going to have one instance, and for reads we are going to
have another.
Do you have another design in mind? If so, kindly share.
Thanks
Naveen
On Fri, Jun 3, 2011 at 2:50 PM, pravesh wrote:
Hi
We want to post some files (RTF, DOC, etc.) to the Solr server using
PHP. One way is to post using curl.
Is there a PHP client similar to the Java client (Solr Cell)?
URLs would also help.
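Short of a dedicated PHP client, the ExtractingRequestHandler can be driven directly from curl; the /update/extract path and literal.id parameter are the stock Solr Cell conventions, while the file and id values here are placeholders.

```shell
# Post a binary document to Solr Cell so Tika extracts and indexes it.
curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" \
  -F "myfile=@document.doc"
```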
Thanks
Naveen
/CommonQueryParameters
The callback is the name of a method which you define; after getting the
response, this method will be called (a callback mechanism).
Using the response from Solr (in JSON format), you then show or
analyze the response as per your business need.
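As a concrete sketch of the callback mechanism (core path and callback name are assumptions): Solr's json.wrf parameter wraps the JSON response in a function call, which the browser then executes.

```shell
# The response body arrives wrapped as: myCallback({"responseHeader":...});
curl "http://localhost:8983/solr/select?q=*:*&wt=json&json.wrf=myCallback"
```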
Thanks
Naveen
On Fri
different company name based on some
heuristic (hashing) (if it grows further).
I want to do it in the same Solr instance. Is that possible?
Thanks
Naveen
solr...
Since I am a newbie, can you please tell me if there are settings
which can keep track of incremental indexing?
Thanks
Naveen
version, which is already there in the lib folder.
I have been finding a lot of jars to deploy, and I am afraid that is
causing the problem.
Has somebody experienced the same?
Thanks
Naveen
On Fri, Jun 3, 2011 at 2:41 AM, Juan Grande wrote:
> Hi Naveen,
>
> Check if there is a dyna
eld 'attr_meta'
> description: The request sent by the client was syntactically incorrect (ERROR: unknown field
> 'attr_meta').
> Apache Tomcat/6.0.18
> root@weforpeople:/usr/share/solr1/lib#
>
>
> Please note
>
> I integrated Apache Tika 0.9 with apache-solr-1.4 locally on a Windows
> machine and am using Solr Cell.
>
> Calling the program works fine without any changes in the configuration.
>
> Thanks
> Naveen
>
>
I integrated Apache Tika 0.9 with apache-solr-1.4 locally on a Windows machine
and am using Solr Cell.
Calling the program works fine without any changes in the configuration.
Thanks
Naveen