Re: solr caching problem

2009-09-22 Thread satya
First of all, thanks a lot for the clarification. Is there any way to see
how this cache works internally, what objects are being stored, and how much
memory it is consuming, so that we can get a clear picture? And how can we
test performance through the cache?

On Tue, Sep 22, 2009 at 11:19 PM, Fuad Efendi  wrote:

> > 1) Then do you mean, if we delete a particular doc, then that is going
> > to be deleted from the cache also.
>
> When you delete a document and then COMMIT your changes, new caches will be
> warmed up (and prepopulated with some key-value pairs from the old
> instances), e.g.:
>
>   <documentCache
>     class="solr.LRUCache"
>     size="512"
>     initialSize="512"
>     autowarmCount="0"/>
>
> - this one won't be 'prepopulated'.
>
>
>
>
> > 2) In Solr, is the cache storing the entire document in memory, or only
> > references to the documents in memory?
>
> There are many different cache instances; the DocumentCache should store
> <id, Document> pairs, etc.
>
>
>
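
For inspecting those cache instances at runtime, Solr exposes per-cache
statistics (lookups, hits, hit ratio, evictions, current size) through the
admin MBeans handler, in addition to the admin statistics page. A minimal
sketch, assuming a core named "collection1" on the default port (core name
and port are placeholders, not from this thread):

    curl "http://localhost:8983/solr/collection1/admin/mbeans?stats=true&cat=CACHE&wt=json"

Note this reports entry counts and hit ratios per cache; it does not list
the individual cached documents.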


Re: solr caching problem

2009-09-23 Thread satya
Is there any way to analyze or see which documents are getting cached by
the documentCache:

   <documentCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>



On Wed, Sep 23, 2009 at 8:10 AM, satya  wrote:

> First of all, thanks a lot for the clarification. Is there any way to see
> how this cache works internally, what objects are being stored, and how
> much memory it is consuming, so that we can get a clear picture? And how
> can we test performance through the cache?
>
>
> On Tue, Sep 22, 2009 at 11:19 PM, Fuad Efendi  wrote:
>
>> > 1) Then do you mean, if we delete a particular doc, then that is going
>> > to be deleted from the cache also.
>>
>> When you delete a document and then COMMIT your changes, new caches will
>> be warmed up (and prepopulated with some key-value pairs from the old
>> instances), e.g.:
>>
>>   <documentCache
>>     class="solr.LRUCache"
>>     size="512"
>>     initialSize="512"
>>     autowarmCount="0"/>
>>
>> - this one won't be 'prepopulated'.
>>
>>
>>
>>
>> > 2) In Solr, is the cache storing the entire document in memory, or only
>> > references to the documents in memory?
>>
>> There are many different cache instances; the DocumentCache should store
>> <id, Document> pairs, etc.
>>
>>
>>
>


shingle query matching keyword tokenized field

2016-09-08 Thread Gandham, Satya
Can anyone help with this question that I posted on Stack Overflow?

http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field

Thanks in advance.


help with field definition

2016-09-13 Thread Gandham, Satya
Hi,

  I need help with defining a field ‘singerName’ with the right tokenizers
and filters such that it gives me the behavior described below:

I have a few documents as given below:

Doc 1
  singerName: Justin Beiber
Doc 2:
  singerName: Justin Timberlake
…


Below is the list of queries and the corresponding matches:

Query 1: “My fav artist Justin Beiber is very impressive”
Docs Matched : Doc1

Query 2: “I have a Justin Timberlake poster on my wall”
Docs Matched: Doc2

Query 3: “The name Bieber Justin is unique”
Docs Matched: None

Query 4: “Timberlake is a lake of timber..?”
Docs Matched: None.

I have described this in a bit more detail here:
http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field

I’d appreciate any help in addressing this problem.

Thanks !!
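
One field definition that should give this behavior, as a sketch: index
with KeywordTokenizer so the whole name becomes a single lowercased token,
and shingle the query side so adjacent word pairs from the query can line
up against it. The type name and analyzer chain below are illustrative
assumptions, not the thread's actual schema:

    <fieldType name="singerNameExact" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <!-- whole field value becomes one token, e.g. "justin beiber" -->
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <!-- split the query into words, then emit two-word shingles
             ("my fav", "fav artist", ..., "justin beiber") so one of
             them can equal the single indexed token -->
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
                outputUnigrams="true" tokenSeparator=" "/>
      </analyzer>
    </fieldType>

Shingles preserve word order, so Query 3 ("Bieber Justin") only produces
the shingle "bieber justin", which does not equal the indexed token.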



Re: help with field definition

2016-09-16 Thread Gandham, Satya
Hi Emir,

   Thanks for your reply. But I’m afraid I’m not seeing the expected 
response. I’ve included the query and the corresponding debug portion of the 
response:

select?q=Justin\ Beiber&df=exactName_noAlias_en_US
 
 Debug:
 
"rawquerystring":"Justin\\ Beiber",
"querystring":"Justin\\ Beiber",
"parsedquery":"+((exactName_noAlias_en_US:justin 
exactName_noAlias_en_US:justin beiber)/no_coord) 
+exactName_noAlias_en_US:beiber",
"parsedquery_toString":"+(exactName_noAlias_en_US:justin 
exactName_noAlias_en_US:justin beiber) +exactName_noAlias_en_US:beiber",
"explain":{},


Satya.

On 9/16/16, 2:46 AM, "Emir Arnautovic"  wrote:

Hi,

I missed that you had already defined the field and were having trouble
with the query (I had not read the Stack Overflow post). I added an answer
there, but just in case somebody else runs into similar trouble: the issue
is how the query is written - the space has to be escaped:

   q=Justin\ Bieber

Regards,
Emir

On 13.09.2016 23:27, Gandham, Satya wrote:
> Hi,
>
> I need help with defining a field ‘singerName’ with the right tokenizers
> and filters such that it gives me the behavior described below:
>
> I have a few documents as given below:
>
> Doc 1
>singerName: Justin Beiber
> Doc 2:
>singerName: Justin Timberlake
> …
>
>
> Below is the list of queries and the corresponding matches:
>
> Query 1: “My fav artist Justin Beiber is very impressive”
> Docs Matched : Doc1
>
> Query 2: “I have a Justin Timberlake poster on my wall”
> Docs Matched: Doc2
>
> Query 3: “The name Bieber Justin is unique”
> Docs Matched: None
>
> Query 4: “Timberlake is a lake of timber..?”
> Docs Matched: None.
>
> I have described this in a bit more detail here:
http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field
>
> I’d appreciate any help in addressing this problem.
>
> Thanks !!
>

-- 
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/





Re: help with field definition

2016-09-16 Thread Gandham, Satya
Great, that worked. Thanks Ray and Emir for the solutions.



On 9/16/16, 3:49 PM, "Ray Niu"  wrote:

Just add q.op=OR to change default operator to OR and it should work
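
Spelled out, the working request combines both answers - the escaped space
from Emir plus Ray's q.op (field and query taken from this thread):

    select?q=Justin\ Beiber&df=exactName_noAlias_en_US&q.op=OR

In the debug output quoted below, the parser had made the trailing unigram
required (+exactName_noAlias_en_US:beiber), which can never match the
keyword-tokenized index; with q.op=OR the two-word shingle "justin beiber"
alone is sufficient to match.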

2016-09-16 12:44 GMT-07:00 Gandham, Satya :

> Hi Emir,
>
>Thanks for your reply. But I’m afraid I’m not seeing the
> expected response. I’ve included the query and the corresponding debug
> portion of the response:
>
> select?q=Justin\ Beiber&df=exactName_noAlias_en_US
>
>  Debug:
>
> "rawquerystring":"Justin\\ Beiber",
> "querystring":"Justin\\ Beiber",
> "parsedquery":"+((exactName_noAlias_en_US:justin
> exactName_noAlias_en_US:justin beiber)/no_coord) +exactName_noAlias_en_US:
> beiber",
> "parsedquery_toString":"+(exactName_noAlias_en_US:justin
> exactName_noAlias_en_US:justin beiber) +exactName_noAlias_en_US:beiber",
> "explain":{},
>
>
> Satya.
>
> On 9/16/16, 2:46 AM, "Emir Arnautovic" 
> wrote:
>
> Hi,
>
> I missed that you had already defined the field and were having trouble
> with the query (I had not read the Stack Overflow post). I added an answer
> there, but just in case somebody else runs into similar trouble: the issue
> is how the query is written - the space has to be escaped:
>
>q=Justin\ Bieber
>
> Regards,
> Emir
>
> On 13.09.2016 23:27, Gandham, Satya wrote:
> > Hi,
> >
> > I need help with defining a field ‘singerName’ with the right tokenizers
> > and filters such that it gives me the behavior described below:
> >
> > I have a few documents as given below:
> >
> > Doc 1
> >singerName: Justin Beiber
> > Doc 2:
> >singerName: Justin Timberlake
> > …
> >
> >
> > Below is the list of queries and the corresponding matches:
> >
> > Query 1: “My fav artist Justin Beiber is very impressive”
> > Docs Matched : Doc1
> >
> > Query 2: “I have a Justin Timberlake poster on my wall”
> > Docs Matched: Doc2
> >
> > Query 3: “The name Bieber Justin is unique”
> > Docs Matched: None
> >
> > Query 4: “Timberlake is a lake of timber..?”
> > Docs Matched: None.
> >
> > I have described this in a bit more detail here:
> http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-
> keyword-tokenized-field
> >
> > I’d appreciate any help in addressing this problem.
> >
> > Thanks !!
> >
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
>




Out of Memory Errors

2017-06-14 Thread Satya Marivada
Hi,

I am getting Out of Memory errors after a while on solr-6.3.0.
The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
hook just kills the JVM right after.
Using JConsole, I see the nice sawtooth pattern where the heap fills up and
is reclaimed.

The heap size is set at 3g. The index size hosted on that particular node
is 17G.

java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000

Looking at solr_gc.log.0, the Eden space is being used 100% all the while
and successfully reclaimed, so I don't think that has anything to do with
it.

Apart from that, in solr.log I see exceptions that are the aftermath of
killing the JVM:

org.eclipse.jetty.io.EofException: Closed
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)

Any suggestions on how to go about it?

Thanks,
Satya
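
A quick way to correlate this with heap behavior while it happens, as a
sketch using standard JDK tools (<pid> is the Solr process id, a
placeholder):

    # heap-generation occupancy and GC counts, sampled every second
    jstat -gcutil <pid> 1000

    # live-object histogram, to see what is filling the heap
    jmap -histo:live <pid> | head -30

Note that java.lang.OutOfMemoryError is not always about the heap; the
"unable to create new native thread" variety, for example, is about OS
limits, so the heap graphs can look perfectly healthy while it occurs.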


Re: Out of Memory Errors

2017-06-14 Thread Satya Marivada
Susheel, please see attached. The heap toward the end of the graph has
spiked.



On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar 
wrote:

> You may have GC logs saved from when the OOM happened. Can you graph them
> in GCViewer or similar and share?
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am getting Out of Memory Errors after a while on solr-6.3.0.
> > The
> -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> > just kills the jvm right after.
> > Using JConsole, I see the nice sawtooth pattern where the heap fills up
> > and is reclaimed.
> >
> > The heap size is set at 3g. The index size hosted on that particular node
> > is 17G.
> >
> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:
> > CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> >
> > Looking at solr_gc.log.0, the Eden space is being used 100% all the
> > while and successfully reclaimed, so I don't think that has anything to
> > do with it.
> >
> > Apart from that, in solr.log I see exceptions that are the aftermath of
> > killing the JVM:
> >
> > org.eclipse.jetty.io.EofException: Closed
> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> > at org.apache.commons.io.output.ProxyOutputStream.write(
> > ProxyOutputStream.java:90)
> > at org.apache.solr.common.util.FastOutputStream.flush(
> > FastOutputStream.java:213)
> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(
> > FastOutputStream.java:206)
> > at org.apache.solr.common.util.JavaBinCodec.marshal(
> > JavaBinCodec.java:136)
> >
> > Any suggestions on how to go about it?
> >
> > Thanks,
> > Satya
> >
>


node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Hi,

I am running solr-6.3.0. There are 4 nodes; when I start Solr, only 3 nodes
join the cloud, and the fourth one comes up separately without joining the
other 3 nodes. Please see the picture of the admin screen below showing how
the fourth node is not joining. Any suggestions?

Thanks,
satya


[image: image.png]


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Here is the image:

https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0

The node on 002, port 15101, is missing from the cloud.

On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson 
wrote:

> Images don't come through the mailer, they're stripped. You'll have to put
> it somewhere else and provide a link.
>
> Best,
> Erick
>
>
> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada  >
> wrote:
>
> > Hi,
> >
> > I am running solr-6.3.0. There are 4 nodes; when I start Solr, only 3
> > nodes join the cloud, and the fourth one comes up separately without
> > joining the other 3 nodes. Please see the picture of the admin screen
> > below showing how the fourth node is not joining. Any suggestions?
> >
> > Thanks,
> > satya
> >
> >
> > [image: image.png]
> >
>


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Never mind. I had a different ZooKeeper config on the second VM, which
brought up a separate cloud.
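
A quick way to confirm which ensemble each node registered with is to list
the cloud state in each ZooKeeper, for example with the zkcli script that
ships with Solr (host names here are placeholders):

    server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd list | grep live_nodes

A node pointed at a different ensemble will appear under that ensemble's
/live_nodes instead of the one the other three nodes use.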

On Fri, Jun 16, 2017, 8:48 PM Satya Marivada 
wrote:

> Here is the image:
>
> https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0
>
> The node on 002, port 15101, is missing from the cloud.
>
> On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson 
> wrote:
>
>> Images don't come through the mailer, they're stripped. You'll have to put
>> it somewhere else and provide a link.
>>
>> Best,
>> Erick
>>
>>
>> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada <
>> satya.chaita...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am running solr-6.3.0. There are 4 nodes; when I start Solr, only 3
>> > nodes join the cloud, and the fourth one comes up separately without
>> > joining the other 3 nodes. Please see the picture of the admin screen
>> > below showing how the fourth node is not joining. Any suggestions?
>> >
>> > Thanks,
>> > satya
>> >
>> >
>> > [image: image.png]
>> >
>>
>


cpu utilization high

2017-07-18 Thread Satya Marivada
Hi All,

We are using solr-6.3.0 with external ZooKeeper. The setup is as below. Poi
is the big collection, about 20G, with each shard at 10G. Each JVM has 3G
of heap and the VMs have 70G of RAM. There are 6 processors.

CPU utilization when running queries is reaching more than 100%. Any
suggestions? Should I increase the number of CPUs on each VM? Is it true
that a shard can be searched by only one CPU, i.e. that multiple CPUs
cannot be put to work on the same shard because it does not support
multithreading?

Or should the shard be split further? Each poi shard is now at 10G with
about 8 million documents.
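
If splitting is the route taken, the Collections API can do it online; a
sketch assuming the collection and shard names used here (host and port
are placeholders):

    curl "http://host:8983/solr/admin/collections?action=SPLITSHARD&collection=poi&shard=shard1"

Each sub-shard becomes its own core, so separate CPUs can then work on what
was one shard's data. (A single query against a single shard core is
largely single-threaded in this Solr version, but concurrent queries do
spread across CPUs.)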


[image: image.png]


indexed vs queried documents count mismatch

2017-08-24 Thread Satya Marivada
Hi,

I have a weird situation. When I index the documents from the admin console
by doing "clean, commit and optimize", it shows 17,920,274 documents
indexed when the indexing completes.

When I query for all the documents in that collection from the query tab of
the Solr admin console, it shows 17,948,826 documents. It is odd that the
query for all documents returns 28,552 more documents.

Any suggestions/thoughts?

Thanks,
Satya
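
One way to localize such a mismatch is to compare per-core counts directly,
bypassing the distributed merge; a sketch with placeholder host and core
names:

    curl "http://host:8983/solr/mycoll_shard1_replica1/select?q=*:*&rows=0&distrib=false"
    curl "http://host:8983/solr/mycoll_shard2_replica1/select?q=*:*&rows=0&distrib=false"

If the per-shard numFound values sum to more than the source count, the
extra documents likely carry uniqueKey values the current import did not
send (for example, leftovers from an earlier run).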


Re: indexed vs queried documents count mismatch

2017-08-24 Thread Satya Marivada
Source is a database: 17,920,274 records in the db
Indexed documents per the admin screen: 17,920,274
Querying the collection: 17,948,826

Thanks,
Satya

On Thu, Aug 24, 2017 at 3:44 PM Susheel Kumar  wrote:

> Does this happen again if you repeat the above? How many total docs does
> the DIH query/source show, to compare with Solr?
>
> On Thu, Aug 24, 2017 at 3:23 PM, Satya Marivada  >
> wrote:
>
> > Hi,
> >
> > I have a weird situation. When I index the documents from the admin
> > console by doing "clean, commit and optimize", it shows 17,920,274
> > documents indexed when the indexing completes.
> >
> > When I query for all the documents in that collection from the query tab
> > of the Solr admin console, it shows 17,948,826 documents. It is odd that
> > the query for all documents returns 28,552 more documents.
> >
> > Any suggestions/thoughts.
> >
> > Thanks,
> > Satya
> >
>


solr index replace with index from another environment

2017-08-28 Thread Satya Marivada
Hi there,

We are using solr-6.3.0 and need to replace the Solr index in production
with the Solr index from another environment on a periodic basis. But the
JVMs have to be recycled for the updated index to take effect. Is there any
way this can be achieved without restarting the JVMs?

Using aliases as described below is an alternative, but I don't think it is
useful in my case, where I already have the index from the other
environment ready. If I build a new collection and replace its index, the
JVMs again need to be restarted for the new index to take effect.

https://stackoverflow.com/questions/45158394/replacing-old-indexed-data-with-new-data-in-apache-solr-with-zero-downtime
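
For reference, the alias swap that the link describes comes down to a
single Collections API call; the names below are hypothetical:

    curl "http://host:8983/solr/admin/collections?action=CREATEALIAS&name=myindex&collections=myindex_v2"

Queries against the alias open the new collection without a JVM restart,
and the old collection can be kept for rollback or dropped.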

Any other suggestions please.

Thanks,
satya


solr warning - filling logs

2017-02-26 Thread Satya Marivada
Hi All,

I have configured Solr with SSL and enabled HTTP authentication. It is all
working fine for the Solr admin page, indexing, and querying. One
bothersome thing is that it is filling up the logs every second saying "No
Authority". I have configured the host name, port, and authentication
parameters correctly in all config files, so I am not sure where it is
coming from. Any suggestions? Really appreciate it. This is solr-6.3.0
cloud with embedded ZooKeeper. Could it be a bug in solr-6.3.0, or am I
missing some configuration?

2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
at org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)


Re: solr warning - filling logs

2017-02-26 Thread Satya Marivada
May I ask about the port scanner you mentioned? Can you please elaborate?
Sure, I will try to move out to an external ZooKeeper.

On Sun, Feb 26, 2017 at 7:07 PM Dave  wrote:

> You shouldn't use the embedded ZooKeeper with Solr; it's just for
> development, not anywhere near worthy of production. Otherwise it looks
> like you may have a port scanner running. In any case, don't use the ZK
> that comes with Solr.
>
> > On Feb 26, 2017, at 6:52 PM, Satya Marivada 
> wrote:
> >
> > Hi All,
> >
> > I have configured Solr with SSL and enabled HTTP authentication. It is
> > all working fine for the Solr admin page, indexing, and querying. One
> > bothersome thing is that it is filling up the logs every second saying
> > "No Authority". I have configured the host name, port, and
> > authentication parameters correctly in all config files, so I am not
> > sure where it is coming from. Any suggestions? Really appreciate it.
> > This is solr-6.3.0 cloud with embedded ZooKeeper. Could it be a bug in
> > solr-6.3.0, or am I missing some configuration?
> >
> > 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> > r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
> > java.lang.IllegalArgumentException: No Authority for
> > HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> > java.lang.IllegalArgumentException: No Authority
> > at
> >
> org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
> > at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> > at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> > at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> > at
> >
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> > at
> >
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> > at org.eclipse.jetty.io
> > .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> > at
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> > at org.eclipse.jetty.io
> > .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> > at org.eclipse.jetty.io
> > .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> > at
> >
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> > at
> >
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> > at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> > at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> > at java.lang.Thread.run(Thread.java:745)
>


solr warning - filling logs

2017-02-27 Thread Satya Marivada
Hi All,

I have configured Solr with SSL and enabled HTTP authentication. It is all
working fine for the Solr admin page, indexing, and querying. One
bothersome thing is that it is filling up the logs every second saying "No
Authority". I have configured the host name, port, and authentication
parameters correctly in all config files, so I am not sure where it is
coming from. Any suggestions? Really appreciate it. This is solr-6.3.0
cloud with embedded ZooKeeper. Could it be a bug in solr-6.3.0, or am I
missing some configuration?

2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
at org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)


Re: solr warning - filling logs

2017-03-03 Thread Satya Marivada
Dave and All,

The below exception stops happening when I change the startup port to
something other than the one in the original startup. The warning happens
when Solr was first started on a port without SSL and is then started on
the same port with SSL enabled. But I really need to use the original port
that I had. Any suggestions for getting around this?

Thanks,
Satya

java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@a01eef8{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
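
For what it's worth, this warning comes from Jetty failing to parse the
request's Host header (the "authority"), so one plausible reproduction - an
assumption, not something confirmed in this thread - is a request arriving
with an empty Host header, e.g.:

    # hypothetical reproduction: TLS request with an empty Host header
    curl -k -H "Host:" "https://host:15101/solr/"

which would fit the theory that some client that learned the pre-SSL
endpoint is still hitting the old port with mismatched requests.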


On Sun, Feb 26, 2017 at 8:00 PM Dave  wrote:

> I don't know about your network setup, but a port scanner can be an IT
> security device that, well, scans ports looking to see if they're open.
>
> > On Feb 26, 2017, at 7:14 PM, Satya Marivada 
> wrote:
> >
> > May I ask about the port scanner running? Can you please elaborate?
> > Sure, will try to move out to external zookeeper
> >
> >> On Sun, Feb 26, 2017 at 7:07 PM Dave 
> wrote:
> >>
> >> You shouldn't use the embedded zookeeper with solr, it's just for
> >> development not anywhere near worthy of being out in production.
> Otherwise
> >> it looks like you may have a port scanner running. In any case don't use
> >> the zk that comes with solr
> >>
> >>> On Feb 26, 2017, at 6:52 PM, Satya Marivada  >
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I have configured solr with SSL and enabled http authentication. It is
> >> all
> >>> working fine on the solr admin page, indexing and querying process. One
> >>> bothering thing is that it is filling up logs every second saying no
> >>> authority, I have configured host name, port and authentication
> >> parameters
> >>> right in all config files. Not sure, where is it coming from. Any
> >>> suggestions, please. Really appreciate it. It is with sol-6.3.0 cloud
> >> with
> >>> embedded zookeeper. Could it be some bug with solr-6.3.0 or am I
> missing
> >>> some configuration?
> >>>
> >>> 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> >>> r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse
> exception:
> >>> java.lang.IllegalArgumentException: No Authority for
> >>> HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> >>> java.lang.IllegalArgumentException: No Authority
> >>> at
> >>>
> >>
> org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
> >>> at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> >>> at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> >>> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at
> >>
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at org.eclipse.jetty.io
> >>> .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> >>> at java.lang.Thread.run(Thread.java:745)
> >>
>


Re: solr warning - filling logs

2017-03-03 Thread Satya Marivada
There is nothing else running on the port I am trying to use (15101); 15102
works fine.

On Fri, Mar 3, 2017 at 2:25 PM Satya Marivada 
wrote:

> Dave and All,
>
> The below exception stops happening when I change the startup port to
> something other than the one in the original startup. The warning happens
> when Solr was first started on a port without SSL and is then started on
> the same port with SSL enabled. But I really need to use the original port
> that I had. Any suggestions for getting around this?
>
> Thanks,
> Satya
>
> java.lang.IllegalArgumentException: No Authority for
> HttpChannelOverHttp@a01eef8{r=0,c=false,a=IDLE,uri=null}
> java.lang.IllegalArgumentException: No Authority
> at org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
>
>
> On Sun, Feb 26, 2017 at 8:00 PM Dave  wrote:
>
> I don't know about your network setup, but a port scanner can be an IT
> security device that, well, scans ports looking to see if they're open.
>
> > On Feb 26, 2017, at 7:14 PM, Satya Marivada 
> wrote:
> >
> > May I ask about the port scanner running? Can you please elaborate?
> > Sure, will try to move out to external zookeeper
> >
> >> On Sun, Feb 26, 2017 at 7:07 PM Dave 
> wrote:
> >>
> >> You shouldn't use the embedded zookeeper with solr, it's just for
> >> development not anywhere near worthy of being out in production.
> Otherwise
> >> it looks like you may have a port scanner running. In any case don't use
> >> the zk that comes with solr
> >>
> >>> On Feb 26, 2017, at 6:52 PM, Satya Marivada  >
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I have configured solr with SSL and enabled http authentication. It is
> >> all
> >>> working fine on the solr admin page, indexing and querying process. One
> >>> bothering thing is that it is filling up logs every second saying no
> >>> authority, I have configured host name, port and authentication
> >> parameters
> >>> right in all config files. Not sure, where is it coming from. Any
> >>> suggestions, please. Really appreciate it. It is with sol-6.3.0 cloud
> >> with
> >>> embedded zookeeper. Could it be some bug with solr-6.3.0 or am I
> missing
> >>> some configuration?
> >>>
> >>> 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> >>> r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse
> exception:
> >>> java.lang.IllegalArgumentException: No Authority for
> >>> HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> >>> java.lang.IllegalArgumentException: No Authority
> >>> at
> >>>
> >>
> org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
> >>> at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> >>> at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> >>> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at
> >>
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at org.eclipse.jetty.io
> >>> .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> >>> at java.lang.Thread.run(Thread.java:745)
> >>
>
>


solr directories

2017-03-09 Thread Satya Marivada
Hi,

We had Solr running on embedded ZooKeeper and moved to external ZooKeeper.
As part of this setup, on the same VM, we did a fresh Solr distribution
setup and a fresh ZooKeeper distribution setup, and created a new solrdata
folder to hold the nodes. All the old folders were archived (zipped and
backed up). What puzzles me is that the new deployment, pointing to the
external ZooKeeper, still shows on the Solr admin console the old
collections that were created on the embedded ZooKeeper. How is that
possible when I did a clean, fresh install of the Solr and ZooKeeper
distributions? Is the old collection information stored in another location
apart from the solr-6.3.0 distribution package, the zookeeper-3.4.9
package, and the solrdata folder? Would Solr write into any other
directories?

Thanks,
Satya


solr-6.3.0 error port is running already

2017-05-02 Thread Satya Marivada
Hi,

I am suddenly getting the below exception with solr-6.3.0:
"null:org.apache.solr.common.SolrException: A previous ephemeral live node
still exists. Solr cannot continue. Please ensure that no other Solr
process using the same port is running already."

We are using external ZooKeeper and have restarted Solr many times. There
is no Solr running on those ports already. Any suggestions? It looks like a
bug. I had started using the JMX option and then started getting this;
turning JMX off, I still get the same issue.

We are in a time crunch, so any workaround to get it started would be
helpful. I am not sure where Solr is seeing that port when everything is
started clean.

Thanks,
Satya


Re: solr-6.3.0 error port is running already

2017-05-02 Thread Satya Marivada
Any ideas?  "null:org.apache.solr.common.SolrException: A previous
ephemeral live node still exists. Solr cannot continue. Please ensure that
no other Solr process using the same port is running already."

Not sure if JMX enablement has caused this.

Thanks,
Satya

On Tue, May 2, 2017 at 3:10 PM Satya Marivada 
wrote:

> Hi,
>
> I am getting the below exception all of a sudden with solr-6.3.0.
> "null:org.apache.solr.common.SolrException: A previous ephemeral live node
> still exists. Solr cannot continue. Please ensure that no other Solr
> process using the same port is running already."
>
> We are using external ZooKeeper and have restarted Solr many times. There
> is no Solr running on those ports already. Any suggestions? It looks like
> a bug. I had started using the JMX option and then started getting this;
> turning JMX off, I still get the same issue.
>
> We are in a time crunch, so any workaround to get it started would be
> helpful. I am not sure where Solr is seeing that port when everything is
> started clean.
>
> Thanks,
> Satya
>


SessionExpiredException

2017-05-03 Thread Satya Marivada
Hi,

I see below exceptions in my logs sometimes. What could be causing it?

org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /overseer
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)

Thanks,
Satya


Re: solr-6.3.0 error port is running already

2017-05-03 Thread Satya Marivada
Hi Rick and Erick,

Thanks for responding. I made sure that there is no other process running
on that port.

Also, when this happened, the admin page showed all the nodes as live even
though some of them were down. So I went ahead and emptied the ZooKeeper
data directory, where all the configuration is stored in the version-2
folder. I then had to upload the configuration again and place my index
fresh into Solr.

It then came up fine. I had been playing with JMX parameters to be passed
to the JVM before this started to happen; not sure if that has something to
do with it.

Thanks,
Satya

On Wed, May 3, 2017 at 7:01 AM Rick Leir  wrote:

> Here it is on Fedora/Redhat/Centos (similar on Unix like systems). Look
> for other processes which might already be listening on the port you
> want to listen on:
>
> $ sudo netstat --inet -lp -4
> Active Internet connections (only servers)
> Proto Recv-Q Send-Q Local Address   Foreign Address
> State   PID/Program name
> tcp0  0 0.0.0.0:imaps 0.0.0.0:*   LISTEN
> 2577/dovecot
> tcp0  0 0.0.0.0:pop3s 0.0.0.0:*   LISTEN
> 2577/dovecot
>
> ...
>
> $ grep pop3s /etc/services
> pop3s   995/tcp # POP-3 over SSL
> pop3s   995/udp # POP-3 over SSL
>
> I mention pop3s just because you will run into misleading port numbers
> which alias some well known services which are listed in the services file.
>
> cheers -- Rick
>
> On 2017-05-02 05:09 PM, Rick Leir wrote:
> > Satya
> > Say netstat --inet -lP
> > You might need to add -ipv4 to that command. The P might be lower case
> (I am on the bus!). And the output might show misleading service names, see
> /etc/services.
> > Cheers-- Rick
> >
> > On May 2, 2017 3:10:30 PM EDT, Satya Marivada 
> wrote:
> >> Hi,
> >>
> >> I am getting the below exception all of a sudden with solr-6.3.0.
> >> "null:org.apache.solr.common.SolrException: A previous ephemeral live
> >> node
> >> still exists. Solr cannot continue. Please ensure that no other Solr
> >> process using the same port is running already."
> >>
> >> We are using external zookeeper and have restarted solr many times.
> >> There
> >> is no solr running on those ports already. Any suggestions. Looks like
> >> a
> >> bug. Had started using jmx option and then started getting it. Turned
> >> jmx
> >> off, still getting the same issue.
> >>
> >> We are in crunch of time, any workaround to get it started would be
> >> helpful. Not sure where solr is seeing that port, when everything is
> >> started clean.
> >>
> >> Thanks,
> >> Satya
>
>


solr 6.3.0 monitoring

2017-05-03 Thread Satya Marivada
Hi,

We stood up Solr 6.3.0 with external ZooKeeper 3.4.9. We are moving to
production and setting up monitoring for Solr, to check that all cores of a
collection are up. Similarly, any pointers toward monitoring the entire
collection, or any other suggestions, would be useful.

For ZooKeeper, we are planning to use the MNTR command to check its status.

Thanks,
Satya


solr authentication error

2017-05-04 Thread Satya Marivada
Hi,


Can someone please say what I am missing in this case? I have Solr 6.3.0
with HTTP authentication enabled, and the configuration has been uploaded
to ZooKeeper. But I do see the below error in the logs sometimes. Are the
nodes not able to communicate because of this error? I am not seeing any
loss of functionality.

Authentication for the admin screen works great.

In solr.in.sh, should I set SOLR_AUTHENTICATION_OPTS?



There was a problem making a request to the
leader:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at https://:15111/solr: Expected mime type
application/octet-stream but got text/html. The response body was this HTML
error page:

Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/admin/cores. Reason:
    require authentication
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:561)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1647)
at org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:471)
at org.apache.solr.cloud.ZkController.access$500(ZkController.java:119)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:335)
at org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:168)
at org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:57)
at org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:142)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
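
For context, with the BasicAuthPlugin the configuration uploaded to
ZooKeeper is a security.json along these lines; this is essentially the
reference guide's starter example (user "solr", password "SolrRocks"), not
this thread's actual config:

    {
      "authentication": {
        "blockUnknown": true,
        "class": "solr.BasicAuthPlugin",
        "credentials": {
          "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
        }
      },
      "authorization": {
        "class": "solr.RuleBasedAuthorizationPlugin",
        "user-role": { "solr": "admin" },
        "permissions": [ { "name": "security-edit", "role": "admin" } ]
      }
    }

With this in place, inter-node requests are normally covered by the
PKIAuthenticationPlugin rather than SOLR_AUTHENTICATION_OPTS, so a 401 on
/solr/admin/cores from a peer suggests checking that every node sees the
same security.json (and that clocks are in sync, since the PKI mechanism
uses timestamped keys).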


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
Hi Piyush and Shawn,

May I ask what the solution is, if it is the long GC pauses? I suspect the
same problem in our case too. We have started with 3G of memory for the
heap. Did you have to adjust the memory allotted? Very much appreciated.

Thanks,
Satya

On Sat, May 6, 2017 at 12:36 PM Piyush Kunal 
wrote:

> We have already faced this issue and found the cause to be long GC pauses
> on either the client side or the server side.
> Regards,
> Piyush
>
> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey  wrote:
>
> > On 5/3/2017 7:32 AM, Satya Marivada wrote:
> > > I see below exceptions in my logs sometimes. What could be causing it?
> > >
> > > org.apache.zookeeper.KeeperException$SessionExpiredException:
> >
> > Based on my limited research, this would tend to indicate that the
> > heartbeats ZK uses to detect when sessions have gone inactive are not
> > occurring in a timely fashion.
> >
> > Common causes seem to be:
> >
> > JVM Garbage collections.  These can cause the entire JVM to pause for an
> > extended period of time, and this time may exceed the configured
> timeouts.
> >
> > Excess client connections to ZK.  ZK limits the number of connections
> > from each client address, with the idea of preventing denial of service
> > attacks.  If a client is misbehaving, it may make more connections than
> > it should.  You can try increasing the limit in the ZK config, but if
> > this is the reason for the exception, then something's probably wrong,
> > and you may be just hiding the real problem.
> >
> > Although we might have bugs causing the second situation, the first
> > situation seems more likely.
> >
> > Thanks,
> > Shawn
> >
> >
>


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
The 3g heap is doing well, performing a GC at 600-700 MB.

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Here are my JVM startup parameters:

java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
-XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
-Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
-DzkClientTimeout=15000 ...

On Mon, May 8, 2017 at 11:50 AM Walter Underwood 
wrote:

> Which garbage collector are you using? The default GC will probably give
> long pauses.
>
> You need to use CMS or G1.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> > On May 8, 2017, at 8:48 AM, Erick Erickson 
> wrote:
> >
> > 3G of memory should not lead to long GC pauses unless you're running
> > very close to the edge of available memory. Paradoxically, running
> > with 6G of memory may lead to _fewer_ noticeable pauses since the
> > background threads can do the work, well, in the background.
> >
> > Best,
> > Erick
> >
> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
> >  wrote:
> >> Hi Piyush and Shawn,
> >>
> >> May I ask what is the solution for it, if it is the long gc pauses? I am
> >> skeptical about the same problem in our case too. We have started with
> 3G
> >> of memory for the heap.
> >> Did you have to adjust some of the memory allotted? Very much
> appreciated.
> >>
> >> Thanks,
> >> Satya
> >>
> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal 
> >> wrote:
> >>
> >>> We already faced this issue and found out the issue to be long GC
> pauses
> >>> itself on either client side or server side.
> >>> Regards,
> >>> Piyush
> >>>
> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey 
> wrote:
> >>>
> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
> >>>>> I see below exceptions in my logs sometimes. What could be causing
> it?
> >>>>>
> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
> >>>>
> >>>> Based on my limited research, this would tend to indicate that the
> >>>> heartbeats ZK uses to detect when sessions have gone inactive are not
> >>>> occurring in a timely fashion.
> >>>>
> >>>> Common causes seem to be:
> >>>>
> >>>> JVM Garbage collections.  These can cause the entire JVM to pause for
> an
> >>>> extended period of time, and this time may exceed the configured
> >>> timeouts.
> >>>>
> >>>> Excess client connections to ZK.  ZK limits the number of connections
> >>>> from each client address, with the idea of preventing denial of
> service
> >>>> attacks.  If a client is misbehaving, it may make more connections
> than
> >>>> it should.  You can try increasing the limit in the ZK config, but if
> >>>> this is the reason for the exception, then something's probably wrong,
> >>>> and you may be just hiding the real problem.
> >>>>
> >>>> Although we might have bugs causing the second situation, the first
> >>>> situation seems more likely.
> >>>>
> >>>> Thanks,
> >>>> Shawn
> >>>>
> >>>>
> >>>
>
>


OutOfMemoryError and Too many open files

2017-05-08 Thread Satya Marivada
Hi,

Started getting the below errors/exceptions. I have listed the resolutions
inline. Could you please see if I am headed right?

The below error says that no more threads can be created because the limit
has been reached. We have a big index, and I assume the threads are being
created outside of the JVM and failing because of a low ulimit setting for
nproc (4096). It has been increased to 131072. This number can be found
with ulimit -u.

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:214)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.solr.common.cloud.SolrZkClient$3.process(SolrZkClient.java:268)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

The below error says that no more files can be opened because the limit has
been reached. It has been increased from 4096 to 65536. This number can be
found with ulimit -Hn and ulimit -Sn.

java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:382)
at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:593)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
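
To make the raised limits permanent for the service account, they can go in
/etc/security/limits.conf; a sketch using the values above, with "solr" as
a placeholder account name:

    # /etc/security/limits.conf
    solr  soft  nofile  65536
    solr  hard  nofile  65536
    solr  soft  nproc   131072
    solr  hard  nproc   131072

On some distributions nproc is additionally capped in
/etc/security/limits.d/ (e.g. a 90-nproc.conf file), which can silently
override the value set here.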

Thanks,
Satya


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
This is on solr-6.3.0 and external zookeeper 3.4.9

On Wed, May 3, 2017 at 11:39 PM Zheng Lin Edwin Yeo 
wrote:

> Are you using SolrCloud with external ZooKeeper, or Solr's internal
> ZooKeeper?
>
> Also, which version of Solr are you using?
>
> Regards,
> Edwin
>
> On 3 May 2017 at 21:32, Satya Marivada  wrote:
>
> > Hi,
> >
> > I see below exceptions in my logs sometimes. What could be causing it?
> >
> > org.apache.zookeeper.KeeperException$SessionExpiredException:
> > KeeperErrorCode = Session expired for /overseer
> > at
> > org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> > at
> > org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
> >
> > Thanks,
> > Satya
> >
>


Re: SessionExpiredException

2017-05-11 Thread Satya Marivada
For the SessionExpiredException, Solr throws this exception and then the
shard goes down.

From the following discussion, it seems that Solr is losing its connection
to ZooKeeper and throwing the exception. In the ZooKeeper configuration
file, zoo.cfg, is it safe to increase the syncLimit shown in the snippet
below?


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/sanfs/mnt/vol01/solr/zookeeperdata/2
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

Thanks,
Satya

On Mon, May 8, 2017 at 12:04 PM Satya Marivada 
wrote:

> The 3g heap is doing well, performing a GC at 600-700 MB.
>
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>
> Here are my JVM startup parameters:
>
> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
> -XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
> -Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
> -DzkClientTimeout=15000 ...
>
> On Mon, May 8, 2017 at 11:50 AM Walter Underwood 
> wrote:
>
>> Which garbage collector are you using? The default GC will probably give
>> long pauses.
>>
>> You need to use CMS or G1.
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>>
>>
>> > On May 8, 2017, at 8:48 AM, Erick Erickson 
>> wrote:
>> >
>> > 3G of memory should not lead to long GC pauses unless you're running
>> > very close to the edge of available memory. Paradoxically, running
>> > with 6G of memory may lead to _fewer_ noticeable pauses since the
>> > background threads can do the work, well, in the background.
>> >
>> > Best,
>> > Erick
>> >
>> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
>> >  wrote:
>> >> Hi Piyush and Shawn,
>> >>
>> >> May I ask what is the solution for it, if it is the long gc pauses? I
>> am
>> >> skeptical about the same problem in our case too. We have started with
>> 3G
>> >> of memory for the heap.
>> >> Did you have to adjust some of the memory allotted? Very much
>> appreciated.
>> >>
>> >> Thanks,
>> >> Satya
>> >>
>> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal 
>> >> wrote:
>> >>
>> >>> We already faced this issue and found out the issue to be long GC
>> pauses
>> >>> itself on either client side or server side.
>> >>> Regards,
>> >>> Piyush
>> >>>
>> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey 
>> wrote:
>> >>>
>> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
>> >>>>> I see below exceptions in my logs sometimes. What could be causing
>> it?
>> >>>>>
>> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>> >>>>
>> >>>> Based on my limited research, this would tend to indicate that the
>> >>>> heartbeats ZK uses to detect when sessions have gone inactive are not
>> >>>> occurring in a timely fashion.
>> >>>>
>> >>>> Common causes seem to be:
>> >>>>
>> >>>> JVM Garbage collections.  These can cause the entire JVM to pause
>> for an
>> >>>> extended period of time, and this time may exceed the configured
>> >>> timeouts.
>> >>>>
>> >>>> Excess client connections to ZK.  ZK limits the number of connections
>> >>>> from each client address, with the idea of preventing denial of
>> service
>> >>>> attacks.  If a client is misbehaving, it may make more connections
>> than
>> >>>> it should.  You can try increasing the limit in the ZK config, but if
>> >>>> this is the reason for the exception, then something's probably
>> wrong,
>> >>>> and you may be just hiding the real problem.
>> >>>>
>> >>>> Although we might have bugs causing the second situation, the first
>> >>>> situation seems more likely.
>> >>>>
>> >>>> Thanks,
>> >>>> Shawn
>> >>>>
>> >>>>
>> >>>
>>
>>


Re: SessionExpiredException

2017-05-11 Thread Satya Marivada
> For the SessionExpiredException, Solr throws this exception and then the
> shard goes down.
>
> From the following discussion, it seems that Solr is losing its connection
> to ZooKeeper and throwing the exception. In the ZooKeeper configuration
> file, zoo.cfg, is it safe to increase the syncLimit shown in the snippet
> below?
>
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=10
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=5
> # the directory where the snapshot is stored.
> # do not use /tmp for storage, /tmp here is just
> # example sakes.
> dataDir=/sanfs/mnt/vol01/solr/zookeeperdata/2
> # the port at which the clients will connect
> clientPort=2181
> # the maximum number of client connections.
> # increase this if you need to handle more clients
> #maxClientCnxns=60
>
> Thanks,
> Satya
>
> On Mon, May 8, 2017 at 12:04 PM Satya Marivada 
> wrote:
>
>> The 3g memory is doing well, performing a gc at 600-700 MB.
>>
>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>>
>> Here are my jvm start up
>>
>> The start up parameters are:
>>
>> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
>> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
>> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
>> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
>> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
>> -XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
>> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
>> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
>> -Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
>> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
>> -DzkClientTimeout=15000 ...
>>
>> On Mon, May 8, 2017 at 11:50 AM Walter Underwood 
>> wrote:
>>
>>> Which garbage collector are you using? The default GC will probably give
>>> long pauses.
>>>
>>> You need to use CMS or G1.
>>>
>>> wunder
>>> Walter Underwood
>>> wun...@wunderwood.org
>>> http://observer.wunderwood.org/  (my blog)
>>>
>>>
>>> > On May 8, 2017, at 8:48 AM, Erick Erickson 
>>> wrote:
>>> >
>>> > 3G of memory should not lead to long GC pauses unless you're running
>>> > very close to the edge of available memory. Paradoxically, running
>>> > with 6G of memory may lead to _fewer_ noticeable pauses since the
>>> > background threads can do the work, well, in the background.
>>> >
>>> > Best,
>>> > Erick
>>> >
>>> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
>>> >  wrote:
>>> >> Hi Piyush and Shawn,
>>> >>
>>> >> May I ask what is the solution for it, if it is the long gc pauses? I
>>> am
>>> >> skeptical about the same problem in our case too. We have started
>>> with 3G
>>> >> of memory for the heap.
>>> >> Did you have to adjust some of the memory allotted? Very much
>>> appreciated.
>>> >>
>>> >> Thanks,
>>> >> Satya
>>> >>
>>> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal >> >
>>> >> wrote:
>>> >>
>>> >>> We already faced this issue and found out the issue to be long GC
>>> pauses
>>> >>> itself on either client side or server side.
>>> >>> Regards,
>>> >>> Piyush
>>> >>>
>>> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey 
>>> wrote:
>>> >>>
>>> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
>>> >>>>> I see below exceptions in my logs sometimes. What could be causing
>>> it?
>>> >>>>>
>>> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>> >>>>
>>> >>>> Based on my limited research, this would tend to indicate that the
>>> >>>> heartbeats ZK uses to detect when sessions have gone inactive are
>>> not
>>> >>>> occurring in a timely fashion.
>>> >>>>
>>> >>>> Common causes seem to be:
>>> >>>>
>>> >>>> JVM Garbage collections.  These can cause the entire JVM to pause
>>> for an
>>> >>>> extended period of time, and this time may exceed the configured
>>> >>> timeouts.
>>> >>>>
>>> >>>> Excess client connections to ZK.  ZK limits the number of
>>> connections
>>> >>>> from each client address, with the idea of preventing denial of
>>> service
>>> >>>> attacks.  If a client is misbehaving, it may make more connections
>>> than
>>> >>>> it should.  You can try increasing the limit in the ZK config, but
>>> if
>>> >>>> this is the reason for the exception, then something's probably
>>> wrong,
>>> >>>> and you may be just hiding the real problem.
>>> >>>>
>>> >>>> Although we might have bugs causing the second situation, the first
>>> >>>> situation seems more likely.
>>> >>>>
>>> >>>> Thanks,
>>> >>>> Shawn
>>> >>>>
>>> >>>>
>>> >>>
>>>
>>>


file descriptors and threads differing

2017-05-12 Thread Satya Marivada
Hi All,

We have a weird problem: many threads are being opened, crashing
the app in PP7 (one of our environments). It is the same index and the same
versions of Solr (6.3.0) and ZooKeeper (3.4.9) in both environments.
The Java minor version differs (1.8.0_102 in PP8 (another of our
environments), shown below, vs 1.8.0_121 in PP7).

Any ideas why the open file and thread counts are higher in one environment
than the other?

Open files are counted by looking for fds under the /proc/<pid> directory,
and threads by ps -elfT | wc -l.
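
For per-process counts, a minimal sketch (the pid is hypothetical):

PID=12345                  # substitute the actual Solr pid
ls /proc/$PID/fd | wc -l   # open file descriptors for that process
ps -o nlwp= -p $PID        # thread count (NLWP column)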

[image: pasted1]

Thanks,
Satya


Re: file descriptors and threads differing

2017-05-12 Thread Satya Marivada
We have the same ulimits in both cases.

/proc/2/fd:
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 124 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 125 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 126 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 127 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 128 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar
lr-x------ 1 Dgisse pg014921_gisse 64 May 12 09:52 129 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar


The same process is opening the same file many times. In Linux, shouldn't
only one fd be opened and then referenced by all threads in the process?

In the other environment, where the JVM differs in minor version, it opens
fewer duplicates of the same files. We are trying to set both environments
to the same JVM level and will see how it goes.


Thanks,
Satya


On Fri, May 12, 2017 at 3:41 PM Erick Erickson 
wrote:

> Check the system settings with ulimit. Differing numbers of user processes
> or open files can cause things like this to be different on different
> boxes.
>
> Can't speak to the Java version.
>
> Best,
> Erick
>
> On Fri, May 12, 2017 at 11:56 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > We have a weird problem, with the threads being opened many and crashing
> > the app in PP7 (one of our environment). It is the same index, same
> version
> > of solr (6.3.0) and zookeeper (3.4.9) in both environments.
> > Java minor version is different (1.8.0_102 in PP8 (one of our
> environment)
> > shown below vs 1.8.0_121 in PP7(one other environment)).
> >
> > Any ideas around why files open and threads are more in one environment
> vs
> > other.
> >
> > Files open is found by looking for fd in /proc/ directory and threads
> > found by ps -elfT | wc -l.
> >
> > [image: pasted1]
> >
> > Thanks,
> > Satya
> >
>


indexing with pdf files problem

2010-07-12 Thread satya swaroop
hi all,
  i am working with solr on tomcat. the indexing is good for xml files,
but when i send doc or html files or pdf's through curl i get a lazy
loading error. can you tell me the way to fix it? the output below is what
i get when i send a pdf file. i am working in ubuntu. solr home is
/opt/example; tomcat is /opt/tomcat6.


Apache Tomcat/6.0.26 - Error report
HTTP Status 500 - lazy loading error

org.apache.solr.common.SolrException: lazy loading error
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.getWrappedHandler(RequestHandlers.java:249)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:231)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.solr.common.SolrException:
java.lang.NullPointerException
at
org.apache.solr.handler.extraction.ExtractingRequestHandler.inform(ExtractingRequestHandler.java:76)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.getWrappedHandler(RequestHandlers.java:244)
... 16 more
Caused by: java.lang.NullPointerException
at
org.apache.tika.mime.MimeTypesFactory.create(MimeTypesFactory.java:73)
at
org.apache.tika.mime.MimeTypesFactory.create(MimeTypesFactory.java:90)
at org.apache.tika.config.TikaConfig.(TikaConfig.java:99)
at org.apache.tika.config.TikaConfig.(TikaConfig.java:84)
at org.apache.tika.config.TikaConfig.(TikaConfig.java:61)
at
org.apache.solr.handler.extraction.ExtractingRequestHandler.inform(ExtractingRequestHandler.java:74)
... 17 more

Re: indexing with pdf files problem

2010-07-13 Thread satya swaroop
hi,
   I installed Tika, put its jar files into the solr home library, and also
gave the path to the Tika configuration file. But the error is the same.
the tika config file is as follows:
[the tika-config XML was stripped of its tags by the mail archive; the
recoverable content is the list of mime-type entries it defined:
application/xml (with the Dublin Core namespace
http://purl.org/dc/elements/1.1/), application/msword,
application/vnd.ms-excel, application/vnd.ms-powerpoint, text/html,
application/x-asp, application/rtf, application/pdf, text/plain, and
application/vnd.sun.xml.writer / application/vnd.oasis.opendocument.text]
with regards,
swaroop


indexing rich documents

2010-07-13 Thread satya swaroop
Hi all,
 i am new to solr; i followed the wiki and got the solr admin
running successfully. It is going well for xml files, but i am unable to
index the rich documents. I followed the wiki for the richer documents
also, but i didn't get it to work. The error that comes when i send a
pdf/html file is a lazy loading error. can anyone give a detailed
description of how to make rich documents indexable?
 i use tomcat and work in ubuntu. The home directory for solr is
/opt/solr/example and catalina home is /opt/tomcat6.


thanks & regards,
 swaroop




Re: indexing rich documents

2010-07-13 Thread satya swaroop
hi,
yes, i followed the wiki; can you now tell me the procedure for it?
  regards,
   swaroop


Re: indexing rich documents

2010-07-13 Thread satya swaroop
ya, i checked the extraction request handler but couldn't get the
info... i installed tika-0.7 and copied the jar files into the solr
home library.. when i start sending the pdf/html files i get a lazy
loading error. i am using tomcat and solr 1.4


problem with storing??

2010-07-15 Thread satya swaroop
Hi all,
   i am new to solr and i followed the wiki and got everything going
right. But when i send any html/txt/pdf documents the response is as
follows:



<response>
  <lst name="responseHeader"><int name="status">0</int><int name="QTime">576</int></lst>
</response>


but when i search in solr i don't find the result. can anyone tell me
what to do..??
The curl command i used for the above o/p is

curl 'http://localhost:8080/solr/update/extract?literal.id=doc1000&commit=true&fmap.content=text' -F "myfile=@java.pdf"

regards,
 satya


Re: problem with storing??

2010-07-15 Thread satya swaroop
hi,
   i sent the commit after adding the documents, but the problem is the same

regards,
  satya


no response

2010-07-15 Thread satya swaroop
Hi all,
i have a problem with solr: when i send documents (.doc) i am
not getting any response.
  example:
 sa...@geodesic-desktop:~/Desktop$ curl "http://localhost:8080/solr/update/extract?stream.file=/home/satya/Desktop/InvestmentDecleration.doc&stream.contentType=application/msword&literal.id=Invest.doc"
sa...@geodesic-desktop:~/Desktop$


could anybody tell me what to do??


Re: no response

2010-07-16 Thread satya swaroop
hi,
   i am sorry, the mail you sent was in my sent mail... I didn't look at
it. I am going to check now.. I will definitely tell you the entire thing

regards,
  satya


Re: problem with storing??

2010-07-16 Thread satya swaroop
hi,
I checked out the admin page and it is indexing for other file types. In
the log files i don't get anything when i send the documents. I checked the
catalina (tomcat) log. I changed the dismax handler default from q=*:* to
an empty q. I at least get a response when i send pdf/html files, but i
don't even get one for the doc files


regards,
  swaroop


Re: problem with storing??

2010-07-18 Thread satya swaroop
hi all,
   now solr is working well. i am working in ubuntu and i was indexing
documents which don't have read permissions, so that was the problem. i
thank all of you for your replies to my queries.
  thanking you,
   satya


spell checking....

2010-07-26 Thread satya swaroop
hi all,
i am new to solr and was able to implement indexing of documents
by following the solr wiki. now i am trying to add spellchecking. i
followed the spellcheck component page in the wiki but am not getting the
suggested spellings. i first built it with spellcheck.build=true,...

here i give u the example:::

http://localhost:8080/solr/spell?q=javs&spellcheck=true&spellcheck.collate=true



[the response XML was stripped by the mail archive; it contained an empty
spellcheck suggestions section]
here the response should actually suggest "java" but didn't..

can anyone guide me about it...
 i am using solr 1.4 and tomcat on ubuntu





Regards,
swarup


Re: spell checking....

2010-07-26 Thread satya swaroop
This is in solrconfig.xml:

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="accuracy">0.7</str>
    <str name="buildOnCommit">true</str>
    <str name="buildOnOptimize">true</str>
  </lst>

  <lst name="spellchecker">
    <str name="name">jarowinkler</str>
    <str name="field">lowerfilt</str>
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="buildOnCommit">true</str>
    <str name="buildOnOptimize">true</str>
  </lst>

  <str name="queryAnalyzerFieldType">textSpell</str>
</searchComponent>


 i added the following in the standard request handler:

<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.onlyMorePopular">false</str>
    <str name="spellcheck.extendedResults">false</str>
    <str name="spellcheck.count">1</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>

spell checking problem

2010-07-29 Thread satya swaroop
hi all,
  i need some help with spellchecking. i configured my solrconfig and
schema by looking at the user mailing list, and here i give you the
configuration i made..

my schema.xml:

[the fieldType and field definitions were stripped of their XML tags by
the mail archive; nothing beyond the bare structure is recoverable]

my solrconfig.xml:
--
<requestHandler name="/spellcheck" class="solr.SearchHandler" lazy="true">
  <lst name="defaults">
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.onlyMorePopular">false</str>
    <str name="spellcheck.extendedResults">false</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">spellText</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">name</str>
    <str name="spellcheckIndexDir">./spell</str>
    <str name="buildOnCommit">true</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">jarowinkler</str>
    <str name="field">spell</str>
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
    <str name="spellcheckIndexDir">./spellcheckerjaro</str>
  </lst>
</searchComponent>



1) the problem here is that for the default dictionary the index is getting
created, and if i write "jawa" the suggestions it gives are data, sata..
but the actual suggestion should be "java". I have nearly 20 java docs
indexed
2) another problem: if i run a build for the jarowinkler dictionary, which
uses the "spell" field, it does not create the dictionary and i only see
segments.gen and segments_1 in its directory


regards,
satya


indexing???

2010-08-12 Thread satya swaroop
Hi all,
   The indexing part of solr is going well, but i got an error on indexing
a single pdf file. when i searched for the error in the mailing list i
found that the error was due to the copyright protection of that file.
can't we index a file which has copyright or other digital rights
protection???

regards,
  satya


Re: indexing???

2010-08-16 Thread satya swaroop
hi all,
   the error i got is ""Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@8210fc"" when i indexed a file similar
to the one in https://issues.apache.org/jira/browse/PDFBOX-709
(samplerequestform.pdf). can't we index those types of files in solr???

regards,
satya


stream.url problem

2010-08-17 Thread satya swaroop
hi all,
   i am indexing documents to solr that are on my system. now i need
to index files that are on a remote system. i enabled remote streaming
(setting it to true) in solrconfig.xml, and when i use stream.url it shows
the error ""connection refused"". the detail of the error is:

when i sent the request in my browser as:

http://localhost:8080/solr/update/extract?stream.url=http://remotehost/home/san/Desktop/programming_erlang_armstrong.pdf&literal.id=schb2

i get the error as

HTTP Status 500 - Connection refused java.net.ConnectException: Connection
refused at sun.reflect.GeneratedConstructorAccessor11.newInstance(Unknown
Source) at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at
sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
at java.security.AccessController.doPrivileged(Native Method) at
sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
at
org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:161)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323) at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619) Caused by:
java.net.ConnectException: Connection refused at
java.net.PlainSocketImpl.socketConnect(Native Method) at
java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at
java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at
java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at
java.net.Socket.connect(Socket.java:525) at
java.net.Socket.connect(Socket.java:475) at
sun.net.NetworkClient.doConnect(NetworkClient.java:163) at
sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at
sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at
sun.net.www.http.HttpClient.(HttpClient.java:233) at
sun.net.www.http.HttpClient.New(HttpClient.java:306) at
sun.net.www.http.HttpClient.New(HttpClient.java:323) at
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
at
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
at
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
at
sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
at java.net.URLConnection.getContentType(URLConnection.java:485) at
org.apache.solr.common.util.ContentStreamBase$URLStream.(ContentStreamBase.java:81)
at
org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:136)
at
org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:116)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
...


if anybody knows,
please help me with this

regards,
satya


Re: indexing???

2010-08-17 Thread satya swaroop
hi,

1) i use tika 0.8...

2)the url is  https://issues.apache.org/jira/browse/PDFBOX-709 and the
file is samplerequestform.pdf

 3) the entire error is:
curl "
http://localhost:8080/solr/update/extract?stream.file=/home/satya/my_workings/satya_ebooks/8-Linux/samplerequestform.pdf&literal.id=linuxc
"



  Apache Tomcat/6.0.26 - Error report
HTTP Status 500 - org.apache.tika.exception.TikaException:
Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@1d688e2

org.apache.solr.common.SolrException:
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@1d688e2
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:214)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.tika.exception.TikaException: Unexpected
RuntimeException from org.apache.tika.parser.pdf.pdfpar...@1d688e2
at
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:144)
at
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:112)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:193)
... 18 more
Caused by: java.lang.ClassCastException:
org.apache.pdfbox.pdmodel.font.PDFontDescriptorAFM cannot be cast to
org.apache.pdfbox.pdmodel.font.PDFontDescriptorDictionary
at
org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.ensureFontDescriptor(PDTrueTypeFont.java:167)
at
org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.<init>(PDTrueTypeFont.java:117)
at
org.apache.pdfbox.pdmodel.font.PDFontFactory.createFont(PDFontFactory.java:140)
at
org.apache.pdfbox.pdmodel.font.PDFontFactory.createFont(PDFontFactory.java:76)
at org.apache.pdfbox.pdmodel.PDResources.getFonts(PDResources.java:115)
at
org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:225)
at
org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at
org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at
org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at
org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at
org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:142)
... 21 more

solr working...

2010-08-18 Thread satya swaroop
hi all,
i am very interested to know the working of solr. can anyone tell me
which modules or classes get invoked when we start the servlet
container like tomcat, or when we send any requests to solr (like sending
pdf files), or what gets invoked at the start of solr??

regards,
satya


/update/extract

2010-08-19 Thread satya swaroop
Hi all,
   when we handle the extract request handler, what class gets invoked? I
need to know the navigation of classes when we send any files to solr.
can anybody tell me the classes or any sources where i can get the answer?
or can anyone tell me what classes get invoked when we start
solr? I would be thankful if anybody can help me regarding this..
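
From solrconfig.xml i can at least see the mapping for the handler; the
stock entry is of this shape (as in the example solrconfig that ships with
solr):

<requestHandler name="/update/extract"
                class="org.apache.solr.handler.extraction.ExtractingRequestHandler"
                startup="lazy"/>

is that class the entry point i should start reading from?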

Regards,
satya


Re: stream.url problem

2010-08-24 Thread satya swaroop
>
> Hi all,
> I got the solution to my problem. I had changed my port number but
> kept the old one in the stream.url... so that was the problem...
> thanks all
>
> Now i have another problem: when i send requests to the remote
> system for files whose names contain characters that need escaping, like
> "&" and space (for example Tom&Jerry.pdf), i get an error: "Unexpected
> end of file from server"...
>
> the request i sent is::
>
> curl "
> http://localhost:8080/solr/update/extract?stream.url=http://remotehost:8011/file_download.yaws?file=Wireless%20Lan.pdf&literal.id=su8
> "
>
> here file_download.yaws is a module that fetches the file and gives it to
> solr.
>
> solr is able to index files on the remote system that don't contain such
> characters in their names, e.g. apache.txt, solr_apache.pdf
>
> the error i got is:::
>
> HTTP Status 500 - Unexpected end of file from server
> java.net.SocketException: Unexpected end of file from server at
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at
> sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
> at java.security.AccessController.doPrivileged(Native Method) at
> sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
> at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
> at
> org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
> at
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:161)
> at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:57)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:133)
> at
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1355) at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:340)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
> at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
> at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:619) Caused by:
> java.net.SocketException: Unexpected end of file from server at
> sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769) at
> sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
> sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:766) at
> sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1072)
> at
> sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
> at java.net.URLConnection.getContentType(URLConnection.java:485) at
> org.apache.solr.common.util.ContentStreamBase$URLStream.(ContentStreamBase.java:81)
> at
> org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:138)
> at
> org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:117)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
> ...
>



Regards,
 satya


reduce the content???

2010-08-25 Thread satya swaroop
Hi all,
  i indexed nearly 100 java pdf files which are of large size (min 1MB).
solr is showing the results with the entire content that it indexed,
which is taking time. can't we reduce the content it shows, or can i
just have the file names and ids instead of the entire content in the
results? see the example below.
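
For instance, something like this is what i am after (assuming my stored
fields are named id and filename):

curl "http://localhost:8080/solr/select?q=java&fl=id,filename"

where fl would restrict which fields come back in each doc.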

Regards,
satya


solr working...

2010-08-26 Thread satya swaroop
Hi all,
  I am interested to see the working of solr.
1) Can anyone tell me where to start to understand how it works?

Regards,
satya


Re: solr working...

2010-08-26 Thread satya swaroop
Hi peter,
I am already working with solr and it is working well. But i want
to understand the code and know where the actual work is going on:
how indexing is done, how the requests are parsed, how it responds, and
all the rest. To understand the code, i asked how to start???

Regards,
satya


Re: solr working...

2010-08-26 Thread satya swaroop
Hi all,

  Thanks for your response and information. I used slf4j logging and i kept
a log.info call in every class of the solr module to know which classes get
invoked for a particular request handler or at the start of solr. I was
able to do it only in the solr module but not in the lucene module... i get
an error when i use it in that module.. can anyone tell me other ways like
this to trace the path through solr?
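
Or is attaching a debugger the usual way? Something like starting tomcat
under JPDA (a sketch; 8000 is just the conventional port):

JPDA_ADDRESS=8000 JPDA_TRANSPORT=dt_socket $CATALINA_HOME/bin/catalina.sh jpda start

and then stepping through the request from an IDE?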

Regards,
  satya


stream.url

2010-09-02 Thread satya swaroop
Hi all,

  I am using stream.url to index files on a remote system. when i
use the url as
1) curl "
http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws?file=yaws_presentation.pdf&literal.id=schb4
"
it works and i get a response saying the file got indexed.

but when i use
2) curl "
http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws?file=solr & apache.pdf&literal.id=schb5
"
i get an error in solr... i replaced the characters that need escaping,
using %20 for space and %26 for &, but the error is the same, saying

""Unexpected end of file from server java.net.SocketException..""

when i use it without solr, as http://remotehost:port/file_download.yaws?file=solr
& apache.pdf, i get the file downloaded to my system.

I enclose the entire error here:

HTTP Status 500 - Unexpected end of file from server
java.net.SocketException: Unexpected end of file from server at
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at
sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
at java.security.AccessController.doPrivileged(Native Method) at
sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
at
org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:169)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:57)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:133)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1355) at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:340)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619) Caused by:
java.net.SocketException: Unexpected end of file from server at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769) at
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:766) at
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1072)
at
sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
at java.net.URLConnection.getContentType(URLConnection.java:485) at
org.apache.solr.common.util.ContentStreamBase$URLStream.(ContentStreamBase.java:81)
at
org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:138)
at
org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:117)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
... 12 more


can anybody provide information regarding this??


Regards,
Satya


Re: stream.url

2010-09-02 Thread satya swaroop
Hi stefan,
   I used escape characters and it worked... It is not a problem for
the single file 'solr &apache', but it shows the same problem for files
like Wireless lan.ppt, Tom info.pdf.

the curl i sent is::

curl "
http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws%3Ffile=solr<http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws?file=solr>
%20%26%20apache.pdf&literal.id=schb5"

Regards,
satya


Re: stream.url

2010-09-02 Thread satya swaroop
Hi,
I made the curl request from the shell (command prompt or terminal) with
the characters escaped, but the error is the same. when i look on the
remote system, the request is not arriving there. Is there anything to be
changed in the config file in order to enable escaped characters for
stream.url?

Did anybody try indexing files on a remote system through stream.url, where
the file names contain characters like & or space?

regards,
satya


Re: stream.url

2010-09-03 Thread satya swaroop
Hi all,

  I am unable to index files on the remote system whose names contain
escaped characters. i think there is a problem in solr with
indexing such files on a remote system...
Has anybody tried to index files on a remote system whose names contain
escaped characters? solr is working fine for files that have no
escaped characters in their names.


I sent the request through curl with the filename encoded in url format,
but the problem is the same...

Regards,
satya


Re: stream.url

2010-09-08 Thread satya swaroop
Hi Hoss,

 Thanks for the reply; it is working now. The reason was, as you
said, that i was not double escaping. i used %2520 for whitespace and it
is working now.
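
For the archives, the working request is of this form (host, module and id
as in my earlier mails, the id illustrative; %2520 decodes once to %20 at
solr and again to a space on the remote side):

curl "http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws%3Ffile=Wireless%2520Lan.pdf&literal.id=schb6"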

Thanks,
satya


cloud or zookeeper

2010-09-14 Thread satya swaroop
Hi All,
   What is the difference between using shards, solr cloud, and zookeeper?
which is the best way to scale solr?
 I need to reduce the index size on every system and reduce the search time
for a query...

Regards,
satya


SolrCloud new....

2010-09-20 Thread satya swaroop
Hi all,
I am having 4 instances of solr on 4 systems. Each system has a
single instance of solr.. I want results from all these servers. I came
to know about using solrcloud. I read about it, worked through the example,
and it was working as given in the wiki.
I am using solr 1.4 and apache tomcat. In order to implement cloud from the
solr trunk, what procedure should be followed?
1) Should i copy the libraries from cloud to trunk???
2) should i keep the cloud module on every system???
3) I am not using any cores in solr; it is a single solr on every
system. can solrcloud support that??
4) the example is given with jetty. Is it done the same way in tomcat???

Regards,
satya


ant package

2010-09-21 Thread satya swaroop
Hi all,
i want to build the package of my solr and i found it can be done
using ant. When i type ant package in the solr module i get an error:


sa...@swaroop:~/temporary/trunk/solr$ ant package
Buildfile: build.xml

maven.ant.tasks-check:

BUILD FAILED
/home/satya/temporary/trunk/solr/common-build.xml:522:
##
  Maven ant tasks not found.
  Please make sure the maven-ant-tasks jar is in ANT_HOME/lib, or made
  available to Ant using other mechanisms like -lib or CLASSPATH.
  ##

Total time: 0 seconds


can anyone tell me the procedure to build it or give any information about
it..

Regards,
satya


Re: ant package

2010-09-21 Thread satya swaroop
HI ,
  ya, i don't have the jar file in ant/lib. where can i get the jar
file, or what is the procedure to build that maven-artifact-ant-2.0.4-dep.jar??
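
I guess i need something like this (url and version illustrative - whatever
location hosts the maven ant tasks jar):

wget http://repo1.maven.org/maven2/org/apache/maven/maven-artifact-ant/2.0.4/maven-artifact-ant-2.0.4-dep.jar -P $ANT_HOME/lib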

regards,
satya


Re: ant package

2010-09-21 Thread satya swaroop
Hi erick,
 thanks for the reply; i downloaded the jar file and kept it
in the ant library.
now when i run the ant package command i get an error in the middle of the
build, in generate-maven-artifacts... the error is

sa...@geodesic-desktop:~/temporary/trunk/solr$ sudo  ant  package
---
---
---
generate-maven-artifacts:
[mkdir] Created dir: /home/satya/temporary/trunk/solr/build/maven
[mkdir] Created dir: /home/satya/temporary/trunk/solr/dist/maven
 [copy] Copying 1 file to
/home/satya/temporary/trunk/solr/build/maven/src/maven
[artifact:install-provider] Installing provider:
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-2

BUILD FAILED
/home/satya/temporary/trunk/solr/build.xml:853: The following error occurred
while executing this line:
/home/satya/temporary/trunk/solr/common-build.xml:373: artifact:deploy
doesn't support the "uniqueVersion" attribute

Total time: 1 minute 51 seconds
sa...@desktop:~/temporary/trunk/solr$

Regards,
satya


ant build problem

2010-10-04 Thread satya swaroop
Hi all,
i updated my solr trunk to revision 1004527. when i compile
the trunk with ant i get many warnings, but the build is successful. the
warnings are here:
common.compile-core:
[mkdir] Created dir:
/home/satya/temporary/trunk/lucene/build/classes/java
[javac] Compiling 475 source files to
/home/satya/temporary/trunk/lucene/build/classes/java
[javac] warning: [path] bad path element
"/usr/share/ant/lib/hamcrest-core.jar": no such file or directory
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:455:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
    [javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:705:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
    [javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:812:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]       ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:983:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/search/FieldCacheImpl.java:209:
warning: [unchecked] unchecked cast
[javac] found   : java.lang.Object
[javac] required: T
[javac] key.creator.validate( (T)value, reader);
[javac]  ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/search/FieldCacheImpl.java:278:
warning: [unchecked] unchecked call to
Entry(java.lang.String,org.apache.lucene.search.cache.EntryCreator) as a
member of the raw type org.apache.lucene.search.FieldCacheImpl.Entry
[javac] return (ByteValues)caches.get(Byte.TYPE).get(reader, new
Entry(field, creator));
[... many similar warnings elided ...]

[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files additionally use unchecked or unsafe
operations.
[javac] 100 warnings

BUILD SUCCESSFUL
Total time: 19 seconds


here i placed only the start of the warnings.
After compiling, i thought to check with ant test and ran it, but it
failed..

i didn't find any hamcrest-core.jar in my ant library....
i use ant 1.7.1


Regards,
satya


solr requirements

2010-10-18 Thread satya swaroop
Hi All,
I am planning to have a separate server for solr, and regarding
hardware requirements i have a doubt about what configuration is needed.
I know it will be hard to tell, but i just need a minimum requirement for
the particular situation as follows:


1) There are 1000 regular users using solr, and every day each user indexes
10 files of 1KB each; in total that is 10MB per day, and it keeps
growing...???

2) How much RAM is used by solr in general???

Thanks,
satya


Re: solr requirements

2010-10-18 Thread satya swaroop
Hi,
   here is some more info about it. I use Solr to output only the file
names (file ids). Here i enclose the fields in my schema.xml; presently i
have only about 40MB of indexed data.


[the schema.xml field definitions were stripped of their XML tags by the
mail archive; nothing is recoverable]



Regards,
satya


RAM increase

2010-10-20 Thread satya swaroop
Hi all,
  I increased my RAM size to 8GB and i want 4GB of it to be used
by solr itself. can anyone tell me the way to allocate that RAM to
solr?
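
For example, is it just a matter of passing heap flags to tomcat's JVM,
something like this (a sketch; setenv.sh may need to be created):

# in $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"

or is there a solr-specific setting?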


Regards,
satya


solr result....

2010-10-27 Thread satya swaroop
Hi ,
  Can the result of solr show only a part of the content of a
document that matched?
example:

if i send a query to search for tika, then the result should be as follows:


[the response XML was stripped by the mail archive; the responseHeader
showed status 0 and QTime 79, and the single matching doc had content type
text/html and id 1html; its stored text field contained the entire indexed
content, beginning:]
   Apache Tomcat/6.0.26 - Error reportHTTP Status 500 -
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@cc9d70

org.apache.solr.common.SolrException:
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@cc9d70
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:214)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)...

 
   



The result should not show the entire content of a file. It should show
only the part of the content where the query word is present, like a
google result, or like the search results on lucidimagination.
Regards,
satya


Re: solr result....

2010-10-28 Thread satya swaroop
Hi Lance,
  I actually copied tika exceptions into one html file and indexed
it. It is just the content of a file, and here i tell you what i mean:


if i post a query like *java* then the result or response from solr should
hit only a part of the content, like as follows:

http://localhost:8456/solr/select/?q=java&version=2.2&start=10&rows=10&indent=on

[response stripped by the mail archive; the responseHeader showed status 0
and QTime 453; the doc fields were application/pdf, id javaebuk, and a date
2001-07-02T11:54:10Z; the content field would then hold only this snippet:]
A Java program with two main methods  The following is an example of a java
program with two main methods with different signatures.
Program 3
public class TwoMains
{
/** This class has two main methods with
* different signatures */
public static void main (String args[])  .
...






the doc in the result should not contain the entire content of a file. It
should have only a part of the content: the part around the first hit
of the word java in that file...


Regards,
satya


Re: RAM increase

2010-10-29 Thread satya swaroop
Hi All,

 Thanks for your replies. I have a doubt whether to increase the RAM or
heap size for java or for tomcat, where solr is running.


Regards,
satya


Google like search

2010-12-14 Thread satya swaroop
Hi All,
 Can we get results like google, having some data about the
search hit... I was able to get the first 300 characters of a
file, but that is not helpful for me. can i get the data around the
first occurrence of the key found in that file?

Regards,
Satya


Re: Google like search

2010-12-14 Thread satya swaroop
Hi Tanguy,
  I am not asking for highlighting.. I think it can be
explained with an example.. Here i illustrate it:

when i post a query like this:

http://localhost:8080/solr/select?q=Java&version=2.2&start=0&rows=10&indent=on

i would get the result as follows:

[response stripped by the mail archive; the responseHeader showed status 0
and QTime 1; the matching doc had id Java%20debugging.pdf, a numeric field
122, and this content field:]
Table of Contents
If you're viewing this document online, you can click any of the topics
below to link directly to that section.
1. Tutorial tips 2
2. Introducing debugging  4
3. Overview of the basics 6
4. Lessons in client-side debugging 11
5. Lessons in server-side debugging 15
6. Multithread debugging 18
7. Jikes overview 20






Here the str field contains the first 300 characters of the file, as i kept
a field in schema.xml that copies only 300 characters...
But i don't want the content like this.. Is there any way to make the
output as follows:

 Java is one of the best languages, java is easy to learn...


where this content is from the start of the chapter where the first
occurrence of the word java is in the file...


Regards,
Satya


Re: Google like search

2010-12-14 Thread satya swaroop
Hi Tanguy,
 Thanks for your reply. sorry to ask this type of question:
how can we index each chapter of a file as a separate document? As far as
i know, we just give the path of the file to solr to index it... Can you
provide me any sources on this, i mean any blogs or wikis...

Regards,
satya


Re: Google like search

2010-12-16 Thread satya swaroop
Hi All,

 Thanks for your suggestions.. I got the result i expected..

Cheers,
Satya


Testing Solr

2010-12-16 Thread satya swaroop
Hi All,

 I built solr successfully and i am thinking to test it with nearly
300 pdf files, 300 docs, 300 excel files, and so on - about 300
files of each type.
 Is there any dummy data available for testing solr? Otherwise i need to
download each and every file individually..??
Another question: are there any benchmarks of solr...??

Regards,
satya


Different Results..

2010-12-22 Thread satya swaroop
Hi All,
 i am getting different results when i use some characters that normally
need escaping..
for example:
1) when i use this request
http://localhost:8080/solr/select?q=erlang!ericson
   the result obtained is
   [result XML stripped by the mail archive]

2) when the request is
 http://localhost:8080/solr/select?q=erlang/ericson
the result is
  [a different result; the XML likewise stripped]


My query here is: does solr consider the two queries differently, and what
does it make of !, / and all the other escape characters?
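
For instance, is backslash-escaping the right approach? The lucene query
syntax reserves + - && || ! ( ) { } [ ] ^ " ~ * ? : \ as special
characters, and the backslash itself has to be url-encoded as %5C in the
request, e.g.:

http://localhost:8080/solr/select?q=erlang%5C!ericson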


Regards,
satya


error in html???

2010-12-23 Thread satya swaroop
Hi All,

 I am able to get the response in the success case in json format by
stating wt=json in the query. But in case of any errors i am getting it in
html format.
 1) Is there any specific reason it comes in html format??
  2) can't we get the error result in json format??

Regards,
satya


Re: error in html???

2010-12-23 Thread satya swaroop
Hi Erick,
   Every result comes in xml format. But when you get any errors,
like http 500 or http 400, you get them in html format. My query is:
can't we turn that html into json, or vice versa?

Regards,
satya


spell suggest response

2011-01-11 Thread satya swaroop
Hi All,
 can we get just the suggestions, without the file results in the
response??
Here I state an example:
when i query
http://localhost:8080/solr/spellcheckCompRH?q=java daka
usar&spellcheck=true&spellcheck.count=5&spellcheck.collate=true

i get some results of java files and then the suggestions for the words
daka->data, usar->user. But actually i need only the spell suggestions.
Time is being consumed displaying the files before the
spell suggestions come. Can't we post a query to solr where we get
the response as only spell suggestions???

Regards,
satya


Re: spell suggest response

2011-01-11 Thread satya swaroop
Hi Gora,
   I am using solr for file indexing and searching, but i have a
module where i don't need any file results, only the spell suggestions. so
i asked: is there any way in solr to get only the spell suggestion
responses? I think it is clear for you now.. If not, tell me and I will try
to explain further...

Regards,
satya


Re: spell suggest response

2011-01-11 Thread satya swaroop
Hi Stefan,
  Ya, it works :). Thanks...
  But i have a question... can we get spell
suggestions even if the spelled word is correct? I mean near words to
it...
   ex:-

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.count=10
   In the output the suggestions will not come, as
java is a word that is spelt correctly...
  But can't we get near suggestions such as javax, javac, etc..???

Regards,
satya


Re: spell suggest response

2011-01-12 Thread satya swaroop
Hi stefan,
I need the words from the index itself. If java is given,
then the relevant or similar or near words in the index should be shown,
even when the given keyword is correct... can it be done???


ex:-

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.count=10
   In the output the suggestions will not come, as
java is a word that is spelt correctly...
  But can't we get near suggestions such as javax, javac, etc. (the
terms in the index)???

I read about the suggester in the solr wiki at
http://wiki.apache.org/solr/Suggester . I tried to implement it but got
an error:

*error loading class org.apache.solr.spelling.suggest.suggester*
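
the wiki's example config is of this shape (the field name adapted to
mine):

<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
    <!-- "spell" is my field; the wiki uses its own example field -->
    <str name="field">spell</str>
  </lst>
</searchComponent>

does the classname need the exact capitalization Suggester (capital S)
rather than suggester?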

Regards,
satya


Re: spell suggest response

2011-01-12 Thread satya swaroop
Hi Juan,
 yeah.. i tried onlyMorePopular and got some results, but they are
not similar words or near words to the word i gave in the query..
Here i show you the output..

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.collate=true&spellcheck.onlyMorePopular=true&spellcheck.count=20

the o/p i get is
-
data
have
can
any
all
has
each
part
make
than
also




but these words are not similar to the given word 'java'; the near words
would be javac, javax, data, java.io... etc., and those words are present
in the index..


Regards,
satya


Re: spell suggest response

2011-01-16 Thread satya swaroop
Hi Grijesh,
As you said, you are implementing this kind of thing. Can you tell us
briefly how you did it..

Regards,
satya


Re: spell suggest response

2011-01-17 Thread satya swaroop
Hi Grijesh,
   Though i use autosuggest i may not get the exact results; the
order is not accurate.. For example if i type
http://localhost:8080/solr/terms/?terms.fl=spell&terms.prefix=solr&terms.sort=index&terms.lower=solr&terms.upper.incl=true
 i get results as...
solr
solr.amp
solr.datefield
solr.p
solr.pdf
   like that. But this may not lead to results as accurate as we get with
spellchecking.

i require suggestions for any word, irrespective of whether it is correct
or not. is there anything to be changed in solr to get suggestions the way
we get them when we type a wrong word in spellchecking... If so please let
me know...

Regards,
satya


Re: spell suggest response

2011-01-17 Thread satya swaroop
Hi Grijesh,
i added both the terms component and the spellcheck component to the
terms request handler. when i send a query as
http://localhost:8080/solr/terms?terms.fl=text&terms.prefix=java&&rows=7&omitHeader=true&spellcheck=true&spellcheck.q=java&spellcheck.count=20

the result i get is

[the terms response was stripped by the mail archive; it listed six terms,
each with a count of 6, but the term names themselves were lost]

when i send this
http://localhost:8080/solr/terms?terms.fl=text&terms.prefix=jawa&&rows=5&omitHeader=true&spellcheck=true&spellcheck.q=jawa&spellcheck.count=20
i get the result as


<lst name="spellcheck">
  <lst name="suggestions">
    <lst name="jawa">
      <int name="numFound">20</int>
      <int name="startOffset">0</int>
      <int name="endOffset">4</int>
      <arr name="suggestion">
        <str>java</str>
        <str>away</str>
        <str>jav</str>
        <str>jar</str>
        <str>ara</str>
        <str>apa</str>
        <str>ana</str>
        <str>ajax</str>
      </arr>
    </lst>
  </lst>
</lst>


Now i need to know how to order the terms: in the 1st query the
result obtained is in index order, and i want only javax, javac, javascript
but not javas, javabas - how can it be done??

Regards,
satya


spellchecking even the key is true....

2011-01-17 Thread satya swaroop
Hi All,
can we get spellchecking results even when the keyword is spelt correctly?
spellchecking gives suggestions only for wrong keywords; can't we get
similar and near words for the keyword even though spellcheck.q is correct..
as an example
http://localhost:8080/solr/spellcheck?q=java&spellcheck=true&spellcheck.count=5
the result will be

1) [an empty spellcheck response - no suggestions; the XML was stripped by
the mail archive]
can we get the result as
2) a response whose suggestions are:

javax
javac
javabean
javascript
NOTE: all the keywords in the 2nd result are in the index...

Regards,
satya


is solr dynamic calculation??

2011-02-17 Thread satya swaroop
Hi All,
 I have a query: does solr produce its results by
calculating the document scores dynamically, or are they pre-calculated and
just served??

for example:
if a query for q=solr on my index returns 25 documents, what is solr
calculating?? i am very keen to know its way of calculating the score and
ordering the results.


Regards,
satya


Re: is solr dynamic calculation??

2011-02-17 Thread satya swaroop
Hi Markus,
As far as i have gone through the scoring of solr, the scoring is
done during searching, using the boost values which were given during
indexing.
I have a query now: if i search for a keyword java, then
1) if the term "java" in the index matches 50,000 documents, does solr
calculate the score value for each and every document, filter them,
sort them, and then serve the results??? if it does the dynamic calculation
for each and every document then it would take a long time - how does solr
reduce it??
 Am i right??? if anything is wrong please tell me???
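
For reference, the formula i found in the lucene Similarity javadocs
(lucene 2.9-era; a sketch of the per-document computation, not
solr-specific):

score(q,d) = coord(q,d) * queryNorm(q)
             * sum over terms t in q of
               [ tf(t in d) * idf(t)^2 * t.getBoost() * norm(t,d) ]

as i understand it, every matching document is scored this way, but only
the top N are kept in a fixed-size priority queue, so the full result list
is never sorted.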

Regards,
satya


solr indexing

2011-02-22 Thread satya swaroop
Hi all,
   out of my keen interest in solr's indexing mechanism, i started mining
the code of solr indexing (/update/extract). i read about the index file
formats and the scoring procedure, and i have some queries regarding this..
1) the scoring is performed from dynamic and precalculated values (doc
boost, field boost, lengthNorm). In calculating the score, if a term
in the index occurs in nearly one million docs, does solr calculate the
score for each and every doc present for the term and then take the top
docs from the index??? or does it use any mechanism to limit the
calculation of scores to only particular docs???

If anybody knows about this, or any documentation regarding it, please
inform me...


Regards,
satya


Solr coding

2011-03-23 Thread satya swaroop
Hi All,
  As a requirement of my project i need to keep search results private
per user, so i need to modify the code of solr.

for example, if there are 5 users and each user indexes some files as
  user1 -> java1, c1,sap1
  user2 -> java2, c2,sap2
  user3 -> java3, c3,sap3
  user4 -> java4, c4,sap4
  user5 -> java5, c5,sap5

   and if user2 searches for the keyword "java" then it should display
only the file java2 and not the other files.

so, in order to do this filtering inside solr itself, may i know where to
modify the code... i will access a database to check which files the user
indexed and then filter the result... i don't have any cores; i indexed
all the files in a single index...
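
(in query terms, the effect i want is what a filter query would give if i
had indexed an owner field with each file - the field name is illustrative:

curl "http://localhost:8080/solr/select?q=java&fq=owner:user2"

but the ownership data lives in an external database, hence the wish to
hook into the code.)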

Regards,
satya


Re: Solr coding

2011-03-23 Thread satya swaroop
Hi Jayendra,
I forgot to mention that the result also depends on the group of
the user too. It is somewhat complex, so i didn't tell it before. now i
explain the exact way:

  user1, group1 -> java1, c1,sap1
  user2 ,group2-> java2, c2,sap2
  user3 ,group1,group3-> java3, c3,sap3
  user4 ,group3-> java4, c4,sap4
  user5 ,group3-> java5, c5,sap5

 user1,group1 means user1 belongs to group1


Here the filter includes the group too: if, for example, user1 searches for
"java" then the results should show java1 and java3, since the file java3
is accessible to all users who belong to group1. so i thought of editing
the code...
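
(again in filter-query terms, with hypothetical owner and group fields,
that would be something like:

curl "http://localhost:8080/solr/select?q=java&fq=owner:user1+OR+group:group1"

)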

Thanks,
satya

