Re: Replacing a document in Solr5

2015-12-19 Thread Andrea Gazzarini
That has nothing to do with your topic: addField adds a new value to a
given field of a SolrInputDocument, while setField replaces whatever values
that field already holds, regardless of whether it currently has zero, one,
or more of them.

SolrInputDocument document = new SolrInputDocument();

document.setField("id", 32872382); // the id field now has one value: 32872382

document.addField("author", "B. Meyer"); // the author field has one value; being
// the first value, addField() and setField() behave in the same way here

document.addField("author", "A. Yersu"); // now the author field has two values
document.setField("author", "I.UUhash"); // this replaces the two existing
// values with a single value

solrClient.add(document); // here you are sending a document with 1 id and 1 author



Those are methods of SolrInputDocument; when you call them, you're changing
the state of a local transfer object (the SolrInputDocument instance).
Before sending it to Solr using solrClient.add(SolrInputDocument) you can
do whatever you want with that instance (e.g. removing, adding, or setting
values). The "document" representation that Solr will see is the state of
the instance you pass to solrClient.add(...).
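These semantics can be illustrated with a small stand-alone sketch (plain
Java, no SolrJ dependency; MultiValueDoc is a hypothetical stand-in that
mimics SolrInputDocument's field behavior, not the real class):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in mimicking SolrInputDocument's multi-valued fields.
class MultiValueDoc {
    private final Map<String, List<Object>> fields = new LinkedHashMap<>();

    // addField: appends a value, keeping any existing values
    void addField(String name, Object value) {
        fields.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
    }

    // setField: discards existing values, keeping only the new one
    void setField(String name, Object value) {
        fields.put(name, new ArrayList<>(List.of(value)));
    }

    List<Object> getFieldValues(String name) {
        return fields.getOrDefault(name, List.of());
    }
}

public class FieldSemantics {
    public static void main(String[] args) {
        MultiValueDoc doc = new MultiValueDoc();
        doc.addField("author", "B. Meyer");  // first value: add and set behave alike
        doc.addField("author", "A. Yersu");  // now two values
        System.out.println(doc.getFieldValues("author").size()); // prints 2
        doc.setField("author", "I.UUhash");  // replaces both values
        System.out.println(doc.getFieldValues("author"));        // prints [I.UUhash]
    }
}
```

Running main prints 2 and then [I.UUhash], matching the addField/setField
behavior described above.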

Best,
Andrea


2015-12-19 8:48 GMT+01:00 Debraj Manna :

> Ok. Then what is the difference between addField
> <http://github.com/apache/lucene-solr/tree/lucene_solr_5_3_1/solr/solrj/src/java/org/apache/solr/common/SolrInputDocument.java#L150>
> and setField
> <http://www.solr-start.com/javadoc/solr-lucene/org/apache/solr/common/SolrInputDocument.html#setField-java.lang.String-java.lang.Object-float->
> ?
>
> On Sat, Dec 19, 2015 at 1:04 PM, Andrea Gazzarini 
> wrote:
>
> > As far as I know, this is how Solr works (i.e. it replaces the whole
> > document): how would you replace only a part of a document?
> >
> > Just send a SolrInputDocument with an existing (i.e. already indexed) id
> > and the document (on Solr) will be replaced.
> >
> > Andrea
> >
> > 2015-12-19 8:16 GMT+01:00 Debraj Manna :
> >
> > > Can someone let me know how I can replace a document on each update in
> > > Solr 5.2.1 using SolrJ? I don't want to update parts of the document. On
> > > doing an update it should replace the entire document.
> > >
> >
>


problem with solr plugin

2015-12-19 Thread sara hajili
Hi, I want to have my own normalization.
I wrote two classes: one is a normalization filter factory that extends
TokenFilterFactory and implements MultiTermAwareComponent, and the other is
a normalization filter that extends TokenFilter.
Then I created a jar from these classes, with dependencies, and added the
jar file to solr_home/dist and solr_home/contrib/extraction/lib.
I use the class in the schema like this:

<fieldType name="sample_normalizer" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <filter class="com.ponila.set.textanalyzer.PersianCustomNormalizerFilterFactory"/>
  </analyzer>
</fieldType>

but I get this error when I add a core to Solr:

org.apache.solr.common.SolrException: Could not load conf for core
post: Plugin init failure for [schema.xml] fieldType
"sample_normalizer": Plugin init failure for [schema.xml]
analyzer/filter: Error loading class
'com.ponila.set.textanalyzer.PersianCustomNormalizerFilterFactory'.
Schema file is D:\solr-5.3.1\example\poinila\solr\post\conf\schema.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:80)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:721)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:697)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:629)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:214)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:194)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:675)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:443)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.solr.common.SolrException: Plugin init failure
for [schema.xml] fieldType "sample_normalizer": Plugin init failure
for [schema.xml] analyzer/filter: Error loading class
'com.ponila.set.textanalyzer.PersianCustomNormalizerFilterFactory'.
Schema file is D:\solr-5.3.1\example\poinila\solr\post\conf\schema.xml
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:596)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:175)
at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:55)
at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:69)
at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:104)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:75)
... 30 more
Caused by: org.apache.solr.common.SolrException: Plugin init failure
for [schema.xml] fieldType "sample_normalizer": Plugin init failure
for [schema.xml] analyzer/filter: Error loading class
'com.ponila.set.textanalyzer.PersianCustomNormalizerFilterFactory'
at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:178)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:489)
... 35 more
Caused by: org.apache.solr.common.SolrException: Plu

Re: problem with solr plugin

2015-12-19 Thread davidphilip cherian
Hi Sara,

The error is clear: a class-not-found exception, which means Solr couldn't
locate that jar file.

If you are not using SolrCloud, place that custom jar under the
solr_home/lib folder.
You can also hard-code the path of this jar file in solrconfig.xml via the
<lib> element.
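A sketch of the solrconfig.xml variant (the paths shown are placeholders,
not actual locations from this thread):

```xml
<!-- In solrconfig.xml: load every jar matching the regex (path is a placeholder) -->
<lib dir="/opt/solr/custom-libs/" regex=".*\.jar" />
<!-- or reference one jar directly -->
<lib path="/opt/solr/custom-libs/persian-normalizer.jar" />
```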

If you are using SolrCloud, I think you should upload it to ZooKeeper.
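For the SolrCloud case, in Solr 5.x the configset (whose schema references
the plugin) is uploaded with the zkcli script; roughly like this, where the
host, port, and names are placeholders:

```
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd upconfig -confname myconf -confdir /path/to/conf
```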




Re: Some problems when upload data to index in cloud environment

2015-12-19 Thread Shawn Heisey
On 12/18/2015 6:16 PM, 周建二 wrote:
> I am building a SolrCloud production environment. My Solr version is 5.3.1.
> The environment consists of three nodes running CentOS 6.5. First I set up
> ZooKeeper on the three nodes, then run Solr on them, and finally build a
> collection consisting of three shards, each with two replicas. After that we
> can see the cloud structure on the Solr Admin page.



> HTTP ERROR 404
> 
> Problem accessing /solr/cloud-test/update/extract. Reason:

One of two problems is likely: either there is no collection named
"cloud-test" on your cloud, or the /update/extract handler is not
defined in that collection's solrconfig.xml file. The active version of
this file lives in ZooKeeper when you're running SolrCloud.

If you're sure a collection with this name exists, how exactly did you
create it?  Was it built with one of the sample configs or with a config
that you built yourself?

Of the three configsets included with the Solr download,
data_driven_schema_configs and sample_techproducts_configs contain the
/update/extract handler.  The configset named basic_configs does NOT
contain the handler.
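For reference, a rough sketch of what that handler definition looks like in
the sample configs (the defaults shown are illustrative, not exhaustive):

```xml
<!-- Requires the extraction contrib (Solr Cell) jars on the classpath -->
<requestHandler name="/update/extract"
                startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="lowernames">true</str>
    <str name="fmap.content">text</str>
  </lst>
</requestHandler>
```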

Thanks,
Shawn



Re: faceting is unusable slow since upgrade to 5.3.0

2015-12-19 Thread William Bell
Can we add method=uif back when not using the JSON Facet API too?

That would help a lot of people.

On Thu, Dec 17, 2015 at 7:17 AM, Yonik Seeley  wrote:

> On Wed, Dec 16, 2015 at 4:57 AM, Vincenzo D'Amore 
> wrote:
> > Hi all,
> >
> > given that Solr 5.4 is finally released, is this the most stable and
> > efficient version of SolrCloud?
> >
> > I have a website which receives many search requests. It normally serves
> > about 2000 concurrent requests, but sometimes there are peaks from 4000 to
> > 1 requests in a few seconds.
> >
> > In January I'll have a chance to upgrade my old SolrCloud 4.8.1 cluster to
> > a brand new version, but following this thread I read about the problems
> > that can occur upgrading to the latest version.
> >
> > I have seen that issue SOLR-7730 "speed-up faceting on doc values fields"
> > is fixed in 5.4.
> >
> > I'm using standard faceting without docValues. Should I add docValues in
> > order to benefit from that fix?
>
> You'll have to try it I think...
> DocValues have a lot of advantages (much less heap consumption, and
> much smaller overhead when opening a new searcher), but they can often
> be slower as well.
>
> Comparing 4x to 5x non-docvalues, top-level field caches were removed
> by lucene, and while that benefits certain things like NRT (opening a
> new searcher very often), it will hurt performance for other
> configurations.
>
> The JSON Facet API currently allows you to pick your strategy via the
> "method" param for multi-valued string fields without docvalues:
> "uif" (UninvertedField) gets you the top-level strategy from Solr 4,
> while "dv" (DocValues built on-the-fly) gets you the NRT-friendly
> "per-segment" strategy.
>
> -Yonik
>
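The strategy selection Yonik describes can be sketched as a JSON Facet
request body (the field name "cat" is a placeholder):

```json
{
  "query": "*:*",
  "facet": {
    "top_categories": {
      "type": "terms",
      "field": "cat",
      "method": "uif"
    }
  }
}
```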



-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076


Re: faceting is unusable slow since upgrade to 5.3.0

2015-12-19 Thread Jamie Johnson
Bill,

Check out the patch attached to
https://issues.apache.org/jira/browse/SOLR-8096. I had considered naming the
method uif after I had done most of the work; it would be trivial to change,
and would probably be better aligned with not introducing unexpected changes
for people who are currently using fc.

-Jamie

>