Solr 7.3.0 loading OpenNLPExtractNamedEntitiesUpdateProcessorFactory

2018-04-08 Thread Ryan Yacyshyn
Hi all,

I'm running into a small problem loading
the OpenNLPExtractNamedEntitiesUpdateProcessorFactory class, getting an
error saying it's not found. I'm loading all the required jar files,
according to the readme:

*solrconfig.xml*

  <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar"/>
  <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar"/>

  ...

  <processor class="solr.OpenNLPExtractNamedEntitiesUpdateProcessorFactory">
    <str name="modelFile">en-ner-person.bin</str>
    <str name="analyzerFieldType">opennlp-en-tokenization</str>
    <str name="source">text</str>
    <str name="dest">people_ss</str>
  </processor>

*managed-schema*

  <fieldType name="opennlp-en-tokenization" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.OpenNLPTokenizerFactory"
                 sentenceModel="en-sent.bin"
                 tokenizerModel="en-token.bin"/>
    </analyzer>
  </fieldType>

  <field name="text" type="text_general" indexed="true" stored="true"/>

  <field name="people_ss" type="strings" indexed="true" stored="true" multiValued="true"/>
I have the three *.bin files in my conf directory, but when I try to reload
with this config I get this error:

```
{
  "responseHeader": {
"status": 500,
"QTime": 390
  },
  "error": {
"metadata": [
  "error-class",
  "org.apache.solr.common.SolrException",
  "root-error-class",
  "java.lang.ClassNotFoundException"
],
"msg": "Error handling 'reload' action",
"trace": "org.apache.solr.common.SolrException: Error handling 'reload'
action\n\tat
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$2(CoreAdminOperation.java:112)\n\tat
org.apache.solr.handler.admin.CoreAdminOperation$$Lambda$103/335708295.execute(Unknown
Source)\n\tat
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:358)\n\tat
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)\n\tat
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
org.eclipse.jetty.server.Server.handle(Server.java:530)\n\tat
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)\n\tat
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)\n\tat
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)\n\tat
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)\n\tat
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)\n\tat
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)\n\tat
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)\n\tat
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)\n\tat
java.lang.Thread.run(Thread.java:745)\nCaused by:
org.apache.solr.common.SolrException: Unable to reload core [nlp]\n\tat
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1311)\n\tat
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$2(CoreAdminOperation.java:110)\n\t...
43 more\nCaused by: org.apache.solr.common.SolrException: Error load
```

Re: Solr 7.3.0 loading OpenNLPExtractNamedEntitiesUpdateProcessorFactory

2018-04-08 Thread Shawn Heisey

On 4/8/2018 2:36 AM, Ryan Yacyshyn wrote:

I'm running into a small problem loading
the OpenNLPExtractNamedEntitiesUpdateProcessorFactory class, getting an
error saying it's not found. I'm loading all the required jar files,
according to the readme:


You've got a <lib> element to load analysis-extras jars, but are you 
certain it's actually loading anything?


Can you share a solr.log file created just after a Solr restart?  Not 
just a reload -- I'm asking for a restart so the log is more complete.  
With that, I can see what's happening and then ask more questions that 
may pinpoint something.


Thanks,
Shawn



Use TopicStream as percolator

2018-04-08 Thread SOLR4189
Hi all,

I need to implement percolator functionality in SOLR (i.e., get all indexed
docs that match a monitored query). How can I do this?

I found the TopicStream class in Solr. If I understand right, using
TopicStream with DaemonStream will give me percolator functionality, won't
it? (like the *Continuous Pull Streaming* example)

Is it a good idea to use *Continuous Pull Streaming* in production? How many
queries can I monitor this way? (I need up to 1,000 queries, and I have up
to a million indexed docs per day)

And one more thing: I debugged the DaemonStream/TopicStream code, and I don't
understand the advantage of this over a simple loop in which I'd send
queries to SOLR every X seconds/minutes/hours. Will it work faster than a
simple loop? If yes, why? Is it due to the filter query on the checkpoint version
(*solrParams.add("fq", "{!frange cost=100 incl=false
l="+checkpoint+"}_version_")*)? I'd be happy to understand all the advantages
of using DaemonStream/TopicStream.
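For what it's worth, the checkpoint idea behind that filter query can be sketched outside Solr; this is a simulation of the behavior, not TopicStream's actual code:

```python
# Sketch (not Solr code): why a checkpoint filter beats a naive re-query loop.
# A topic stream remembers the highest _version_ it has returned (the
# checkpoint) and on each run asks only for docs above it -- the same effect
# as fq={!frange cost=100 incl=false l=<checkpoint>}_version_.

def pull_new_docs(index, checkpoint):
    """Return docs newer than the checkpoint, plus the advanced checkpoint."""
    new_docs = [d for d in index if d["_version_"] > checkpoint]
    new_checkpoint = max([checkpoint] + [d["_version_"] for d in new_docs])
    return new_docs, new_checkpoint

index = [{"id": "a", "_version_": 100}, {"id": "b", "_version_": 200}]
docs, cp = pull_new_docs(index, 0)    # first run: both docs, checkpoint -> 200
index.append({"id": "c", "_version_": 300})
docs, cp = pull_new_docs(index, cp)   # second run: only the new doc "c"
```

A naive loop would re-fetch and re-match all docs on every pass; the checkpoint makes each pass proportional to the number of new docs only.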

Thank you.
 



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Use TopicStream as percolator

2018-04-08 Thread Joel Bernstein
Some interesting articles on uses of the topic expression that might answer
the questions you have above:

http://joelsolr.blogspot.com/2017/01/deploying-ai-alerting-system-with-solrs.html
http://joelsolr.blogspot.com/2017/01/deploying-solrs-new-parallel-executor.html

If you have a smaller number of alerts that you want to constantly monitor,
the first approach would work well.

For massively scalable bulk alerting, the second approach will work better.
In the second approach, each alert would be stored as an expression in the
index. An executor would run the expressions and distribute the alerts. You
would need to write an expression that performs the push. For example, the
expression you store in Solr to be executed could be something like:

push(topic(...))

You would provide the logic for the push expression.
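Continuing the first (continuous pull) approach, a daemon-wrapped topic might look roughly like the sketch below; the collection names, query, ids, and runInterval here are placeholders, not from the thread:

```
daemon(id="alert-1",
       runInterval="60000",
       topic(checkpoints,
             alerts,
             q="text:outage",
             fl="id,text",
             id="alert-1-topic"))
```

Each run pulls only the documents that arrived since the topic's stored checkpoint, which is what makes it usable for percolator-style alerting.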








Joel Bernstein
http://joelsolr.blogspot.com/

On Sun, Apr 8, 2018 at 11:33 AM, SOLR4189  wrote:

> Hi all,
>
> I need to implement percolator functionality in SOLR (i.e. get all indexed
> docs that are matched to monitored query). How can I do this?
>
> I found out in Solr TopicStream class. If I understand right, using
> TopicStream with DaemonStream will give me percolator functionality, won't
> it? (like here  Continuous Pull Streaming
>  decorators>
> )
>
> Is it a good idea to use *Continuous Pull Streaming* in production? How
> many
> queries can I monitor in such way? ( I need up to 1000 queries and I have
> up
> to million indexed docs per day)
>
> And one more thing, I debug DaemonStream/TopicStream code and I don't
> understand what is the advantage of this over simple loop in which I'll
> send
> queries each X seconds/minutes/hours to SOLR? Will it work faster than
> simple loop? If yes, why? Due to filter query on checkpoint version
> (*solrParams.add("fq", "{!frange cost=100 incl=false
> l="+checkpoint+"}_version_")*)? I'll be happy to understand all advantages
> of using DaemonStream/TopicStream.
>
> Thank you.
>
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


[SECURITY] CVE-2018-1308: XXE attack through Apache Solr's DIH's dataConfig request parameter

2018-04-08 Thread Uwe Schindler
CVE-2018-1308: XXE attack through Apache Solr's DIH's dataConfig request 
parameter

Severity: Major

Vendor:
The Apache Software Foundation

Versions Affected:
Solr 1.2 to 6.6.2
Solr 7.0.0 to 7.2.1

Description:
The details of this vulnerability were reported to the Apache Security mailing 
list. 

This vulnerability relates to an XML external entity expansion (XXE) in the
`&dataConfig=` parameter of Solr's DataImportHandler. It can be
used as XXE using file/ftp/http protocols in order to read arbitrary local
files from the Solr server or the internal network. See [1] for more details.

Mitigation:
Users are advised to upgrade to either the Solr 6.6.3 or Solr 7.3.0 release, both
of which address the vulnerability. Once the upgrade is complete, no other steps
are required. Those releases disable external entities in anonymous XML files
passed through this request parameter.

If users are unable to upgrade to Solr 6.6.3 or Solr 7.3.0 then they are
advised to disable data import handler in their solrconfig.xml file and
restart their Solr instances. Alternatively, if Solr instances are only used
locally without access to public internet, the vulnerability cannot be used
directly, so it may not be required to update, and instead reverse proxies or
Solr client applications should be guarded to not allow end users to inject
`dataConfig` request parameters. Please refer to [2] on how to correctly
secure Solr servers.

Credit:
麦 香浓郁

References:
[1] https://issues.apache.org/jira/browse/SOLR-11971
[2] https://wiki.apache.org/solr/SolrSecurity

-
Uwe Schindler
uschind...@apache.org 
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/




Match a phrase like "Apple iPhone 6 32GB white" with "iphone 6"

2018-04-08 Thread Sami al Subhi
I have a set of product docs with the field name_en, for example
name_en:"Apple iPhone 6 32GB white".
When a user chooses this product to view, I use the product name as a search
phrase and retrieve all the product-accessory docs that have "iphone 6"
(not "iphone 7") in the compatible_with field.

I am really struggling with how to tokenize and filter the field
compatible_with in order to achieve a match by the related product name.

Can you please help me!



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Match a phrase like "Apple iPhone 6 32GB white" with "iphone 6"

2018-04-08 Thread Mikhail Khludnev
This reminds me of the edismax (see the mm, pf params) and MLT (More Like
This) features, but unless you _extract_ an _entity_ to search in the
compatible_with field, you can hardly avoid false positive matches like
"apple white".

On Sun, Apr 8, 2018 at 11:01 PM, Sami al Subhi  wrote:

> I have a set of products' docs with field name_en for example
> name_en:"Apple iPhone 6 32GB white"
> when a user chooses this product to view, I use the product name as a
> search
> phrase and retrieve all the product accessories docs that have "iphone 6"
> (not "iphone 7") in the field of compatible_with
>
> I am really struggling with how to tokenize and filter the field
> compatible_with in order to achieve a match by the relative product name.
>
> can you please help me!
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>



-- 
Sincerely yours
Mikhail Khludnev


Re: Match a phrase like "Apple iPhone 6 32GB white" with "iphone 6"

2018-04-08 Thread Sami al Subhi
I think this filter setup will output the desired result:

<analyzer type="index">
  <tokenizer class="solr.KeywordTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>

<analyzer type="query">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.ShingleFilterFactory" outputUnigrams="true" maxShingleSize="2"/>
</analyzer>

indexing:
"iPhone 6" will be indexed as "iphone 6" (always a single token)

querying:
this will analyze "Apple iPhone 6 32GB white" into "apple", "apple iphone",
"iphone", "iphone 6", and so on...
A match will then be achieved via the fourth token.

I don't see how this would result in false positive matching.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Urgent! How to retrieve the whole message in the Solr search result?

2018-04-08 Thread Stefan Matheis
Raymond,

please don't get this the wrong way - it's entirely possible that you
didn't mean it like it sounded to me ...

"Urgent" is a term you can use with your colleagues or a supplier of yours
.. but not a mailing list like this. We're doing this in our spare time,
trying to help out others working in the same area.

Pointers regarding your questions:

> 1. the order_id: the term of "order_id"
> must be displayed instead of its
> actual tag 37;
> 2. the trd_date: the term of "trd_date"
> must be displayed in the result;

Then you either have to index it like that or apply such a mapping at
runtime. Solr returns whatever data you store while you're
indexing it.

You are responsible to map the source fields to the names you want - Solr
doesn't care about that. It's just field names and their values ...

> 3. the whole message: the whole and
> raw message must be displayed in the
> result;

Again, you're responsible for it. If you need it, introduce another field
in your schema where you store the original message in whatever format you
like.

> 4. the two fields of order_id and
> trd_date must be highlighted.

In Solr, "highlighting" typically means marking the part(s) of
one or more words that matched the term(s) used for querying .. but given
your sample, which is just a simple filter, I don't think we're talking
about the same thing .. do we?

What is it you're thinking about when you say those fields need to be
highlighted?

HTH,
Stefan

On Fri, Apr 6, 2018, 1:07 PM Raymond Xie  wrote:

> I am using Solr for the following search need:
>
> raw data: in FIX format, it's OK if you don't know what it is, treat it as
> csv with a special delimiter.
>
> parsed data: from raw data, all in the same format of a bunch of JSON
> format with all 100+ fields.
>
> Example:
>
> Raw data: delimiter is \u0001:
>
> 8=FIX.4.4 9=653 35=RIO 1=TEST 11=3379122 38=1 44=2.0 39=A 40=2
> 49=VIPER 50=JPNIK01 54=1 55=JNI253D8.OS 56=XSVC 59=0 75=20180350 100=XOSE
> 10039=viperooe 10241=viperooe 150=A 372=D 122=20180320-08:08:35.038
> 10066=20180320-08:08:35.038 10436=20180320-08:08:35.038 202=25375.0
> 52=20180320-08:08:35.088 60=20180320-08:08:35.088
> 10071=20180320-08:08:35.088 11210=3379122 37=3379122
> 10184=3379122 201=1 29=4 10438=RIO.4.5 10005=178 10515=178
> 10518=178 581=13 660=102 1133=G 528=P 10104=Y 10202=APMKTMAKING
> 10208=APAC.VIPER.OOE 10217=Y 10292=115 11032=-1 382=0 10537=XOSE 15=JPY
> 167=OPT 48=179492540 455=179492540 22=101 456=101 151=1.0 421=JPN 10=200
>
> Parsed data: in json:
>
> {"122": "20180320-08:08:35.038", "49": "VIPER", "382": "0", "151": "1.0",
> "9": "653", "10071": "20180320-08:08:35.088", "15": "JPY", "56": "XSVC",
> "54": "1", "10202": "APMKTMAKING", "10537": "XOSE", "10217": "Y", "48":
> "179492540", "201": "1", "40": "2", "8": "FIX.4.4", "167": "OPT", "421":
> "JPN", "10292": "115", "10184": "3379122", "456": "101", "11210":
> "3379122", "1133": "G", "10515": "178", "10": "200", "11032": "-1",
> "10436": "20180320-08:08:35.038", "10518": "178", "11":
> "3379122", *"75":
> "20180320"*, "10005": "178", "10104": "Y", "35": "RIO", "10208":
> "APAC.VIPER.OOE", "59": "0", "60": "20180320-08:08:35.088", "528": "P",
> "581": "13", "1": "TEST", "202": "25375.0", "455": "179492540", "55":
> "JNI253D8.OS", "100": "XOSE", "52": "20180320-08:08:35.088", "10241":
> "viperooe", "150": "A", "10039": "viperooe", "39": "A", "10438": "RIO.4.5",
> "38": "1", *"37": "3379122"*, "372": "D", "660": "102", "44":
> "2.0", "10066": "20180320-08:08:35.038", "29": "4", "50": "JPNIK01", "22":
> "101"}
>
> The fields used for searching is order_id (tag 37) and trd_date(tag 75). I
> will create the schema with the two fields added to it
>
> <field name="order_id" type="string" indexed="true" stored="true" multiValued="true"/>
> <field name="trd_date" type="string" indexed="true" stored="true" multiValued="true"/>
>
> At the moment I can get the result by:
> http://192.168.112.141:8983/solr/fix_messages/select?q=37:3379122
> where 37 is the order_id tag and 3379122 is the value to search for in
> field "37"
>
>
> The result I get is:
>
> {
>   "responseHeader":{
> "status":0,
> "QTime":6,
> "params":{
>   "q":"37:3379122"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "122":["20180320-08:08:35.038"],
> "49":["VIPER"],
> "382":[0],
> "151":[1.0],
> "9":[653],
> "10071":["20180320-08:08:35.088"],
> "15":["JPY"],
> "56":["XSVC"],
> "54":[1],
> "10202":["APMKTMAKING"],
>
> 
>
> I need to show the result like below:
>
> 1. the order_id: the term of "order_id" must be displayed instead of its
> actual tag 37;
> 2. the trd_date: the term of "trd_date" must be displayed in the result;
> 3. the whole message: the whole and raw message must be displayed in the
> result;
> 4. the two fields of order_id and trd_date must be highlighted.
>
> Can anyone tell me how do I do it? Thank you very much in advance.
>
> *---
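The tag-to-name mapping asked about above (tag 37 → order_id, tag 75 → trd_date) has to happen at index time, outside Solr, as Stefan notes; a minimal, hypothetical sketch of parsing a FIX message and renaming the tags of interest (the field names and delimiter handling are assumptions, not from the thread):

```python
# Minimal sketch: parse a FIX message (tag=value pairs separated by SOH,
# \u0001) and rename selected tags to friendly names before indexing.

TAG_NAMES = {"37": "order_id", "75": "trd_date"}  # assumed mapping

def parse_fix(raw, delimiter="\u0001"):
    doc = {}
    for pair in raw.strip(delimiter).split(delimiter):
        tag, _, value = pair.partition("=")
        doc[TAG_NAMES.get(tag, tag)] = value
    doc["raw_message"] = raw  # keep the whole message for display
    return doc

msg = "35=RIO\u000137=3379122\u000175=20180320\u000110=200"
doc = parse_fix(msg)
# doc["order_id"] == "3379122", doc["trd_date"] == "20180320"
```

Storing the raw message in its own field (here `raw_message`) also answers point 3: Solr will return it verbatim if it is stored.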

Fwd: Solr join With must clause in fq

2018-04-08 Thread manuj singh
Hi all,
I am trying to debug a problem which i am facing and need some help.

I have a solr query which does join on 2 different cores. so lets say my
first core has following 3 docs

{ "id":"1", "m_id":"lebron", "some_info":"29" }

{ "id":"2", "m_id":"Wade", "matches_win":"29" }

{ "id":"3", "m_id":"lebron", "some_info":"1234" }

my second core has the following docs

{ "m_id": "lebron", "team": "miami" }

{ "m_id": "Wade", "team": "miami" }

So now we made an update to the doc with lebron and changed the team to
"clevelend". The new docs in core2 look like this.

{ "m_id": "lebron", "team": "clevelend" }

{ "m_id": "Wade", "team": "miami" }

Now I am trying to join these two and find the docs from core1 for team
miami.

my query looks like this

fq=+{!join from=m_id to=m_id fromIndex=core2 force=true}team:miami

I am expecting it to return the doc with id=2, but what I am getting is
documents 1 and 2.

I am not able to figure out what the problem is. Is the query incorrect,
or is there some issue with the join?

*Couple of observations.*

1. If I remove the + from the filter query, it works as expected. So the
following query works:

fq={!join from=m_id to=m_id fromIndex=core2 force=true}team:miami

I am not sure how the Must clause is affecting the query.

*2.* Also, if you look, the original query is not returning document 3
(however, it is returning document 1, which has the same m_id). The only
difference between doc 1 and doc 3 is that doc 1 was created when "lebron"
was part of team:miami, and doc 3 was created after the team got updated to
"cleveland". So the join is working fine for the new docs in core1 but not
for the old ones.

3. If I use q instead of fq, the query returns results as expected.

q=+{!join from=m_id to=m_id fromIndex=core2 force=true}team:miami

and

q={!join from=m_id to=m_id fromIndex=core2 force=true}team:miami

Both of the above works.

I am sure I am missing something about how the join works internally. I am
trying to understand why fq behaves differently than q with the Must (+)
clause.

I am using Solr 4.10.



Thanks

Manuj
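For reference, the intended from/to join semantics (independent of the fq-vs-q oddity) can be simulated in a short sketch; with the data above, only doc 2 should match team:miami after the update:

```python
# Sketch of {!join from=m_id to=m_id fromIndex=core2}team:miami semantics:
# collect the join keys of core2 docs matching the inner query, then
# return core1 docs whose m_id is in that set.

core1 = [
    {"id": "1", "m_id": "lebron", "some_info": "29"},
    {"id": "2", "m_id": "Wade", "matches_win": "29"},
    {"id": "3", "m_id": "lebron", "some_info": "1234"},
]
core2 = [  # state after the update: lebron moved to "clevelend"
    {"m_id": "lebron", "team": "clevelend"},
    {"m_id": "Wade", "team": "miami"},
]

def join(to_docs, from_docs, team):
    keys = {d["m_id"] for d in from_docs if d["team"] == team}
    return [d for d in to_docs if d["m_id"] in keys]

ids = [d["id"] for d in join(core1, core2, "miami")]
```

Anything beyond this single result set points at the filter-query handling (the + prefix and fq caching), not at the join logic itself.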


Solr nodes out of sync

2018-04-08 Thread James Keeney
We have a case on our dev server, which uses the cloud example setup, where
our two nodes are out of sync. One has records that the other does not.

We found the error:

c:organizations s:shard1 r:core_node1 x:organizations_shard1_replica0]
org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring
commit while not ACTIVE - state: BUFFERING replay: false

which explains what is happening. That node is the leader, so the docs are
being stored in the second node's transaction log but not fully synchronized
and committed.

My question is this:

What causes a node to become not ACTIVE and how do we monitor for that
state?

Thanks.

Jim K.
-- 
Jim Keeney
President, FitterWeb
E: j...@fitterweb.com
M: 703-568-5887

*FitterWeb Consulting*
*Are you lean and agile enough? *
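One way to monitor for non-ACTIVE replicas (a suggestion, not something from the thread) is to poll the Collections API CLUSTERSTATUS action and flag any replica whose state is not "active"; a sketch over a parsed (and here abbreviated, hypothetical) response:

```python
# Sketch: find replicas that are not ACTIVE in a parsed CLUSTERSTATUS
# response (GET /solr/admin/collections?action=CLUSTERSTATUS&wt=json).
# The sample payload below is abbreviated and hypothetical.

def inactive_replicas(cluster_status):
    problems = []
    for coll, cdata in cluster_status["cluster"]["collections"].items():
        for shard, sdata in cdata["shards"].items():
            for replica, rdata in sdata["replicas"].items():
                if rdata.get("state") != "active":
                    problems.append((coll, shard, replica, rdata.get("state")))
    return problems

sample = {"cluster": {"collections": {"organizations": {"shards": {
    "shard1": {"replicas": {
        "core_node1": {"state": "active"},
        "core_node2": {"state": "recovering"},
    }}}}}}}

print(inactive_replicas(sample))
```

Run on a schedule, this catches nodes stuck in states like "recovering" or "down" (the BUFFERING replay in the log corresponds to a replica still catching up).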


Re: Solr 7.3.0 loading OpenNLPExtractNamedEntitiesUpdateProcessorFactory

2018-04-08 Thread Ryan Yacyshyn
Hi Shawn,

I'm pretty sure the paths to load the jars in analysis-extras are correct;
the jars in /contrib/analysis-extras/lib load fine. I verified this by
changing the name of solr.OpenNLPTokenizerFactory to
solr.OpenNLPTokenizerFactory2
and saw a new error. Changing it back to solr.OpenNLPTokenizerFactory
(without the "2") doesn't throw any errors, so I'm assuming these two
jar files (opennlp-maxent-3.0.3.jar and opennlp-tools-1.8.3.jar) must be
loading.

I tried swapping the order in which these jars are loaded as well, but no
luck there.

I have attached my solr.log file after a restart. Also included is my
solrconfig.xml and managed-schema. The path to my config
is /Users/ryan/solr-7.3.0/server/solr/nlp/conf and this is where I have the
OpenNLP bin files (en-ner-person.bin, en-sent.bin, and en-token.bin).
Configs are derived from the _default configset.

I'm on a Mac, and my Java version is:

java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

Thanks,
Ryan



On Sun, 8 Apr 2018 at 21:34 Shawn Heisey  wrote:

> On 4/8/2018 2:36 AM, Ryan Yacyshyn wrote:
> > I'm running into a small problem loading
> > the OpenNLPExtractNamedEntitiesUpdateProcessorFactory class, getting an
> > error saying it's not found. I'm loading all the required jar files,
> > according to the readme:
>
> You've got a <lib> element to load analysis-extras jars, but are you
> certain it's actually loading anything?
>
> Can you share a solr.log file created just after a Solr restart?  Not
> just a reload -- I'm asking for a restart so the log is more complete.
> With that, I can see what's happening and then ask more questions that
> may pinpoint something.
>
> Thanks,
> Shawn
>
>





  

  
[Attachment: solrconfig.xml, derived from the _default configset (luceneMatchVersion 7.3.0); the XML markup was stripped by the list archive, so the attachment content is not reproduced here]