Re: Using copyField with dynamicField
Zach,

As an alternative to 'copyField', you might want to consider the
CloneFieldUpdateProcessorFactory:
http://lucene.apache.org/solr/5_0_0/solr-core/org/apache/solr/update/processor/CloneFieldUpdateProcessorFactory.html

It lets you specify field names with regular expressions, exclude specific
fields that would otherwise match the regex, and so on. Much more flexible
than copyField, in my opinion.

Regards,
Scott

On Mon, Aug 24, 2015 at 10:39 PM, Erick Erickson wrote:
> What is reported in the Solr log? That's usually much more informative.
>
> Best,
> Erick
>
> On Mon, Aug 24, 2015 at 5:26 PM, Alexandre Rafalovitch wrote:
> > It should work (at first glance). copyField does support wildcards.
> >
> > Do you have a field called "text"? Also, your field name and field
> > type "text" have the same name. Not sure it is the best idea.
> >
> > Regards,
> >    Alex.
> >
> > Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> > http://www.solr-start.com/
> >
> > On 24 August 2015 at 17:27, Zach Thompson wrote:
> > > Hi All,
> > >
> > > Is it possible to use copyField with dynamicField? I was trying to do
> > > the following,
> > >
> > > [schema snippet stripped by the mailing list archive]
> > >
> > > and getting a 400 error on trying to copy the first dynamic field.
> > > Without the copyField the fields seem to load ok.
> > >
> > > --
> > > Zach Thompson
> > > z...@duckduckgo.com
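For reference (not from the thread itself): a minimal sketch of what a
CloneFieldUpdateProcessorFactory chain can look like, following the pattern
in its javadoc. The chain name, regex, and field names below are
illustrative assumptions, not taken from Zach's schema.

```xml
<updateRequestProcessorChain name="clone-fields">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <!-- clone every *_s field, except internal_s, into 'text' -->
    <lst name="source">
      <str name="fieldRegex">.*_s</str>
      <lst name="exclude">
        <str name="fieldName">internal_s</str>
      </lst>
    </lst>
    <str name="dest">text</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Unlike copyField, the cloning happens in the update chain, so it applies
only to update requests routed through this chain (for example via the
update handler's update.chain parameter).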
Re: Collections API - HTTP verbs
Hrishikesh,

If you're running on Linux or Unix, the first ampersand in the URL is
interpreted as the shell's "run this in the background" operator, and
anything beyond that ampersand is not passed to curl. So Mark is right --
put single quotes around the URL so that it's not interpreted by the shell.

Regards,
Scott

On Wed, Feb 18, 2015 at 9:31 PM, Mark Miller wrote:
> Perhaps try quotes around the url you are providing to curl. It's not
> complaining about the HTTP method - Solr has historically always taken
> simple GETs for http - for good or bad, you pretty much only POST
> documents / updates.
>
> It's saying the name param is required and not being found, and since you
> are trying to specify the name, I'm guessing something about the command
> is not working. You might try just shoving it in a browser url bar as
> well.
>
> - Mark
>
> On Wed Feb 18 2015 at 8:56:26 PM Hrishikesh Gadre wrote:
> > Hi,
> >
> > Can we please document which HTTP method is supposed to be used with
> > each of these APIs?
> >
> > https://cwiki.apache.org/confluence/display/solr/Collections+API
> >
> > I am trying to invoke the following API:
> >
> > curl http://:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
> >
> > This request fails with the following error:
> >
> > 2015-02-18 17:29:39,965 INFO org.apache.solr.servlet.SolrDispatchFilter:
> > [admin] webapp=null path=/admin/collections params={action=CLUSTERPROP}
> > status=400 QTime=20
> > org.apache.solr.core.SolrCore: org.apache.solr.common.SolrException:
> > Missing required parameter: name
> >     at org.apache.solr.common.params.RequiredSolrParams.get(RequiredSolrParams.java:49)
> >     at org.apache.solr.common.params.RequiredSolrParams.check(RequiredSolrParams.java:153)
> >     at org.apache.solr.handler.admin.CollectionsHandler.handleProp(CollectionsHandler.java:238)
> >     at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:200)
> >     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> >     at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:770)
> >     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:271)
> >
> > I am using Solr 4.10.3.
> >
> > Thanks,
> > Hrishikesh
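The shell behavior Scott describes can be seen without a running Solr.
In this illustrative sketch, localhost is a placeholder hostname and echo
stands in for curl; the point is what the inner shell does to the unquoted
URL.

```shell
# The shell treats each unquoted '&' as its background operator, so the
# command before the first '&' is backgrounded and the rest never reaches
# the program. Using echo as a stand-in for curl makes the effect visible.
url='http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'

# Unquoted: the inner shell runs 'echo ...CLUSTERPROP' in the background
# and parses 'name=urlScheme' and 'val=https' as variable assignments.
unquoted=$(sh -c "echo $url; wait")

# Single-quoted: the whole URL survives as one argument.
quoted=$(sh -c "echo '$url'")

echo "$unquoted"
echo "$quoted"
```

This also explains why the request was logged with
params={action=CLUSTERPROP} only: name and val never left the shell.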
ArrayIndexOutOfBoundsException in RecordingJSONParser.java
Hello,

I'm running Solr 5.1, and during indexing I get an
ArrayIndexOutOfBoundsException at line 61 of
org/apache/solr/util/RecordingJSONParser.java. Looking at the code (see
below), it seems obvious that the if-statement at line 60 should use a
greater-than sign instead of greater-than-or-equals:

  @Override
  public CharArr getStringChars() throws IOException {
    CharArr chars = super.getStringChars();
    recordStr(chars.toString());
    position = getPosition();
    // if reading a String, the getStringChars do not return the closing
    // single quote or double quote, so try to capture that
    if (chars.getArray().length >= chars.getStart() + chars.size()) {  // line 60
      char next = chars.getArray()[chars.getStart() + chars.size()];   // line 61
      if (next == '"' || next == '\'') {
        recordChar(next);
      }
    }
    return chars;
  }

Should I create a JIRA ticket? (Am I allowed to?) I can provide more
information about my particular usage, including a stack trace, if that's
helpful. I'm using the new custom JSON indexing which, by the way, is an
excellent feature and will be of great benefit to my project. Thanks for
that.

Regards,
Scott Dawson
Re: ArrayIndexOutOfBoundsException in RecordingJSONParser.java
Ticket opened: https://issues.apache.org/jira/browse/SOLR-7462

Thanks,
Scott

On Fri, Apr 24, 2015 at 9:38 AM, Shawn Heisey wrote:
> On 4/24/2015 7:16 AM, Scott Dawson wrote:
> > Should I create a JIRA ticket? (Am I allowed to?) I can provide more
> > info about my particular usage including a stacktrace if that's
> > helpful. I'm using the new custom JSON indexing, which, by the way, is
> > an excellent feature and will be of great benefit to my project.
> > Thanks for that.
>
> Ouch. Thanks for finding the bug!
>
> Anyone can create an account on the Apache Jira and then create issues.
> Please do! The issue for this bug would go in the SOLR project.
>
> https://issues.apache.org/jira/browse/SOLR
>
> Thanks,
> Shawn
luceneMatchVersion
Hello,

In Solr 5.1, I've noticed that luceneMatchVersion is set to 5.0.0 in the
sample and in any newly generated solrconfig.xml files. Is this an
oversight or by design? Any reason I shouldn't bump it to 5.1.0 for new
cores I'm creating?

Thanks,
Scott
Re: luceneMatchVersion
Thanks, Shawn. There's a closed JIRA ticket related to this - SOLR-5048 -
"fail the build if the example solrconfig.xml files don't have an up to
date luceneMatchVersion".

Regards,
Scott

On Wed, Apr 29, 2015 at 12:15 PM, Shawn Heisey wrote:
> On 4/29/2015 9:56 AM, Scott Dawson wrote:
> > In Solr 5.1, I've noticed that luceneMatchVersion is set to 5.0.0 in
> > the sample and any newly generated solrconfig.xml files. Is this an
> > oversight or by design? Any reason I shouldn't bump it to 5.1.0 for
> > new cores I'm creating?
>
> I'm pretty sure that's an oversight, which we will need to correct
> before releasing 5.2. Checking and updating luceneMatchVersion should
> definitely be incorporated into the release HOWTO if it's not there
> already. I don't see any reason that you can't bump that to 5.1.0
> locally. Thanks for bringing the problem to our attention!
>
> Thanks,
> Shawn
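For anyone making the change by hand while waiting on the fix, the element
in question in solrconfig.xml, bumped to match the 5.1 release as discussed
above, is:

```xml
<luceneMatchVersion>5.1.0</luceneMatchVersion>
```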
Re: Injecting synonymns into Solr
There is a possible solution here:
https://issues.apache.org/jira/browse/LUCENE-2347 (Dump WordNet to SOLR
Synonym format). I don't have personal experience with it. I only know
about it because it's mentioned on page 184 of the 'Solr in Action' book by
Trey Grainger and Timothy Potter. Maybe someone out there knows more about
it and can provide more information.

Regards,
Scott

On Thu, Apr 30, 2015 at 9:45 AM, Kaushik wrote:
> I am facing the same problem; currently I am resorting to a custom
> program to create this file. Hopefully there is a better solution out
> there.
>
> Thanks,
> Kaushik
>
> On Thu, Apr 30, 2015 at 3:58 AM, Zheng Lin Edwin Yeo wrote:
> > Hi,
> >
> > Does anyone know of a faster method of populating the synonyms.txt
> > file than manually typing the words into the file, given that there
> > could be thousands of synonyms?
> >
> > Regards,
> > Edwin
Re: Solr 5.0, Ubuntu 14.04, SOLR_JAVA_MEM problem
Bruno,

You have the wrong kind of dash (an en-dash rather than a plain hyphen) in
front of the Xmx flag. Could that be causing the problem?

Regards,
Scott

On Mon, May 4, 2015 at 5:06 AM, Bruno Mannina wrote:
> Dear Solr Community,
>
> I have a recent computer with 8 GB of RAM, on which I installed Ubuntu
> 14.04, Solr 5.0, and Java 7. This is a brand new installation.
>
> Everything works fine, but I would like to increase SOLR_JAVA_MEM (to
> 40% of the total RAM available). So I edited bin/solr.in.sh:
>
> # Increase Java Min/Max Heap as needed to support your indexing / query
> needs
> SOLR_JAVA_MEM="-Xms3g –Xmx3g -XX:MaxPermSize=512m -XX:PermSize=512m"
>
> With this setting, though, the Solr server can't be started with:
> bin/solr start
>
> Do you have an idea of the problem?
>
> Thanks a lot for your comments,
> Bruno
>
> ---
> This email contains no viruses or malware because avast! Antivirus
> protection is active.
> http://www.avast.com
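For reference, here is the same line with plain ASCII hyphens throughout --
a sketch of the fix Scott suggests, keeping Bruno's own heap and PermGen
values:

```shell
# In bin/solr.in.sh: every JVM flag must begin with an ASCII hyphen-minus.
# The pasted version had an en-dash (–) before -Xmx3g, which the JVM
# rejects as an unrecognized option, so 'bin/solr start' fails.
SOLR_JAVA_MEM="-Xms3g -Xmx3g -XX:MaxPermSize=512m -XX:PermSize=512m"
echo "$SOLR_JAVA_MEM"
```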
Custom JSON
Hello,

I'm trying to use the new custom JSON feature described in
https://issues.apache.org/jira/browse/SOLR-6304. I'm running Solr 4.10.1.
It seems that the new feature, or more specifically the /update/json/docs
endpoint, is not enabled out-of-the-box except in the schemaless example.
Is there some dependence of the feature on schemaless mode? I've tried
pulling the endpoint definition and related pieces out of the
example-schemaless solrconfig.xml and adding them to the "standard"
solrconfig.xml in the main example, but I've run into a cascade of issues.
Right now I'm getting a "This IndexSchema is not mutable" exception when I
try to post to the /update/json/docs endpoint.

My real question is: what's the easiest way to get this feature up and
running quickly, and is this documented somewhere? I'm trying to do a
quick proof-of-concept to verify that we can move from our current flat
JSON ingestion to a more natural use of structured JSON.

Thanks,
Scott Dawson
Re: Custom JSON
Noble,

Thanks. You're right - I had some things incorrectly configured, but now I
can put structured JSON into Solr using the out-of-the-box solrconfig.xml.

One additional question: is there any way to query Solr and receive the
original structured JSON document in response? Or does the flattening
process that happens during indexing obliterate the original structure,
with no way to reconstruct it?

Thanks again,
Scott

On Thu, Oct 16, 2014 at 2:10 PM, Noble Paul wrote:
> The end point /update/json/docs is enabled implicitly in Solr,
> irrespective of the solrconfig.xml. In schemaless mode the fields are
> created automatically by Solr.
>
> If you have all the fields created in your schema.xml, it will work.
>
> If you need an id field, please use a copyField to create one.
>
> --Noble
>
> On Thu, Oct 16, 2014 at 8:42 PM, Scott Dawson wrote:
> > Hello,
> > I'm trying to use the new custom JSON feature described in
> > https://issues.apache.org/jira/browse/SOLR-6304. I'm running Solr
> > 4.10.1. It seems that the new feature, or more specifically the
> > /update/json/docs endpoint, is not enabled out-of-the-box except in
> > the schemaless example. Is there some dependence of the feature on
> > schemaless mode? I've tried pulling the endpoint definition and
> > related pieces out of the example-schemaless solrconfig.xml and
> > adding them to the "standard" solrconfig.xml in the main example, but
> > I've run into a cascade of issues. Right now I'm getting a "This
> > IndexSchema is not mutable" exception when I try to post to the
> > /update/json/docs endpoint.
> >
> > My real question is: what's the easiest way to get this feature up
> > and running quickly, and is this documented somewhere? I'm trying to
> > do a quick proof-of-concept to verify that we can move from our
> > current flat JSON ingestion to a more natural use of structured JSON.
> >
> > Thanks,
> > Scott Dawson
>
> --
> -
> Noble Paul