name "foodDescUS" to "alphaname".
When I try to sort using alphaname, I get this error:
The field :foodDesc present in DataConfig does not have a counterpart in
Solr Schema
Please help
Thanks
Pratik
--
View this message in context:
http://lucene.472066.n3.nabble.
Hello,
I got past that problem, but now I am facing a new one.
Indexing works but search does not.
I used the following line in the schema:-
and
I'm trying to use the default "alphaOnlySort" in the sample schema.xml.
The database is MySQL, and there is a column/field named ColXYZ
My data-
So basically it does not fetch any documents/records, although it does index
them.
Thanks
Pratik
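For reference, the stock example schema wires up alphaOnlySort roughly like this (the field and copyField names here are illustrative, matching the ones mentioned above):

```xml
<!-- alphaOnlySort uses KeywordTokenizerFactory, so the whole value
     becomes a single token: good for sorting, not for searching. -->
<field name="alphaname" type="alphaOnlySort" indexed="true" stored="false"/>
<copyField source="foodDescUS" dest="alphaname"/>
```

Because the sort field is one big token, queries should match against the original field and only sort on the alphaOnlySort copy, e.g. `q=foodDescUS:apple&sort=alphaname asc`. Searching the sort field directly typically matches nothing, which would explain indexing working while search does not.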
Hi,
Were you able to sort the results using alphaOnlySort?
If so, what changes did you make to the schema and data-config?
Thanks
in my database).
My requirements are: results should be returned quickly, they should be
accurate, and they should reflect the latest data.
Can you suggest whether I should go with Apache Solr or another solution for
my problem?
Regards,
Pratik Thaker
"collection": "collection1",
"field": "eventID",
"level": 1
},
{
"node": "543004f0c92c0a651166ae80",
"schematype": "Employment",
"collection": "collection1",
"field": "eventID",
"level": 1
},
{
"node": "543004f0c92c0a651166ae8a",
"schematype": "Employment",
"collection": "collection1",
"field": "eventID",
"level": 1
},
{
"node": "543004f0c92c0a651166ae94",
"schematype": "Employment",
"collection": "collection1",
"field": "eventID",
"level": 1
},
{
"node": "543004f0c92c0a651166ae9d",
"schematype": "Customer",
"collection": "collection1",
"field": "eventID",
"level": 1
},
{
"EOF": true,
"RESPONSE_TIME": 38
}
]
}
}
If I rollup on the level field then the results are as expected but not
when the field is schematype. Any idea what's going on here?
Thanks,
Pratik
n
> wrote:
>
> > You'll need to use the sort expression to sort the nodes by schemaType
> > first. The rollup expression is doing a MapReduce rollup that requires
> > the records to be sorted by the "over" fields.
> >
> > Joel Bernstein
>
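A sketch of that fix (the stream and field names are assumed from the discussion above): sort the stream on the rollup's "over" field before wrapping it in rollup().

```
rollup(
    sort(
        search(collection1, q="*:*", fl="schematype,eventID", sort="eventID asc", qt="/export"),
        by="schematype asc"
    ),
    over="schematype",
    count(*)
)
```

The inner sort() guarantees that all tuples with the same schematype arrive consecutively, so the MapReduce-style rollup emits one bucket per value instead of duplicates.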
Hey Everyone,
This is about the facet function of Streaming Expressions. Is there any way
to set the limit for the number of facets to infinite? The *bucketSizeLimit*
parameter seems to accept only numbers greater than 0.
Thanks,
Pratik
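There is no documented "unlimited" value for bucketSizeLimit; a common workaround is simply to pass a number larger than any bucket count you expect (collection and field names below are placeholders):

```
facet(collection1,
      q="*:*",
      buckets="schematype",
      bucketSorts="count(*) desc",
      bucketSizeLimit=1000000,
      count(*))
```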
Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Jun 29, 2017 at 10:06 AM, Pratik Patel
> wrote:
>
> > Hey Everyone,
> >
> > This is about the facet function of Streaming Expression. Is there any
> way
> > to set limit for number of facets to infinite?
eaming expressions, we get the
following response:
org.apache.solr.client.solrj.SolrServerException: Server refused connection
> at: http://:8081/solr/collection1_shard1_replica1
We don't get this error if we let the Jetty server bind to all interfaces. Any
idea what the problem is here?
Thanks,
Pratik
illiseconds precision in date field "startTime" is
lost. Precision is preserved for non-zero milliseconds but it's being lost
for zero values. The field type of "startTime" field is as follows.
docValues="true" precisionStep="0"/>
Does anyone know how I can preserve milliseconds even if it's zero? Or is that
not possible at all?
Thanks,
Pratik
Thanks for the clarification. I'll change my code to accommodate this
behavior.
On Thu, Oct 5, 2017 at 6:24 PM, Chris Hostetter
wrote:
> : > "startTime":"2013-02-10T18:36:07.000Z"
> ...
> : handler. It gets added successfully but when I retrieve this document
> back
> : using "id
For now, you can probably use Cartesian function of Streaming Expressions
which Joel implemented to solve the same problem.
https://issues.apache.org/jira/browse/SOLR-10292
http://joelsolr.blogspot.com/2017/03/streaming-nlp-is-coming-in-solr-66.html
Regards,
Pratik
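A minimal sketch of that approach, assuming a multivalued field named `ancestors` as in the earlier messages: cartesianProduct() emits one tuple per value of the multivalued field.

```
cartesianProduct(
    search(collection1, q="*:*", fl="id,ancestors", sort="id asc", qt="/export"),
    ancestors,
    productSort="ancestors asc"
)
```

Each output tuple carries the same `id` with a single-valued `ancestors`, which makes the stream joinable on that field downstream.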
On Sat, Oct 28, 2017 at 7:38
This solution looks like normalizing data, like an m2m table in an SQL database,
> is it?
>
>
>
> 2017-10-29 21:51 GMT-02:00 Pratik Patel :
>
> > For now, you can probably use Cartesian function of Streaming Expressions
> > which Joel implemented to solve the same problem.
&
ection? Or just use these tuples at query time?
>
> 2017-10-30 11:00 GMT-02:00 Pratik Patel :
>
> > By including Cartesian function in Streaming Expression pipeline, you can
> > convert a tuple having one multivalued field into multiple tuples where
> > each tuple holds
Rollup needs documents to be sorted by the "over" field.
Check this for more details
http://lucene.472066.n3.nabble.com/Streaming-Expressions-rollup-function-returning-results-with-duplicate-tuples-td4342398.html
On Wed, Nov 1, 2017 at 3:41 PM, Kojo wrote:
> Wrap cartesianProduct function with
strings
java.lang.Boolean -> booleans
java.util.Date -> tdates
java.lang.Long, java.lang.Integer -> tlongs
java.lang.Number -> tdoubles
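The mapping above corresponds to the schemaless typeMapping section of the stock solrconfig.xml, which looks roughly like this (abbreviated to two entries; the real file lists one block per value class):

```xml
<processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
  <str name="defaultFieldType">strings</str>
  <lst name="typeMapping">
    <str name="valueClass">java.lang.Boolean</str>
    <str name="fieldType">booleans</str>
  </lst>
  <lst name="typeMapping">
    <str name="valueClass">java.util.Date</str>
    <str name="fieldType">tdates</str>
  </lst>
</processor>
```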
Regards,
Pratik Thaker
Hi Friends,
Can you please give me some details about the issue below?
Regards,
Pratik Thaker
From: Pratik Thaker
Sent: 07 February 2017 17:12
To: 'solr-user@lucene.apache.org'
Subject: DistributedUpdateProcessorFactory was explicitly disabled from this
updateRequestProcessorCha
Here is the same question in stackOverflow for better format.
http://stackoverflow.com/questions/42370231/solr-
dynamic-field-blowing-up-the-index-size
Recently, I upgraded from Solr 5.0 to Solr 6.4.1. I can run my app fine, but
the problem is that the index size with Solr 6 is way too large. In solr
ck what index file extensions
> contribute most to the difference? That could give a hint.
>
> Regards,
> Alex
>
> On 21 Feb 2017 9:47 AM, "Pratik Patel" wrote:
>
> > Here is the same question in stackOverflow for better format.
> >
> > http:
use of doc values should actually blow
> > up the size of your index considerably if they are in fields that get
> sent
> > a lot of data.
> >
> > On Tue, Feb 21, 2017 at 10:50 AM, Pratik Patel
> wrote:
> >
> >> Thanks for the reply. I can see that in
9/02/scaling-lucene-and-solr/#d0e71
Thanks,
Pratik
On Tue, Feb 21, 2017 at 12:03 PM, Pratik Patel wrote:
> I am using the schema from Solr 5, which does not have any field with
> docValues enabled. In fact, to ensure that everything is the same as Solr 5
> (except the breaking changes) I am usin
I have a field type in the schema to which a stopwords list has been applied.
I have verified that the path of the stopwords file is correct and that it is
being loaded fine in the Solr admin UI. When I analyse these fields using the
"Analysis" tab of the Solr admin UI, I can see that stopwords are being filtered
out. However,
> Attach &debug=query to your query and look at the parsed query that's
> returned.
> That'll tell you what was searched at least.
>
> You can also use the TermsComponent to examine terms in a field directly.
>
> Best,
> Erick
>
> On Tue, Feb 21, 2017 at 2
1c4e574f88505556987be57ef1af28d01b6d94":"\n1.0 =
Description_note:*their*, product of:\n 1.0 = boost\n 1.0 =
queryNorm\n",
},
"QParser":"LuceneQParser",
"timing":{ ... }
}
Thanks,
Pratik
On Wed, Feb 22, 2017 at 11:25 AM,
wildcard to test whether the stopword was indexed or not. Thanks
again.
Regards,
Pratik
On Wed, Feb 22, 2017 at 12:10 PM, Alexandre Rafalovitch
wrote:
> StopFilterFactory (and WordDelimiterFilterFactory and maybe others)
> are NOT multiterm aware.
>
> Using wildcards triggers the edg
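To illustrate, a typical stopword-filtered field type looks like the sketch below (the stopwords file name is an assumption). Wildcard queries go through the multiterm analysis chain, which skips StopFilterFactory, so a query like `*their*` can still match even though a plain-term query for a stopword matches nothing:

```xml
<fieldType name="text_stop" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- Applied to plain terms at index and query time,
         but NOT to wildcard/multiterm queries. -->
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```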
onnections of that document so that
graph traversal is possible. If not, what is the other way to index
graph data and use graph traversal? I am trying to explore graph traversal
and am new to it. Any help would be appreciated.
Thanks,
Pratik
ctory.createInstance(StreamFactory.java:351)
> ... 40 more
> Caused by: java.lang.NumberFormatException: For input string:
> "524efcfd505637004b1f6f24"
> at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
> at sun.misc.FloatingDecimal.parseDouble(Unknown Source)
> at java.lang.Double.parseDouble(Unknown Source)
> at
> org.apache.solr.client.solrj.io.ops.LeafOperation.(LeafOperation.java:48)
> at
> org.apache.solr.client.solrj.io.ops.EqualsOperation.(EqualsOperation.java:42)
> ... 44 more
I can see that solr is trying to parse storeid as double and hence the
NumberFormatException, even though this field is of type String in schema.
How can I fix this?
Thanks,
Pratik
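Until string comparisons are supported in that operation, the condition can usually be moved out of the comparison and into the query itself, as suggested later in this thread (a sketch with field names taken from the stack trace discussion):

```
search(collection1,
       q="*:*",
       fq="storeid:\"524efcfd505637004b1f6f24\"",
       fl="conceptid",
       sort="conceptid asc",
       qt="/export")
```

Filtering on storeid in the query avoids the equality operation entirely and is also more performant, since the filter is applied at search time rather than in the stream.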
it's not a stable version*
On Mon, Mar 13, 2017 at 1:34 PM, Pratik Patel wrote:
> Thanks Joel! This is just a simplified sample query that I created to
> better demonstrate the issue. I am not sure whether I want to upgrade to
> Solr 6.5, as only a developer version is available
has string
> comparisons.
>
> In the expression you're working with it would be much more performant
> though to filter the query on the storeid.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Mon, Mar 13, 2017 at 1:06 PM, Pratik Patel wrote:
>
>
)
> ),
> fl="concept_name",
> on="ancestors=conceptid"
> )
Response :
{
> "result-set": {
> "docs": [
> {
> "node": "524f03355056c8b53b4ed199",
> "field": "eventID",
> "level": 1,
> "count(*)": 2,
> "collection": "collection1",
> "ancestors": [
> "524f02845056c8b53b4e9871",
> "524f02755056c8b53b4e9269"
> ]
> },
> .
> }
Thanks,
Pratik
only the ids in eventLink
documents is because I don't want to duplicate data unnecessarily. It will
complicate maintenance of consistency in index when delete/update happens.
Is there any way I can achieve this?
Thanks!
Pratik
On Tue, Mar 14, 2017 at 11:24 AM, Joel Bernstein wrote:
> Wo
> scatter="leaves",
> count(*)),
> gt(count(*),1))),
> fl="concept_name",
> on="ancestors=conceptid")
>
> Joel Bernstein
> http:
ng this article (
http://joelsolr.blogspot.com/2015/04/the-streaming-api-solrjio-basics.html)
I could implement and run single streaming expression,
search(collection1,q="*:*",fl="conceptid",sort="conceptid
asc",fq=storeid:"524efcfd505637004b1f6f24",fq=tags:"Company",fq=tags:"Prospects2",
qt="/export")
But I can not find a way to create a nested query. How can I do that?
Thanks,
Pratik
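Nesting is done by passing one expression as an argument to another. For example, a sketch joining two sorted searches on the same key (field names assumed from the search() example above; both streams must be sorted on the join key):

```
innerJoin(
    search(collection1, q="*:*", fl="conceptid,concept_name", sort="conceptid asc", qt="/export"),
    search(collection1, q="tags:Company", fl="conceptid", sort="conceptid asc", qt="/export"),
    on="conceptid"
)
```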
Great, I think I can achieve what I want by combining "select" and
"cartersian" functions in my expression. Thanks a lot for help!
Regards,
Pratik
On Wed, Mar 15, 2017 at 10:21 AM, Joel Bernstein wrote:
> I haven't created the jira ticket for this yet. It's fa
Hi All,
I have been facing this issue for a very long time; can you please provide your
suggestions on it?
Regards,
Pratik Thaker
-Original Message-
From: Pratik Thaker [mailto:pratik.tha...@smartstreamrdu.com]
Sent: 09 February 2017 21:24
To: 'solr-user@lucene.apache.org'
S
Hi Ishan,
After making the suggested changes to solrconfig.xml, I did an upconfig on all 3
Solr VMs and restarted the Solr engines.
But I am still facing the same issue. Is there something I am missing?
Regards,
Pratik Thaker
-Original Message-
From: Ishan Chattopadhyaya [mailto:ichattopadhy
Hi Alessandro,
Can you please suggest the correct order for adding the processors?
I have 5 collections, 6 shards, a replication factor of 2, and 3 nodes on 3
separate VMs.
Regards,
Pratik Thaker
-Original Message-
From: alessandro.benedetti [mailto:a.benede...@sease.io]
Sent
find any
option which lets you specify a label in the query. I am using solr 6.4.1
in cloud mode and clustering algorithm is
"org.carrot2.clustering.lingo.LingoClusteringAlgorithm"
Thanks,
Pratik
ext^4");
try {
QueryResponse response =
getStore().getEnvironment().getSolr().query(request,
SolrRequest.METHOD.POST);
NamedList rsp = response.getResponse();
ArrayList> skg_resp =
(ArrayList>) rsp.get("clusters");
if (skg_resp != null) {
}
}
Any idea what is wrong here? Any pointer to documentation on how to
construct request for Semantic Knowledge Graph through solrJ would be very
helpful.
Thanks
Pratik
; Abstract_note:(*machine*)^4
Are these two queries any different?
The relative boosting is the same in both of them.
I can see that they produce the same results and ordering. The only difference
is that the score in case 1 is 10 times the score in case 2.
Thanks,
Pratik
as to how this can be achieved? Any direction
would be a great help!
Thanks And Regards,
Pratik
u envisage. As in, is
> there a training corpus, are you looking at NGram techniques, etc.
>
> Regards,
> Alex.
> On Wed, 17 Oct 2018 at 13:40, Pratik Patel wrote:
> >
> > Hi Everyone,
> >
> > I have been using Semantic Knowledge Graph for document summarizat
nt to use this field as an input to the Semantic Knowledge Graph. The
plugin works great for words, but now I want to use it for phrases. Any
idea around this would be really helpful.
Thanks a lot!
- Pratik
ed setting enablePositionIncrements="false" for stop word filter but
that parameter only works for lucene version 4.3 or earlier. Looks like
it's an open issue in lucene
https://issues.apache.org/jira/browse/LUCENE-4065
For now, I am trying to find a workaround using PatternReplaceFilterFactory.
Regard
single word string in that field.
@David I am using SKG through the plugin. So it is a POST request with
query in body. I haven't yet upgraded to version 7.5.
Thank you all for the help!
Regards,
Pratik
On Fri, Nov 16, 2018 at 8:36 AM David Hastings
wrote:
> Which function of the SKG are y
;type":"text_shingles",
> "limit":30,
> "discover_values":true
> }
> ]
> }
What I am expecting is that SKG will return words/phrases that are related
to the term "foo". I am filtering the text through StopWordFilter before
that. I have a
that is, given a query matching
set of documents, find interestingTerms for that set of documents based on
tf-idf?
Thanks!
Pratik
http://localhost:8081/solr/collection1/mlt?debugQuery=on&q=tags:voltage&mlt.boost=true&mlt.fl=my_field&mlt.interestingTerms=details&mlt.mindf=1&mlt.mintf=2&mlt.minwl=3&q=*:*&rows=100&start=0
Thanks,
Pratik
On Mon, Jan 21, 2019 at 2:52 PM Aman Tandon wrote:
>
Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Mon, Jan 21, 2019 at 3:02 PM Pratik Patel wrote:
>
> > Aman,
> >
> > Thanks for the reply!
> >
> > I have tried with corrected query but it doesn't solve the problem. also,
> > my tags filter matches multipl
Problem #1 can probably be solved by using "fetch" function. (
https://lucene.apache.org/solr/guide/6_6/stream-decorators.html#fetch)
Problem #2 and #3 can be solved by normalizing the graph connections and by
applying cartesianProduct on multi valued field, as described here.
http://lucene.472066
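A sketch of the fetch() part, using the field names from this thread: fetch() enriches each tuple with extra fields looked up from a collection, which avoids duplicating data in the link documents.

```
fetch(collection1,
      cartesianProduct(
          search(collection1, q="*:*", fl="id,ancestors", sort="id asc", qt="/export"),
          ancestors
      ),
      fl="concept_name",
      on="ancestors=conceptid")
```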
lpful.
Is it safe to use the default managed-schema file as a base and add your own
fields to it?
Thanks,
Pratik
Hey Erick, thanks for the clarification! What about the solrconfig.xml file?
Sure, it should be customized to suit one's needs, but can it be used as a
base or is it best to create one from scratch?
Thanks,
Pratik
On Wed, Feb 7, 2018 at 5:29 PM, Erick Erickson
wrote:
> That's really
That makes it clear. Thanks a lot for your help.
Pratik
On Feb 7, 2018 10:33 PM, "Erick Erickson" wrote:
> It can pretty much be used as-is, _except_
>
> you'll find one or more entries in your request handlers like:
> _text_
>
> Change "_text_&qu
I had a similar issue with index size after upgrading to version 6.4.1 from
5.x. The issue for me was that the field which caused the index size to be
increased disproportionately had a field type ("text_general") for which the
default value of omitNorms was not true. Turning it on explicitly on the field
fixed
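The fix amounts to something like this in the schema (a minimal sketch; the analyzer shown is a placeholder for whatever the field type already defines):

```xml
<!-- Norms add per-document scoring data for each field; disabling
     them can shrink the index when length normalization and
     index-time boosts are not needed. -->
<fieldType name="text_general" class="solr.TextField" omitNorms="true" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
</fieldType>
```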
Feb 14, 2018 at 1:01 PM, Alessandro Benedetti
wrote:
> Hi pratik,
> how is it possible that just the norms for a single field were causing such
> a massive index size increment in your case ?
>
> In your case I think it was for a field type used by multiple fields, but
> it&
@Alessandro I will see if I can reproduce the same issue just by turning
off omitNorms on field type. I'll open another mail thread if required.
Thanks.
On Thu, Feb 15, 2018 at 6:12 AM, Howe, David
wrote:
>
> Hi Alessandro,
>
> Some interesting testing today that seems to have gotten me closer t
Using cursor marker might help as explained in this documentation
https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html
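The basic pattern looks like this (field names are placeholders; the sort must be deterministic and include the uniqueKey as a tiebreaker, and the cursor token shown is a placeholder for whatever the server returns):

```
# First request: start the cursor
/solr/collection1/select?q=*:*&sort=timestamp asc,id asc&rows=500&cursorMark=*

# Each response includes "nextCursorMark"; pass it back verbatim
/solr/collection1/select?q=*:*&sort=timestamp asc,id asc&rows=500&cursorMark=<token-from-previous-response>
```

Iteration stops when a response's nextCursorMark equals the cursorMark that was sent, which avoids the deep-paging cost of large start offsets.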
On Fri, May 18, 2018 at 4:13 PM, Deepak Goel wrote:
> I wonder if in-memory-filesystem would help...
>
> On Sat, 19 May 2018, 01:03 Erick Erickson,
> wrote:
>
>
We can limit the scope of graph traversal by applying some filter along the
way as follows.
gatherNodes(emails,
walk="john...@apache.org->from",
fq="body:(solr rocks)",
gather="to")
Is it possible to replace "body:(solr rocks)" by some streaming expression
lik
original events. But it would be a great improvement if I can also
limit the traversal so that only events from original list are visited at
second hop. That is why, I want to apply original search() function as a
filter in outer gatherNodes() function. I know it's a long shot but
considering t
e any better approach for this or is there any java library to do
this? My search for it didn't yield anything so I was wondering if anyone
here has an idea.
Thanks,
Pratik
,bar) as id
)
I can use the merge() function, but my streaming expression is quite complex
and that would make it even more complex, as it would be a roundabout way of
doing it. Any idea how this can be achieved?
Thanks,
Pratik
rc/java/org/apache/solr/client/solrj/io/eval/AppendEvaluator.java <
> https://github.com/apache/lucene-solr/blob/master/solr/
> solrj/src/java/org/apache/solr/client/solrj/io/eval/AppendEvaluator.java>
> >>
> >>
> >>> On Jun 27, 2018, at 12:58 PM, Pra
; ConcatOperationTest.java <https://github.com/apache/
> lucene-solr/blob/branch_6_4/solr/solrj/src/test/org/
> apache/solr/client/solrj/io/stream/ops/ConcatOperationTest.java>
>
>
> > On Jun 27, 2018, at 1:27 PM, Aroop Ganguly
> wrote:
> >
> > It should, but
Joel Bernstein wrote
> Ok, that sounds like a bug. I can create a ticket for this.
>
> On Mon, Jul 1, 2019 at 5:57 PM Pratik Patel <
> pratik@
> > wrote:
>
>> I think the problem was that my streaming expression was always returning
>> just one node. When
Thanks a lot. I will update the ticket with more details if appropriate.
Pratik
On Wed, Jan 29, 2020 at 10:07 AM Joel Bernstein wrote:
> Here is the ticket:
> https://issues.apache.org/jira/browse/SOLR-14231
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
>
d,
de, def
Is there any filter available in Solr with which this can be achieved?
If writing a custom filter is the only possible option, then I want to know
whether it's possible to access adjacent tokens in the custom filter.
Any idea about this would be really helpful.
Thanks,
Pratik
eatly helpful to see how more
than one token can be consumed. I can implement my custom logic once I
have access to multiple tokens from the previous filter.
Thanks
Pratik
On Mon, Feb 10, 2020 at 2:47 AM Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Pratik,
> You might be a
Hello Everyone,
I am trying to update a field of a child document using the atomic updates
feature. I am using Solr and SolrJ version 8.5.0.
I have ensured that my schema satisfies the conditions for atomic updates,
and I am able to do atomic updates on normal documents, but with nested
child documents,
etFieldFromHierarchy(SolrInputDocument
> completeHierarchy, String fieldPath) {
> final List docPaths =
> StrUtils.splitSmart(fieldPath.substring(1), '/');
> ..
>}
Any idea what's wrong here?
Thanks
On Wed, Se
Following are the approaches I have tried so far and both results in NPE.
*approach 1
TestChildPOJO testChildPOJO = new TestChildPOJO().cId( "c1_child1" )
.conceptid( "c1" )
uld also include _nest_path and
> _nest_parent_. Your particular exception seems to be triggering
> something (maybe a bug) related to - possibly - missing _nest_path_
> field.
>
> See:
> https://lucene.apache.org/solr/guide/8_5/indexing-nested-documents.html#indexing-nested-documents
doing before, so there may be
> something I am missing myself.
>
>
> On Thu, 17 Sep 2020 at 12:46, Pratik Patel wrote:
> >
> > Thanks for your reply Alexandre.
> >
> > I have "_root_" and "_nest_path_" fields in my schema but not
&
n for this.
Any suggestions on how such documents should be indexed?
I am using SolrJ version 7.7.1 and Solr 7.4.0
Thanks!
Pratik
ation parameters. Is
there any way to do that?
Thanks!
Pratik
such kind of
testing? Any direction with this would be really helpful.
Thanks!
Pratik
ta in your
"testdata/test-data.json" file? I want to be sure about using the correct
format.
Thanks!
Pratik
On Tue, May 14, 2019 at 1:14 PM Angie Rabelero
wrote:
> Hi, I’ll advised you to extend the class SolrCloudTestCase, which extends
> the MiniSolrCloudCluster. Theres a hello
I have made sure that a new directory is always used as BASE_DIR to
MiniSolrCloudCluster.
Can anyone please throw some light on what's wrong here? Am I hitting a
Solr test framework issue? I am using Solr test framework version 7.7.1.
Thanks a lot,
Pratik
is a way to load pre-created index
files into the cluster.
I checked the solr test framework and related examples but couldn't find
any example of index files being loaded in cloud mode.
Is there a way to load index files into solr running in cloud mode?
Thanks!
Pratik
ssible to do this through solrJ though.
Please let me know if there is better way available, it would really help.
Just so you know, I am trying to do this for unit tests related to solr
queries. Ultimately I want to load some pre-created data into
MiniSolrCloudCluster.
Thanks a lot,
Pratik
On We
Thanks guys, I found that the issue I had was because of some binary files
(NLP models) in my configuration. Once I fixed that, I was able to set up a
cluster. These exceptions are still logged but they are logged as INFO and
were not the real issue.
Thanks Again
Pratik
On Tue, Jun 4, 2019 at 4
If your children documents have a link to parent documents (like parent id
or something) then you can use graph traversal to do this.
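A sketch of that idea with gatherNodes(), assuming the child documents store the parent's id in a field named `parentid` (both names are placeholders): starting from a known parent id, gather the ids of all documents that point at it.

```
gatherNodes(collection1,
            walk="parentDoc123->parentid",
            gather="id")
```

Additional gatherNodes() wrappers can walk further hops, e.g. from children to grandchildren, as long as each level has a link field to follow.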
On Mon, Jun 10, 2019 at 8:01 AM Jai Jamba
wrote:
> Can anyone help me in this ?
>
>
>
ata in the new collection. I see that the new collection is created, but it
seems to be without any data.
Am I missing something here? Any idea what could be the cause of this?
Thanks!
Pratik
On Thu, Jun 6, 2019 at 11:18 AM Pratik Patel wrote:
> Thanks for the reply Alexandre, only special thi
can set up
the cluster fine if I remove all binary files bigger than 5 MB.
I have noticed the same issue when I try to restore a backup whose
configuration files are bigger than 5 MB.
Does Jetty have some limit on the size of configuration files? Is there a
way to override it?
Thanks,
Pratik
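The limit here is usually not Jetty but ZooKeeper: configset files are stored as znodes, and ZooKeeper's jute.maxbuffer setting caps znode size (about 1 MB by default, though deployments vary). If large files in the configset are unavoidable, one documented approach is to raise jute.maxbuffer on both the ZooKeeper servers and the Solr JVMs to a value somewhat larger than the largest file. The 10 MB value below is an arbitrary example:

```
# On each ZooKeeper server (e.g. in zookeeper-env.sh):
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=10485760"

# On each Solr node (e.g. in solr.in.sh):
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
```

As the reply below notes, keeping NLP/ML models out of ZooKeeper entirely, on a shared filesystem all nodes can reach, is generally the safer design.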
a little bit larger than your largest
> file).
> If possible you can try to avoid storing the NLP / ML models in Solr but
> provide them on a share or similar where all Solr nodes have access to.
>
> > Am 11.06.2019 um 00:32 schrieb Pratik Patel :
> >
> > Hi,
> >
n 8.1 and that also has the same issue.
Is there any JIRA open for this or any patch available?
[image: image.png]
Thanks,
Pratik
I think the problem was that my streaming expression was always returning
just one node. When I added more data so that I can have more than one
node, I started seeing the result.
On Mon, Jul 1, 2019 at 11:21 AM Pratik Patel wrote:
> Hello Everyone,
>
> I am trying to execute
Great, thanks!
On Tue, Jul 2, 2019 at 6:37 AM Joel Bernstein wrote:
> Ok, that sounds like a bug. I can create a ticket for this.
>
> On Mon, Jul 1, 2019 at 5:57 PM Pratik Patel wrote:
>
> > I think the problem was that my streaming expression was always returning
> &g
uest = new QueryRequest(paramsLoc, SolrRequest.METHOD.POST);
Is this also a bug?
On Tue, Jul 2, 2019 at 10:17 AM Pratik Patel wrote:
> Great, thanks!
>
> On Tue, Jul 2, 2019 at 6:37 AM Joel Bernstein wrote:
>
>> Ok, that sounds like a bug. I can create a ticket for this.
>>
it is the
prescribed and recommended way to get children with parent. What is the
best practice to achieve this?
Thanks!
Pratik
/visualization.adoc#visualization
Thanks,
Pratik
On Wed, Oct 16, 2019 at 10:54 AM Joel Bernstein wrote:
> Hi,
>
> The Visual Guide to Streaming Expressions and Math Expressions is now
> complete. It's been published to Github at the following location:
>
>
> https://github.c
value.
Is there a related JIRA that I can track?
@Joel, is there any way or workaround to achieve this, i.e., to know whether a
certain field is null or not?
Thanks and Regards,
Pratik
t_2" instead of
"config_set_1"?
I know that if I upload new configuration with the same name "config_set_1"
and reload the collection then it will have new configuration but I want to
keep the old config set, add a new one and make changes so that collection1
starts using new config set.
Is it possible?
Thanks and Regards
Pratik
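One documented way to do this is the Collections API MODIFYCOLLECTION action, which can point an existing collection at a different config set (collection and config set names taken from the question above):

```
/admin/collections?action=MODIFYCOLLECTION&collection=collection1&collection.configName=config_set_2
/admin/collections?action=RELOAD&collection=collection1
```

After the reload, collection1 uses config_set_2, while config_set_1 remains in ZooKeeper untouched for other collections or for rollback.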
Thanks Shawn! This is what I needed.
On Wed, Nov 20, 2019 at 3:59 PM Shawn Heisey wrote:
> On 11/20/2019 1:34 PM, Pratik Patel wrote:
> > Let's say I have a collection called "collection1" which uses config set
> > "config_set_1".
> > Now, using &q