prevent the web application from
> issuing such queries. This may mean something like supporting paging
> only among the first 10,000 results. A typical requirement may also be to
> be able to see the last results of a query, but this can be accomplished
> by allowing sorting in both ascending
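The "sort both ways" workaround described above can be sketched in a few lines; this is a client-side illustration of the idea, not Solr API code, and the page size and data are made up:

```python
# Instead of paging deep into an ascending sort to reach the last results,
# query with the sort reversed and reverse the returned page client-side.

def last_page(docs_sorted_asc, page_size):
    """Return the final page of the ascending order by taking the first
    page of the descending order and reversing it."""
    docs_sorted_desc = sorted(docs_sorted_asc, reverse=True)  # what Solr returns for sort=... desc
    first_desc_page = docs_sorted_desc[:page_size]
    return list(reversed(first_desc_page))

ids = list(range(100))        # stand-in for a 100-doc result set
print(last_page(ids, 10))     # the last 10 docs of the ascending order
```

This keeps every request within the first page of some sort order, so the 10,000-result paging cap is never hit.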
If you use SolrJ, in particular CloudSolrClient, it’s ZooKeeper-aware
> and will both avoid dead nodes _and_ distribute the top-level
> queries to all the Solr nodes. It’ll also be informed when a dead
> node comes back and put it back into the rotation.
>
> Best,
> Erick
>
> >
that these problems may end?
Even if I downsize the Solr cloud setup to 2 boxes with 2 nodes each and
fewer shards than the 16 shards that I have now, I would like to know your
opinion about the question above.
Thank you,
Koji
On Wed, Aug 14, 2019 at 14:15, Erick Erickson
wrote:
> K
Heisey
wrote:
> On 8/13/2019 9:28 AM, Kojo wrote:
> > Here are the last two gc logs:
> >
> >
> https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
>
> Thank you for that.
>
> Analyzing the 20MB gc log actually looks like a pretty
Shawn,
Here are the last two gc logs:
https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
Thank you,
Koji
On Tue, Aug 13, 2019 at 09:33, Shawn Heisey
wrote:
> On 8/13/2019 6:19 AM, Kojo wrote:
> > --
> > tail -f node1/logs/solr_
2019 at 13:26, Shawn Heisey
wrote:
> On 8/12/2019 5:47 AM, Kojo wrote:
> > I am using Solr cloud on this configuration:
> >
> > 2 boxes (one Solr in each box)
> > 4 instances per box
>
> Why are you running multiple instances on one server? For most setups,
>
Hi,
I am using Solr cloud on this configuration:
2 boxes (one Solr in each box)
4 instances per box
At this moment I have an active collection with about 300,000 docs. The
other collections are not being queried. The active collection is
configured:
- shards: 16
- replication factor: 2
These t
I have a synonyms.txt mapping some words.
On my Solr 4.9, when I search for a word that is in the synonyms.txt, the
debug output shows the following:
"rawquerystring": "interleucina-6",
"querystring": "interleucina-6",
"parsedquery": "(+DisjunctionMaxQuerytext:interleucin
text:interleucin text:6
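For reference, a synonyms.txt entry that could map a term like the one above might look like this; the actual mappings (and the il-6 alias) are assumptions, not taken from the thread:

```
# comma-separated terms are treated as equivalents and expanded at analysis time
interleucina-6, il-6
```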
This is a ZooKeeper question, but I wonder if you can help me.
Is it possible to version Solr cloud config files directly on ZooKeeper
using Git or any other version control system? Or do I really need to use
the ZooKeeper CLI?
When I say versioning directly on ZooKeeper, I mean versioning a folder of
Zookeeper o
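A common approach, sketched below, is to keep the configset in Git on disk and treat ZooKeeper as a deploy target, using the bin/solr zk commands; the config name, local paths, and ZooKeeper address here are assumptions:

```shell
# pull the current configset out of ZooKeeper into the working tree
bin/solr zk downconfig -n my_config -d ./configs/my_config -z localhost:9983

# version it like any other source
cd configs && git add my_config && git commit -m "snapshot configset" && cd ..

# after editing, push it back up (collections using it then need a reload)
bin/solr zk upconfig -n my_config -d ./configs/my_config -z localhost:9983
```

ZooKeeper itself has no version-control hooks, so Git tracks the on-disk copy and ZooKeeper only ever holds the deployed snapshot.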
},
{
"node": "01/11664-0",
"collection": "my_collection",
"field": "process_number",
"ancestors": [],
"level": 0
},
{
"EOF": true,
Hello everybody, I have a question about Streaming Expressions/Graph
Traversal.
The following pseudocode works fine:
complement(
    search(),
    sort(
        gatherNodes(collection, search())
    )
)
However, when I feed the SE resultset above to another gatherNodes
function, I have a result dif
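A hedged sketch of the nesting being attempted is below; the collection, field names, and parameters are assumptions, not taken from the thread. Note that complement() requires both streams sorted on the fields named in `on`, and that gatherNodes() emits its gathered values in a field called `node`, so the outer walk must start from the field the inner stream actually emits:

```
gatherNodes(collection,
            complement(search(collection, q="*:*", fl="id", sort="id asc"),
                       sort(by="node asc",
                            gatherNodes(collection,
                                        search(collection, q="*:*", fl="id", sort="id asc"),
                                        walk="id->parent_id",
                                        gather="id")),
                       on="id=node"),
            walk="id->parent_id",
            gather="id")
```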
I think that you can use stream evaluators in your expressions to filter
the values you want:
https://lucene.apache.org/solr/guide/6_6/stream-evaluators.html
On Mon, Oct 22, 2018 at 12:10, RAUNAK AGRAWAL
wrote:
> Thanks a lot Jan. Will try with 7.5
>
> I am currently using 7.2.1 ver
False alarm.
Koji
On Thu, Sep 13, 2018 at 14:22, Kojo wrote:
> I can do that.
> I tell you when I open the Jira.
>
>
>
> On Thu, Sep 13, 2018 at 14:05, Joel Bernstein
> wrote:
>
>> I'll have to take a look and see if I can reproduce this exact
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Thu, Sep 13, 2018 at 1:03 PM Kojo wrote:
>
> > Same query feeding 25000 tuples to gatherNodes:
> >
> > gatherNodes(graph_auxilios,
> > search(graph_auxilios, zkHost="localhost:9983",qt="/select",
>
.
On Thu, Sep 13, 2018 at 13:50, Joel Bernstein
wrote:
> I see that hits=0 in this logged request. Are there log requests that show
> results found for one of these queries?
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Thu, Sep 13, 2018
at the same time when it returns empty
result.
On Thu, Sep 13, 2018 at 10:34, Joel Bernstein
wrote:
> That's odd behavior. What do the logs look like? This will produce a series
> of queries against the projects collection. Are you seeing those in the
> logs? Any errors?
Hi,
If I try to feed gatherNodes with more than 25000 tuples, it gives me an
empty result-set.
gatherNodes(projects,
            search(projects, zkHost="localhost:9983", qt="/select", rows=3,
                   q=*:*, fl="id", sort="id asc"),
            walk="id->parent_id",
            gather="id")
Response:
{ "result-set": { "docs": [ {
python client is
>> correctly escaped/encoded HTTP.
>>
>> One easy way is to use netcat to pretend to be a server:
>> $ nc -l 8983
>> And then point the python client at that and send the request.
>>
>> -Yonik
>>
>>
>> On Tue, May 8, 2
> And then point the python client at that and send the request.
>
> -Yonik
>
>
> On Tue, May 8, 2018 at 9:17 PM, Kojo wrote:
> > Thank you all. I tried escaping but it is still not working
> >
> > Yonik, I am using Python Requests. It works if my fq is a single
ent
a request that this server could not understand.\n\n\nApache/2.2.15 (Oracle) Server at leydenh Port
80\n\n'
Thank you,
2018-05-08 18:46 GMT-03:00 Yonik Seeley :
> On Tue, May 8, 2018 at 1:36 PM, Kojo wrote:
> > If I tag the fq query and I query for a simple word it works fin
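One way to check what the client actually sends is to let Python Requests build the URL without sending anything, via a prepared request; the collection name and fq value below are made up:

```python
import requests

# Let the library do the URL encoding instead of hand-building the query
# string; local-params braces, "!", "=", and spaces all need escaping.
params = {
    "q": "*:*",
    "fq": '{!tag=type}type:("scholarship" OR "grant")',
    "wt": "json",
}
req = requests.Request("GET", "http://localhost:8983/solr/mycol/select", params=params)
prepared = req.prepare()   # encodes the request without opening a connection
print(prepared.url)        # inspect exactly what would go on the wire
```

Comparing this URL with what netcat captures shows whether the escaping, or something else, is the problem.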
Hello,
recently I have changed the way I get facet data from Solr. I was using GET
method on request but due to the limit of the query I changed to POST
method.
Below is a sample of the data I send to Solr in order to get facets. But
there is something here that I don't understand.
If I do not
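The GET-to-POST switch can be sketched with Python Requests (the client used elsewhere in this thread); the collection and field names are made up, and prepare() builds the request without sending it:

```python
import requests

url = "http://localhost:8983/solr/mycol/select"
# With POST, the parameters travel in the request body, so the
# URL-length limit that breaks long facet queries no longer applies.
data = {"q": "*:*", "rows": 0, "facet": "true", "facet.field": "process_number"}
prepared = requests.Request("POST", url, data=data).prepare()

print(prepared.headers["Content-Type"])  # form-encoded, which /select accepts
print(prepared.body)
```

Solr treats form-encoded POST parameters to /select exactly like query-string parameters, so only the transport changes, not the query semantics.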
automagically on
schemaless mode, download it, adjust it, and upload it to zk as described in
the documentation.
Thanks,
Robson
2018-04-17 11:49 GMT-03:00 Shawn Heisey :
> On 4/17/2018 8:15 AM, Kojo wrote:
> > I am trying schemaless mode and it seems to work very nicely, and there is
> >
I have just deleted using command line and worked as expected!
2018-04-17 11:15 GMT-03:00 Kojo :
> Hi all,
>
> I am trying schemaless mode and it seems to work very nicely, and there is
> no overhead to write a custom schema for each type of collection that we
> need to index.
>
Hi all,
I am trying schemaless mode and it seems to work very nicely, and there is
no overhead to write a custom schema for each type of collection that we
need to index.
However we are facing a strange problem. Once we have created a collection
and indexed data on that collection, if we need to ma
rst doc has field X with a value of 1, it
> infers that this field
> is an int type. If doc2 has a value of 1.0, the doc fails with a parsing
> error.
>
> FYI,
> Erick
>
> On Tue, Apr 3, 2018 at 2:39 PM, Kojo wrote:
> > Hi Solrs,
> > We have a Solr cloud running in t
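The type-inference failure Erick describes above can be simulated with a toy sketch; this is an illustration of the failure mode only, an assumption-laden stand-in, not Solr's actual update-processor logic:

```python
# Toy model: the first document to use a field "locks in" the guessed type,
# and later documents must parse under that type or indexing fails.

def infer_type(value: str):
    for cast in (int, float):
        try:
            cast(value)
            return cast
        except ValueError:
            pass
    return str

schema = {}

def index_doc(doc):
    for field, value in doc.items():
        guessed = infer_type(value)
        locked = schema.setdefault(field, guessed)  # first doc wins
        locked(value)                               # raises ValueError on mismatch

index_doc({"X": "1"})        # field X is now locked to int
try:
    index_doc({"X": "1.0"})  # int("1.0") raises, mirroring the parsing error
except ValueError:
    print("doc rejected: field X is locked to int")
```

This is why mixed collections work: the guessing happens per collection, so a schemaless collection's locked-in guesses never affect a collection with an explicit schema.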
Hi Solrs,
We have a Solr cloud running in three nodes.
Five collections are running in schema mode and we would like to create
another collection running schemaless.
Can schema and schemaless collections run together on the same nodes?
I am not sure, because on this page it starts solr in schemaless m
can be run. Feel free to create a
> jira for this.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Mon, Jan 29, 2018 at 9:58 AM, Kojo wrote:
>
> > Hi solr-users!
> > I have a Streaming Expression which joins two search SE, one of them is
> > evaluate
Hi solr-users!
I have a Streaming Expression which joins two search SE, one of them is
evaluated on a cartesianProduct SE.
I'm trying to run that in parallel mode but it does not work.
Trying a very simple parallel I can see that it works:
parallel(
search(
But this one I'm trying to run,
I'm sorry, everything is working fine!
2018-01-23 16:44 GMT-02:00 Kojo :
> I am trying to solve one problem, exactly as the case described here:
>
> http://lucene.472066.n3.nabble.com/Streaming-expression-API-innerJoin-on-
> multi-valued-field-td4353794.html
>
> I cannot accom
I am trying to solve one problem, exactly as the case described here:
http://lucene.472066.n3.nabble.com/Streaming-expression-API-innerJoin-on-multi-valued-field-td4353794.html
I cannot accomplish that on Solr 6.6, my streaming expression returns
nothing:
hashJoin(
search(scholarship, zkHost=
e bit more clear that memory will hardly be
affected if I use docValues, I will start to think about disk usage growth
and how much it impacts the infrastructure.
Thanks again,
2017-11-24 16:16 GMT-02:00 Erick Erickson :
> Kojo:
>
> bq: My question is, isn´t it to
> expens
I think that I found the solution. After the analysis, I can change from the
/export request handler to the /select request handler in order to obtain
the other fields. I will try that.
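The switch would look roughly like this in the search function; the collection, fields, zkHost, and rows are assumptions. qt="/select" allows stored, non-docValues fields in fl, at the cost of being bounded by rows, while the default /export handler streams the whole sorted result set but requires docValues on every field in fl:

```
search(my_collection,
       zkHost="localhost:9983",
       qt="/select",
       q="*:*",
       fl="id,title",
       sort="id asc",
       rows=1000)
```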
2017-11-24 15:15 GMT-02:00 Kojo :
> Thank you very much for your answer, Shawn.
>
> That is it, I was looking for anoth
Shawn Heisey :
> On 11/23/2017 1:51 PM, Kojo wrote:
>
> >> I am working on Solr to develop a tool for analysis. I am using the search
>> function of Streaming Expressions, which requires a field to be indexed
>> with docValues enabled, so I can get it.
>>
>> Sup
list
or not.
I appreciate your attention.
Thank you,
-- Forwarded message --
From: Kojo
Date: 2017-11-23 18:51 GMT-02:00
Subject: docValues
To: solr-user@lucene.apache.org
Hi,
I am working on Solr to develop a tool for analysis. I am using the search
function of Streaming
Hi,
I am working on Solr to develop a tool for analysis. I am using the search
function of Streaming Expressions, which requires a field to be indexed
with docValues enabled, so I can get it.
Suppose that after someone finishes the analysis, and would like to get
other fields of the resultset that
> On Wed, Nov 8, 2017 at 7:44 AM, Kojo wrote:
>
> > Amrit,
> > as far as I understand, in your example I have resulted documents
> > aggregated by the rollup function, but to get the documents themselves I
> > need to make another query that will get fq's cached results,
looking
for that but I haven't found it.
2017-11-08 2:35 GMT-02:00 Amrit Sarkar :
> Kojo,
>
> Not sure what you mean by making two requests to get documents. A
> "search" streaming expression can be passed with "fq" parameter to filter
> the results and rollup on
Hi,
I am working on a PoC of a front-end web app to provide an interface for the
end user to search and filter data on Solr indexes.
I have been trying Streaming Expressions for about a week and I am fairly
keen on using them to search and filter indexes on the Solr side. But I am
not sure whether this is the right ap
I would like to try that!
On Nov 1, 2017 18:04, "Will Hayes" wrote:
There is a community edition of App Studio for Solr and Elasticsearch being
released by Lucidworks in November. Drop me a line if you would like to get
a preview release.
-wh
--
Will Hayes | CEO | Lucidworks
direct. +1
ts to be sorted by the "over" field.
> Check this for more details
> http://lucene.472066.n3.nabble.com/Streaming-Expressions-rollup-function-
> returning-results-with-duplicate-tuples-td4342398.html
>
> On Wed, Nov 1, 2017 at 3:41 PM, Kojo wrote:
>
> > Wrap carte
Wrap cartesianProduct function with fetch function works as expected.
But the rollup function over cartesianProduct doesn't aggregate on a
returned field of the cartesianProduct.
The field "id_researcher" below is a multivalued field:
This one works:
fetch(reasercher,
cartesianProduct(
Everything is working fine; this functional programming is amazing.
Thank you!
2017-10-31 12:31 GMT-02:00 Kojo :
> Thank you, I am just starting with Streaming Expressions. I will try this
> one later.
>
> I will open another thread, because I can't do some simple queries using
be pipelined,
> the next stage/function of pipeline will work on the new tuples generated.
>
> On Mon, Oct 30, 2017 at 10:09 AM, Kojo wrote:
>
> > Do you store these new tuples, created by Streaming Expressions, in a new
> > Solr cloud collection? Or just use these tuples in que
with multivalued fields. This happens
> in the Streaming Expression pipeline so you don't have to flatten your
> documents in index.
>
> On Mon, Oct 30, 2017 at 8:39 AM, Kojo wrote:
>
> > Hi,
> > just a question, I have no deep background on Solr, Graph etc.
> > This s
I don't see a jira ticket for this yet. Feel free to create it and reply
> > back with the link.
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Fri, Oct 27, 2017 at 9:55 AM, Kojo wrote:
> >
> > > Hi, I was looking for in
I am digging into graph traversal. I want to make queries crossing
collections.
Using the gatherNodes function in Streaming Expressions, it is possible, as
in the example below
gatherNodes(logs,
gatherNodes(emails,
search(emails, q="body:(solr rocks)",
fl="from", sort
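The expression above is cut off; a completed version in the style of the Solr Reference Guide's cross-collection example might look like the following (the sort, walk, and gather values are assumptions filled in for illustration):

```
gatherNodes(logs,
            gatherNodes(emails,
                        search(emails, q="body:(solr rocks)",
                               fl="from", sort="from asc"),
                        walk="from->from",
                        gather="to"),
            walk="node->user",
            gather="contentID")
```

The inner gatherNodes emits its gathered addresses in the `node` field, which is why the outer expression walks from `node` into the second collection.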
Hi, I was looking for information on Graph Traversal. More specifically,
support for searching a graph on a multivalued field.
Searching on the Internet, I found a question exactly the same as mine,
with an answer saying that what I need is not implemented yet:
http://lucene.472066.n3.nabble.com/Using-multi-valu