Hi Guys,
I am running SolrCloud 4.10 with 4 Solr servers and a 5-node ZooKeeper setup.
Solr servers:
solr01, solr02, solr03, solr04
I have around 20 collections in SolrCloud, and there are 4 shards for each
collection. For each shard I have 4 replicas, sitting on each Solr server,
with one
Hi everyone,
I'm reading the docs on the query elevation component and some questions
came up:
Can I specify a field that the elevate component will look at, such as only
looking at the title field? My search handler (using eDisMax) is searching
across multiple fields, but if I only want the elev
Yes, currently there is the 4 sort field limit. A custom handler could be
built that allows for unlimited sorts or you could provide a patch to the
export handler.
I think though that you'll find that performance drops off quite a bit as
the number of sort fields increases. This is because each fi
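(For context, a sketch of the kind of request that hits the limit; the field
names here are placeholders, not from the thread. An /export request such as
/export?q=*:*&fl=f1,f2,f3,f4&sort=f1 asc,f2 asc,f3 desc,f4 asc
works, while adding a fifth field to the sort parameter is rejected by the
handler.)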
OK, I can see the '/configs' directory in the Solr UI, and under that I can
see the configuration for my 'test' collection. But this all seems to be
disjointed information; the doc is definitely not clear.
And what does that tree represent, anyway? Where is the information for
that? I would ideally put a link to that document.
Hi Erick,
I did read that paragraph. It says: First, if you don't provide the -d or
-n options, then the default configuration
($SOLR_HOME/server/solr/configsets/data_driven_schema_configs/conf) is
uploaded to ZooKeeper using the same name as the collection. For example,
the following command will
From the doc you referenced where it outlines the parameters for the
create command:
-d : The configuration directory. This defaults to
data_driven_schema_configs.
You should also review the linked section about what configuration
directories are all about:
https://cwiki.apache.org/confluence/d
Hi,
I have a 2-node Solr cluster with a 1-node standalone ZooKeeper.
I tried the following, from the doc below, to create a collection:
bin/solr create -c contacts.
According to the doc, this should upload the config data into
/configs/contacts in ZooKeeper, but I can't find any configs directory under
ZooKeeper.
http
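(For what it's worth, one way to verify what actually landed in ZooKeeper,
sketched here with a placeholder ZooKeeper address, is zkcli's list command:
server/scripts/cloud-scripts/zkcli.sh -cmd list -zkhost localhost:2181
which dumps the znode tree and should show /configs/contacts if the upload
happened.)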
Can't really deal with the security issues, but...
The resulting indexes created by MRIT are just plain vanilla
Solr/Lucene indexes. All the --go-live step does is issue a
MERGEINDEXES command from the core where they live to the directory
MRIT leaves them in; you might get some joy there, see:
ht
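(A hedged sketch of that core admin call, with the core name and index path
as placeholders, not from the thread:
http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=target&indexDir=/path/to/mrit/output/index
where indexDir points at the index directory MRIT wrote to.)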
I posted this also to another thread, but I'll cross post to this ticket:
Take a look at org.apache.solr.client.solrj.io.sql.StatementImpl.constructStream()
This uses a SolrStream to connect to the /sql handler. You can use the same
approach to send a request to the /stream handler just by chang
Hmm, this is odd. You should not have had to restart ZooKeeper; are
you 100% sure
you looked in the same place you downloaded to?
Of course you would have to reload the collection to get them to "take".
BTW, in Solr 5.4+ there's upconfig/downconfig from the bin/solr
script; it was put
there to tr
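(For reference, a sketch of the bin/solr form; exact flags vary by version,
and the config name and paths here are placeholders:
bin/solr zk downconfig -z localhost:2181 -n myconfig -d /tmp/myconfig
bin/solr zk upconfig -z localhost:2181 -n myconfig -d /tmp/myconfig)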
Finally got it to straighten out.
So I have two collections, my test collection and my production collection.
I "fat fingered" the test collection and both collections were
complaining about the missing "id" field.
I downloaded the config from both collections and it was showing the id
field
@John
I am using a managed schema with zookeeper/solrcloud.
On 07/26/2016 04:21 PM, John Bickerstaff wrote:
@Michael - somewhere there should be a "conf" directory for your SOLR
instance. For my Dev efforts, I moved it to a different directory and I
forget where it was, originally -- but if y
ok...
I downloaded the config for both of my collections and the downloaded
managed-schema file shows "id" as defined? But the online view in the UI
shows it as not defined?
I've tried re-upping the config and nothing changes.
-Mike
On 07/26/2016 04:11 PM, John Bickerstaff wrote:
@Michae
@Michael - there are GUIs available for ZooKeeper:
http://stackoverflow.com/questions/24551835/available-gui-for-zookeeper
I used the Eclipse plugin before and while it is a bit clunky it gets the job
done.
Alexandre Drouin
-Original Message-
From: John Bickerstaff [mailto:j...@johnbic
@Michael - somewhere there should be a "conf" directory for your SOLR
instance. For my Dev efforts, I moved it to a different directory and I
forget where it was, originally -- but if you search for solrconfig.xml or
schema.xml, you should find it.
It could be on your servers (or on only one of t
and further on in the file...
<uniqueKey>id</uniqueKey>
On Tue, Jul 26, 2016 at 2:17 PM, John Bickerstaff
wrote:
> I don't see a managed schema file. As far as I understand it, id is set
> as a "uniqueKey" in the schema.xml file...
>
> On Tue, Jul 26, 2016 at 2:11 PM, Michael Joyner
> wrote:
>
>> o
I don't see a managed schema file. As far as I understand it, id is set as
a "uniqueKey" in the schema.xml file...
On Tue, Jul 26, 2016 at 2:11 PM, Michael Joyner wrote:
> ok, I think I need to do a manual edit on the managed-schema file but I
> get "NoNode" for /managed-schema when trying to u
@Michael - if you're on Linux and decide to take Alexandre's advice, I can
possibly save you some time. I wrestled with getting the data in and out
of zookeeper a while ago...
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd upconfig -confdir
/home/john/conf/ -confname collectionName -z
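(The matching download is the same command with downconfig; the ZooKeeper
address below is a placeholder:
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd downconfig
-confdir /home/john/conf/ -confname collectionName -z <zkhost:port>)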
ok, I think I need to do a manual edit on the managed-schema file but I
get "NoNode" for /managed-schema when trying to use the zkcli.sh file?
How can I get to this file and edit it?
On 07/26/2016 03:05 PM, Alexandre Drouin wrote:
Hello,
You may have a uniqueKey that points to a field that
Take a look at StatementImpl.constructStream()
This uses a SolrStream to connect to the /sql handler. You can use the same
approach to send a request to the /stream handler just by changing the
parameters. Then you can open and read the SolrStream.
We don't yet have a load balancing SolrStream.
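For anyone looking for a concrete starting point, here is a minimal,
untested sketch of that approach in SolrJ. It assumes a 6.1-era SolrStream
that accepts a Map of params (later versions take SolrParams instead), a
collection named "mycollection", and a node at localhost:8983; the
expression itself is compiled on the server, not the client:

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;

public class StreamPost {
  public static void main(String[] args) throws Exception {
    Map<String, String> params = new HashMap<>();
    // The raw expression text is sent as the "expr" parameter.
    params.put("expr",
        "search(mycollection, q=\"*:*\", fl=\"id\", sort=\"id asc\")");
    // Route the request to /stream instead of the default handler.
    params.put("qt", "/stream");

    SolrStream stream =
        new SolrStream("http://localhost:8983/solr/mycollection", params);
    try {
      stream.open();  // issues the request and opens the tuple stream
      for (Tuple t = stream.read(); !t.EOF; t = stream.read()) {
        System.out.println(t.getString("id"));  // the EOF tuple ends the loop
      }
    } finally {
      stream.close();
    }
  }
}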
Does anyone have an example of just POST'ing a streaming expression to
the /stream handler from SolrJ client code? i.e. I don't want to parse
and execute the streaming expression on the client side; rather, I
want to post the expression to the server side.
Currently, my client code is a big copy a
Other than deleting the collection, I think you'll have to edit the
managed-schema file manually.
Since you are using SolrCloud you will need to use Solr's zkcli
(https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities)
utility to download and upload the file from ZooKeeper.
Al
Same error via the UI:
Can't load schema managed-schema: unknown field 'id'
On 07/26/2016 03:05 PM, Alexandre Drouin wrote:
Hello,
You may have a uniqueKey that points to a field that does not exist anymore. You can try
adding an "id" field using Solr's UI or the schema API since you are usi
The Schema API is failing with the unknown field "id" error.
Where in the UI could I try adding this field back?
On 07/26/2016 03:05 PM, Alexandre Drouin wrote:
Hello,
You may have a uniqueKey that points to a field that does not exist anymore. You can try
adding an "id" field using S
Hello,
You may have a uniqueKey that points to a field that does not exist anymore.
You can try adding an "id" field using Solr's UI or the schema API since you
are using the managed-schema.
Alexandre Drouin
-Original Message-
From: Michael Joyner [mailto:mich...@newsrx.com]
Sent: Ju
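(A hedged sketch of the Schema API call being suggested, which, per the
follow-ups elsewhere in the thread, fails while the schema itself cannot
load; the collection name, field type, and attributes are assumptions:

curl -X POST -H 'Content-type:application/json' \
  http://localhost:8983/solr/test/schema -d '{
    "add-field": {"name":"id", "type":"string", "indexed":true, "stored":true}
  }'

This adds the field to the managed schema; the collection may still need a
reload afterwards.)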
So I found the limit in the Ref Doc p. 394, under the /export request
handler:
"Up to four sort fields can be specified per request, with the 'asc' or
'desc' properties"
Yikes, I'm in trouble. Does anyone know if this can be circumvented? Can I
write a custom handler that could handle up to 20? Oh
Hi,
I am using the Collection API to reload a core and I was wondering if there is
a way to know if the core was reloaded without errors.
For my testing, I added a known error (an invalid field in a request handler)
in my configuration and I use the url
"solr/admin/collections?action=RELOAD&
Hi, I'm trying to group search results by fields using the streaming API. I
don't see a sort limit mentioned in the Solr Ref Doc, but when I use 4
fields I get results and when I use 5 or more I get an exception:
java.util.concurrent.ExecutionException: java.io.IOException:
JSONTupleStream: expect
Help!
What is the best way to recover from:
Can't load schema managed-schema: unknown field 'id'
I was managing the schema on a test collection and fat fingered it, but now
I find out the schema ops seem to be altering all collections on the core?
SolrCloud 5.5.1
-Mike
Hi,
I have an issue that I saw reported as a bug that was never really resolved:
https://issues.apache.org/jira/browse/SOLR-8335
I was working with Solr 6.1 on HDFS, but I see it also exists in Solr 5.x.
I started my Solr and, because of an out of memory exception, it was killed.
Now when I start it again, I have er
Hi,
I was trying to use the MapReduce Indexer Tool from Cloudera to index my
data in a Hive table using Solr:
hadoop jar /path/to/lib/solr/contrib/mr/search-mr-*-job.jar
org.apache.solr.hadoop.MapReduceIndexerTool -Djute.maxbuffer=
--morphline-file /path/to/morphlines.conf --output-dir
hdfs:/
And, I might add, you should look through your old logs
and see how long it takes to open a searcher. Let's
say Shawn's lower bound is what you see, i.e.
it takes a minute each to execute all the autowarming
in filterCache and queryResultCache... So your current
latency is _at least_ 2 minutes be
Ok, makes sense now, thanks Joel. We should probably add some earlier
error checking vs. letting the code get all the way into the
HashQParser.
So this raises a separate question that I haven't been able to figure
out, namely, do we have an example of just POST'ing the expression to
the /stream ha
The difference would be if you are compiling and running the expression in
a java class or sending it to the /stream handler to be compiled.
If you're compiling it and running it locally you could get this error
because the StreamContext would not have the numWorkers variable set.
The /stream han
it's from a unit test, but not sure why that matters? If I wrap the
expression in a parallel expression with explicit workers=1, then it
works
On Thu, Jul 21, 2016 at 11:13 AM, Joel Bernstein wrote:
> Are you getting this error from a test case you've setup or from a manual
> call to the /stream
On 7/26/16 5:46 AM, Shawn Heisey wrote:
On 7/22/2016 10:15 AM, Rallavagu wrote:
As Erick indicated, these settings are incompatible with Near Real Time
updates.
With those settings, every time you commit and create a new searcher,
Solr will execute up to 1000 queries (potentia
Ah, I see - thanks for explaining that you’re not operating on tokens like that.
Given that, I think the best place to implement this is as a QParserPlugin -
that gives you the query string and allows you to return a standard Lucene
Query.
Erik
> On Jul 25, 2016, at 9:44 AM, sara ha
facet.query is really just a short-cut for numFound using that query standalone.
How many facet.query’s are you issuing? And what is the QTime for all those
queries when individually made like this:
/select?q=&rows=0
If one of your queries is “slow” - you mention wildcards and complex phra
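(To make the equivalence concrete, a sketch with a made-up field name: the
count reported for facet.query=title:solr on
/select?q=*:*&rows=0&facet=true&facet.query=title:solr
should match the numFound of the standalone query
/select?q=title:solr&rows=0
so each facet.query costs roughly one extra query execution.)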
Hello,
Thanks for your answer.
Yes, it seems a little tricky to me.
Best regards,
Elisabeth
2016-07-25 18:06 GMT+02:00 Erick Erickson :
> "Load" is a little tricky here, it means "load the core and open a
> searcher.
> The core _descriptor_ which is the internal structure of
> core.properties
I am experiencing very slow query times (like 5 minutes) when using
multiple facet.query parameters on large result sets.
I don't know how to optimize this, since it is not completely clear to me
how the facet.query works.
Currently my facet queries use tokenized text fields and contain wildcards
or even use c
On 7/22/2016 10:15 AM, Rallavagu wrote:
> <filterCache
>     size="5000"
>     initialSize="5000"
>     autowarmCount="500"/>
>
> <queryResultCache
>     size="2"
>     initialSize="2"
>     autowarmCount="500"/>
As Erick in
Hello.
I have set up Solr 6.1.0 to use SSL (on Windows) and to do client
authentication based on the client certificate.
When I use the same certificate for both the server and the client
authentication, everything works OK:
How is it parsed? You can check with debugQuery=true.
On Tue, Jul 26, 2016 at 10:53 AM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> I'm using Solr 6.1.0
>
> Would like to find out, can we use the Block Join Parent Query Parser to
> filter the parents when I search for a field in the child document?
>
> F
Hi ,
I am using Solr version 4.7.2.
I am using group.ngroups=true in my Solr queries and recently shifted to
the distributed architecture of Solr, where I add the shards param in my
Solr queries as follows:
http://localhost:8983/solr/foo/select?wt=json&rows=2&group=true&group.field=dcterms_source&group.n
Hi,
I'm using Solr 6.1.0
Would like to find out, can we use the Block Join Parent Query Parser to
filter the parents when I search for a field in the child document?
For example, when I just filter by child query like this, I get 8 results.
q={!parent which="content_type:parentDocument"}+range
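(A hedged sketch of one way to add a parent-side condition, with made-up
field names: since {!parent} returns parent documents, a plain filter query
on a parent field can be applied on top of the child query, e.g.
q={!parent which="content_type:parentDocument"}rangeField:[0 TO 10]&fq=parentField:value
though whether this fits the use case depends on the rest of the thread.)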
After a bit of testing I got it working. Basically, all the configuration
for log4j is by default under server/resources/log4j.properties.
By default, log4j should be able to find log4j.xml if you delete
log4j.properties; I tried it and that's not the case.
I figured out this is due to the fact that s
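(For reference, log4j 1.x can also be pointed at an explicit XML config via
its standard system property, with the path below as a placeholder:
-Dlog4j.configuration=file:/path/to/log4j.xml
passed to the JVM that launches Solr.)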