Yes this is expected. On startup old console logs and gc logs are
moved into the archived folder by default. This can be disabled by
setting SOLR_LOG_PRESTART_ROTATION=false as an environment variable
(search for its usage in bin/solr) but it will also disable all log
rotation.
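For example, a minimal sketch, assuming the include file lives at the usual
location for a service install (the path varies by setup):

  # in /etc/default/solr.in.sh, or exported in the shell before running bin/solr:
  # skips moving old console/GC logs into the archived folder on startup,
  # but as noted above it also disables the rest of the pre-start log rotation
  SOLR_LOG_PRESTART_ROTATION=false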
On Wed, May 3, 2017
Hi Erik,
about 1>
I have no core.properties at all, just a clean new installation.
- 5 x Zookeeper on 5 different servers
- 5 x Solr 6.5.1 on 5 different servers
- uploaded a configset with "bin/solr zk upconfig ..." (rough example below)
- started the first Solr node on port 8983 of the first server
- started the second Solr node
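The upconfig step was along these lines (just a sketch; the ZooKeeper hosts,
configset name and path are placeholders, not the exact values used):

  bin/solr zk upconfig -z zk1:2181,zk2:2181,zk3:2181 -n myconf -d /path/to/configset/conf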
Hi Shalin,
sounds like an all-or-nothing method :-)
How about a short check whether an instance is still running
and using that log file before moving it to the archived folder?
Regards
Bernd
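Something like this rough sketch of the idea (not the actual bin/solr code;
the pid file name and variable names are assumptions):

  # only archive console/GC logs if no Solr instance is still running on this port
  PID_FILE="$SOLR_PID_DIR/solr-$SOLR_PORT.pid"
  if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
    echo "Solr still running on port $SOLR_PORT, leaving its logs alone"
  else
    mkdir -p "$SOLR_LOGS_DIR/archived"
    mv "$SOLR_LOGS_DIR"/solr-*-console.log "$SOLR_LOGS_DIR/archived/" 2>/dev/null
  fi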
On 04.05.2017 at 09:07, Shalin Shekhar Mangar wrote:
> Yes this is expected. On startup old console logs and gc logs are
Hi list,
next problem with SolrCloud.
Situation:
- 5 x Zookeeper, fresh and clean, on 5 servers
- 5 x Solr 6.5.1, fresh and clean, on 5 servers
- start of the Zookeepers
- upload of the configset with Solr to the Zookeepers
- start of only one Solr instance, port 8983, on each server
- with the Solr Admin GUI, check that all Sol
I'm not a fan of auto-archiving myself and we definitely shouldn't be
doing it before checking if an instance is running. Can you please
open an issue?
On Thu, May 4, 2017 at 12:52 PM, Bernd Fehling
wrote:
> Hi Shalin,
>
> sounds like an all-or-nothing method :-)
>
> How about a short check if an in
Hi Satya,
In order to have a more complete picture of your production environment (host, JVM,
ZK, Solr metrics), I would suggest using one of the monitoring solutions.
One such solution is Sematext's SPM: http://sematext.com/spm/.
It is much easier if you are up for a SaaS setup, but we also provide on
premise i
After many, many tests it is "time to say goodbye" to SolrCloud and
I believe it is not working and not useful at all. :-(
I reduced to only 3 servers (Solr and Zookeeper) and tried to
_only_ create a simple single collection, but even this fails.
bin/solr create -c base -d
/home/solr/solr/solr/serv
Hi,
I have a field like this:
so I can do fast in-place atomic updates
However if I do e.g.
curl -H 'Content-Type: application/json'
'http://localhost:8983/solr/collection/update?commit=true'
--data-binary '
[{
"id":"my_id",
"popularity":{"set":null}
}]'
then I'd expect the popularity f
Hi Dan,
Remove does not make sense when it comes to in-place updates of
docValues - it has to have some value, so the only thing you can do is
introduce some int value to represent null.
HTH,
Emir
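To illustrate the workaround (the field definition and the -1 sentinel are
assumptions for the example, not Dan's actual schema; in-place updates do
require a single-valued, non-indexed, non-stored docValues field, and the
type name depends on your schema):

  <field name="popularity" type="int" indexed="false" stored="false"
         docValues="true" multiValued="false"/>

and then write the sentinel with the same kind of atomic update:

  curl -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/collection/update?commit=true' \
    --data-binary '
  [{
    "id":"my_id",
    "popularity":{"set":-1}
  }]'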
On 04.05.2017 15:40, Dan . wrote:
Hi,
I have a field like this:
so I can do a fast in-place atomi
On 5/4/2017 7:40 AM, Dan . wrote:
> I have a field like this:
>
>
> docValues="true" multiValued="false"/>
>
> so I can do a fast in-place atomic updates
>
> However if I do e.g.
>
> curl -H 'Content-Type: application/json'
> 'http://localhost:8983/solr/collection/update?commit=true'
> --data-bin
Ok, we decided not to implement PositionLengthAttribute for now because either
it is badly applied (how could one even misapply that attribute?), or Solr's
QueryBuilder has a weird way of dealing with it, or... well.
Thanks,
Markus
-Original message-
> From:Markus Jelsma
> Sent: Monday
I've pretty much ruled out system/hardware issues - the AWS instance has
been rebooted, and indexing to a core on a new and empty disk/file system
fails in the same way with a CorruptIndexException.
I can generally get indexing to complete by significantly dialing down the
number of indexer scri
I suspect that there is something not quite right about how the /export
handler is configured. Straight out of the box in Solr 6.4.2, /export will
be automatically configured. Are you using a Solr instance that has been
upgraded in the past and doesn't have standard 6.4.2 configs?
To really do
Hi Shawn,
Thanks for the suggestion.
I gave that a try but unfortunately it didn't work.
Delete would somehow be really useful; it seems wasteful to have e.g. -1
representing null.
Cheers,
Dan
On 4 May 2017 at 15:30, Shawn Heisey wrote:
> On 5/4/2017 7:40 AM, Dan . wrote:
> > I have a field lik
Hi Emir,
Yes, I thought of representing null as -1, but this makes the index
unnecessarily large, particularly if we have to default all docs to this
value.
Cheers,
Dan
On 4 May 2017 at 15:16, Emir Arnautovic
wrote:
> Hi Dan,
>
> Remove does not make sense when it comes to in-place updates of
Hi Joel,
I think that might be one of the reasons.
This is what I have for the /export handler in my solrconfig.xml
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>
This is the error message that I get when I use the /export handler.
java.io.IOException: java.util.concurrent.ExecutionException:
java.io.IOExcept
Yeah, the newest configurations are in ImplicitPlugins.json. So in the
standard release there is now nothing about the /export handler in the
solrconfig.
Joel Bernstein
http://joelsolr.blogspot.com/
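If in doubt, one way to see what is actually in effect (collection name is a
placeholder) is to dump the live config through the Config API and search the
output for "/export":

  curl "http://localhost:8983/solr/collection/config"

The implicitly registered handlers should show up there alongside anything
declared in solrconfig.xml.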
On Thu, May 4, 2017 at 11:38 AM, Zheng Lin Edwin Yeo
wrote:
> Hi Joel,
>
> I think that might be
Simon
After hearing about the weird time issue in EC2, I am going to ask if you have
a real server handy for testing. No, I have no hard facts, this is just a
suggestion.
And I have no beef with AWS, they have served me really well for other servers.
Cheers -- Rick
On May 4, 2017 10:49:25 AM
Hi Joel,
For the join queries, is it true that if we use q=*:* for the query for one
of the joins, there will not be any results returned?
Currently I find this is the case if I just put q=*:*.
Regards,
Edwin
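For reference, the shape of join being discussed, as a rough sketch with
made-up collection and field names (both inner streams going through /export):

  hashJoin(
    search(people, q="*:*", qt="/export", fl="personId,name", sort="personId asc"),
    hashed=search(pets, q="*:*", qt="/export", fl="ownerId,petName", sort="ownerId asc"),
    on="personId=ownerId"
  )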
On 4 May 2017 at 23:38, Zheng Lin Edwin Yeo wrote:
> Hi Joel,
>
> I think that might
No, *:* will simply return all the results from one of the queries. It
should still join properly. If you are using the /select handler joins will
not work properly.
This example worked properly for me:
hashJoin(parallel(collection2, j
workers=3,
I'm trying to run this streaming expression
search(data,qt="/export",q="*:*",fl="id",sort="id asc")
and I'm hitting this exception:
2017-05-04 17:24:05.156 ERROR (qtp1937348256-378) [c:data s:shard7
r:core_node38 x:data_shard7_replica1] o.a.s.c.s.i.s.ExceptionStream
java.io.IOException: java.uti
We have been having problems with different collections on different SolrCloud
clusters, all seeming to be related to the write.lock file with stack traces
similar to the following. Are there any suggestions as to what might be the
cause and what the solution might be? Thanks
org.apache.lucene.store
You need to look at all of your core.properties files and see if any
of them point to the same data directory.
Second: if you issue a "kill -9" you can leave write locks lingering.
Best,
Erick
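A quick way to check the first point, assuming the default layout of one
core.properties per core directory under the Solr home (adjust the path for
your install):

  # list explicit dataDir overrides; two cores showing the same path share an index
  grep -H '^dataDir' /var/solr/data/*/core.properties

  # show only duplicated values
  grep -h '^dataDir' /var/solr/data/*/core.properties | sort | uniq -d

Cores without an explicit dataDir normally default to a data/ directory inside
their own instance dir, so explicit overrides are the usual suspects.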
On Thu, May 4, 2017 at 11:00 AM, Oakley, Craig (NIH/NLM/NCBI) [C]
wrote:
> We have been having problem
Did this error come from a standard 6.5.1 build, or from a build that was
upgraded to 6.5.1 with older config files?
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 4, 2017 at 1:57 PM, Yago Riveiro wrote:
> I'm trying to run this streaming expression
>
> search(data,qt="/export",q="*:*
Older build that was upgraded from 6.3.0 to 6.5.1.
The configs used in 6.3.0 are the same as those used in 6.5.1, without changes.
Should I update my configs?
--
/Yago Riveiro
On 4 May 2017, 21:45 +0100, Joel Bernstein , wrote:
> Did this error come from a standard 6.5.1 build, or from a build that
Hi,
Can someone please tell me what I am missing in this case? I have Solr
6.3.0 and have enabled HTTP authentication; the configuration has been
uploaded to Zookeeper. But I do see the error below in the logs sometimes. Are
the nodes not able to communicate because of this error? I am not
seeing any functional
Hi Joel,
Yes, the /export works after I remove the /export handler from
solrconfig.xml. Thanks for the advice.
For *:*, there will be results returned when using /export.
But if one of the queries is *:*, does this mean the entire result set will
contain all the records from the query that has *:*?
Ok, I suspect the changes in the config happened with this ticket:
https://issues.apache.org/jira/browse/SOLR-9721
So I think you just need to take the new ImplicitPlugins.json to get the
latest configs. Also check to make sure the /export handler is not
referenced in the solrconfig.
SOLR-9721 a
Hi,
I'm facing an issue when I'm querying Solr.
My query is "xiomi Mi 5 -white [64GB/ 3GB]"
while my search field definition is
My generated query is
+(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nam