Close, but not quite there yet. The rules say to use
systemctl start (or stop, or status) solr.service
That ".service" part ought to be there. I suspect that if we omit it
then we may be scolded on-screen and lose some grade points, as in your
error report below. Best to ensure that Solr is started with
systemctl start solr.service.
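For reference, a minimal systemd unit along these lines is what makes the solr.service name available. The paths, user, and fork behaviour here are assumptions for a typical tarball install, not taken from this thread:

```ini
# /etc/systemd/system/solr.service -- sketch only; adjust paths to your install
[Unit]
Description=Apache Solr
After=network.target

[Service]
Type=forking
User=solr
ExecStart=/opt/solr/bin/solr start
ExecStop=/opt/solr/bin/solr stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the unit, run systemctl daemon-reload once, then the systemctl start/stop/status solr.service commands discussed above.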
Thanks,
Joe D.
On 15/10/2020 16:01, Ryan W wrote:
I have been starting solr like so...
service solr start
On Thu, Oct 15, 2020 at 10:31 AM Joe Doupnik wrote:
Alex has it right. In my environment I created user "solr" in group
"users". Then I ensured that "solr:users" owns all of Solr's files. In
addition, I do Solr start/stop with an /etc/init.d script (the Solr
distribution has a basic one which we can embellish) in which there is
a control line
There is an effective alternative approach to placing
authentication within Solr. It is to use the web server (say Apache) as
a smart proxy to Solr and in so doing also apply access restrictions of
various kinds. Thus Solr remains intact, no addition needed for
authentication, and access control is handled entirely in the web server.
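A sketch of that proxy arrangement in Apache (mod_proxy plus basic auth); the paths, port, and password-file location are assumptions, not taken from the thread:

```apache
# Sketch: front Solr with Apache, adding authentication at the proxy.
# Requires mod_proxy, mod_proxy_http, and mod_auth_basic to be enabled.
<Location "/solr">
    AuthType Basic
    AuthName "Solr"
    AuthUserFile /etc/apache2/solr.htpasswd
    Require valid-user
    ProxyPass        "http://127.0.0.1:8983/solr"
    ProxyPassReverse "http://127.0.0.1:8983/solr"
</Location>
```

With this in place, Solr itself can be bound to localhost only, so all access goes through the authenticated proxy.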
Now, about my actual question... thanks all for your valuable theories.
Sent from my iPhone
On Sep 1, 2020, at 2:01 PM, Joe Doupnik wrote:
As I have not received the follow-on message to mine, I will cut and
paste it below. My comments on that: the numbers are the numbers.
Some time ago I faced a roughly similar challenge. After many
trials and tests I ended up creating my own programs to accomplish the
tasks of fetching files, selecting which are allowed to be indexed, and
feeding them into Solr (POST style). This work is open source, found on
https://netlab
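That "POST style" feeding can be done either with the helper Solr ships or with a raw HTTP call to the extracting handler; the core name and filename below are placeholders, not from the thread:

```shell
# Sketch only: feed fetched files into Solr, POST style.
#   bin/post -c docs /path/to/files/          # helper shipped with Solr
# or the raw HTTP form via the extracting handler (Tika):
#   curl 'http://localhost:8983/solr/docs/update/extract?literal.id=report1&commit=true' \
#        -F 'myfile=@report1.pdf'
```

Doing the fetch/select steps in one's own program, as described above, keeps the decision of what gets indexed out of Solr entirely.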
More properly, it would be best to fix Tika and thus not push extra
complexity upon many, many users. Error handling is one thing; crashes,
though, ought to be designed out.
Thanks,
Joe D.
On 25/08/2020 10:54, Charlie Hull wrote:
On 25/08/2020 06:04, Srinivas Kashyap wrote:
Hi Alexand
On 22/08/2020 22:08, maciejpreg...@tutanota.com.INVALID wrote:
Good morning.
When I uncomment any of the commands in solr.in.sh, Solr doesn't run.
What do I have to do to fix the problem?
Best regards,
Maciej Pregiel
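One common cause, offered as a guess: solr.in.sh is sourced as a shell script, so an uncommented line must remain valid shell syntax (no spaces around "=", values quoted). A quick way to check an edited file, shown here on a stand-in fragment:

```shell
# Write a stand-in fragment with the kind of settings solr.in.sh carries
# (the variable names are real solr.in.sh settings; the values are examples).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
SOLR_HEAP="512m"
SOLR_PORT="8983"
EOF
# bash -n parses without executing; a syntax error here would also break
# Solr's startup script when it sources the real file.
status=ok
bash -n "$cfg" || status=bad
echo "syntax check: $status"
rm -f "$cfg"
```

If the check fails on the real solr.in.sh, the error message points at the offending line.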
One day I will learn to type. In the meanwhile the command, as
root, is chown -R solr:users solr. That means creating that username if
it is not present.
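The steps above can be sketched as follows, with an unprivileged demonstration of verifying that recursive ownership took effect (the real commands need root; the scratch-directory part runs anywhere):

```shell
# As root, on the real install (username and group as described above):
#   useradd -r -g users solr        # only if the user is absent
#   chown -R solr:users solr
#
# Unprivileged demonstration of checking that ownership is consistent,
# using a scratch directory and the current user:
dir=$(mktemp -d)
mkdir -p "$dir/data" && touch "$dir/data/segments_1"
chown -R "$(id -un)" "$dir"
# Count files NOT owned by the expected user; zero means ownership is consistent.
stray=$(find "$dir" ! -user "$(id -un)" | wc -l | tr -d ' ')
echo "files with wrong owner: $stray"
rm -rf "$dir"
```

The same find test, run as root against the Solr tree with "! -user solr", confirms nothing was missed by the chown.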
Thanks,
Joe D.
On 30/05/2019 20:12, Joe Doupnik wrote:
On 30/05/2019 20:04, Bernard T. Higonnet wrote:
Hello,
I have installed solr from ports under FreeBSD 12.0 and I am trying to
run solr as described in the Solr Quick Start tutorial.
I keep getting permission errors:
/usr/local/solr/example/cloud/node2/solr/../logs could not be
created. Exiting.
On 27/05/2019 18:38, Joe Doupnik wrote:
An interesting note on the memory returning issue for the G1
collector.
https://openjdk.java.net/jeps/346
Entitled "JEP 346: Promptly Return Unused Committed Memory from G1"
with a summary saying "Enhance the G1 garbage collector to
automatically return Java heap memory to the operating system when
idle." Please read the full web page to have a rounded view of that
discussion.
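For JDKs that include JEP 346 (JDK 12 and later), the periodic collection can be switched on from solr.in.sh; the interval value here is an example, not a recommendation:

```shell
# solr.in.sh fragment -- requires a JDK with JEP 346 (12 or later).
# G1PeriodicGCInterval: milliseconds between periodic GC checks that may
# return committed-but-unused heap to the OS; 0 (the default) disables it.
GC_TUNE="-XX:+UseG1GC -XX:G1PeriodicGCInterval=300000"
```

Note this only helps an idle or lightly loaded JVM hand memory back; it does not reduce peak usage.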
Thanks,
Joe D.
On 27/05/2019 18:17, Joe Doupnik wrote:
My comments are inserted in-line this time. Thanks for the
amplifications, Shawn.
On 27/05/2019 17:39, Shawn Heisey wrote:
On 5/27/2019 9:49 AM, Joe Doupnik wrote:
A few more numbers to contemplate. An experiment here, adding 80
PDF and PPTX files into an empty index.
Solr v8.0
Having a few sets of them for different operating situations, and
the customer chooses appropriately.
Thanks,
Joe D.
On 27/05/2019 11:05, Joe Doupnik wrote:
You are certainly correct about using external load balancers when
appropriate. However, a basic problem with servers, that of
replicas.
Regards
Bernd
On 27.05.19 at 10:33, Joe Doupnik wrote:
While on the topic of resource consumption and locks etc, there
is one other aspect to which Solr has been vulnerable. It is failing
to fend off too many requests at one time. The standard approach is,
of course, named back pressure.
On 27/05/2019 08:52, Joe Doupnik wrote:
Generalizations tend to fail when confronted with conflicting
evidence. The simple evidence is asking how much real memory the Solr
owned process has been allocated (top, or ps aux or similar) and that
yields two very different values (the ~1.6GB of Solr v8.0
because perfection is not possible.
Thanks,
Joe D.
On 26/05/2019 20:30, Shawn Heisey wrote:
On 5/26/2019 12:52 PM, Joe Doupnik wrote:
I do queries while indexing, have done so for a long time,
without difficulty nor memory usage spikes from dual use. The system
has been designed to
This is also a very risky memory strategy. What happens if you index
and query at the same time? Perhaps it is better to provide as much
memory as the concurrent operations need. This includes JVM memory but
also the disk caches.
On 26.05.2019 at 20:38, Joe Doupnik wrote:
On 26/05/2019 19:38, Jörn Franke wrote:
Different garbage collector configuration? It does not mean that Solr uses more
memory if it is occupied - it could also mean that the JVM just kept it
reserved for future memory needs.
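The two views can be told apart from the command line. The snippet below queries its own shell process so it runs anywhere; a real check would substitute Solr's pid:

```shell
# RSS is what the OS has actually granted the process (the "top"/"ps" view).
pid=$$
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "pid $pid resident set: ${rss_kb} kB"
# The JVM-side view comes from jstat (ships with the JDK), e.g.:
#   jstat -gc <solr-pid>   # used heap is typically well below RSS; the gap
#                          # is memory the JVM keeps reserved for future needs
```

Comparing the two numbers for the Solr process shows whether memory is actually in use by the heap or merely held by the JVM.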
On 25.05.2019 at 17:40, Joe Doupnik wrote:
Comparing
On 26/05/2019 19:15, Joe Doupnik wrote:
On 26/05/2019 19:08, Shawn Heisey wrote:
On 5/25/2019 9:40 AM, Joe Doupnik wrote:
Comparing memory consumption (real, not virtual) of quiescent Solr
v8.0 and prior with Solr v8.1.0 reveals the older versions use about
1.6GB on my systems but v8.1.0 uses 4.5 to 5+GB. Systems used are SUSE
Linux, with Oracle JDK v1.8 and OpenJDK v10. This is a major memory
consumption issue.
On 22/04/2018 19:26, Joe Doupnik wrote:
On 22/04/2018 19:04, Nicolas Paris wrote:
Hello
I wonder if there is a plain text query syntax to say:
give me all document that match:
wonderful pizza NOT peperoni
all those in a 5 distance word bag
then
pizza are wonderful -> would match
I made a wonderful pasta and pizza -> would match
Pep
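For what it's worth, that request can be expressed in the standard Lucene query syntax as a quoted phrase with a slop of 5 plus a negated term. The field name "text" is an assumption about the schema, and the core name in the comment is a placeholder:

```shell
# Proximity query sketch: phrase terms within 5 positions of each other
# (reorderings allowed within the slop), excluding documents containing
# "peperoni" (spelling kept from the question).
q='text:"wonderful pizza"~5 -text:peperoni'
echo "q=$q"
# Sent to Solr it would look like (not executed here):
#   curl -G 'http://localhost:8983/solr/mycore/select' --data-urlencode "q=$q"
```

A slop of 5 is loose enough that "pizza are wonderful" matches, since phrase slop also pays for the terms appearing in reversed order.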
On Sat, Apr 21, 2018 at 10:55 AM, Joe Doupnik wrote:
A good find Erick, and one which brings into focus the real problem at
hand. That overload case would happen if there were an Optimise button or if
the curl equivalent command were issued, and is not a reason to avoid
either/both.
So, what co
it's almost certainly a horrible tradeoff. For more static indexes,
the "expert" API command is still available.
Best,
Erick
On Sat, Apr 21, 2018 at 5:08 AM, Joe Doupnik wrote:
In Solr v7.3.0 the ability to remove "deleted" docs from a core by use
of what until then was
that
brings down a Solr cluster that I think I agree with the decision to remove
such an inviting button.
Doug
On Sat, Apr 21, 2018 at 8:08 AM Joe Doupnik wrote:
-
Doug,
Thanks for that feedback. Here are my thoughts on the matter.
Removing deleted docs is often an irregular
In Solr v7.3.0 the ability to remove "deleted" docs from a core by
use of what until then was the Optimise button on the admin GUI has been
changed in an ungood way. That is, in the v7.3.0 Changes list, item
SOLR-7733 (quote: remove "optimize" from the UI, end quote). The result of
that is an
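With the button gone, the same operation is still reachable through the update API; the core name below is a placeholder:

```shell
# Sketch: trigger a merge-down ("optimize") against a core named "mycore".
#   curl 'http://localhost:8983/solr/mycore/update?optimize=true'
# or as an explicit JSON update command:
#   curl -H 'Content-Type: application/json' \
#        -d '{"optimize": {"maxSegments": 1}}' \
#        'http://localhost:8983/solr/mycore/update'
```

So removing the GUI button changes discoverability, not capability; the expert-level command remains for those who judge the tradeoff worthwhile.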