Solr query is very slow -- guidance would be much appreciated

2019-08-22 Thread Muthu Thenappan
Hi,

when I run "q=one" the QTime is around 200ms; however, when I run 2 or 3 words, 
e.g. "q=two words" or "q=apple orange berry", the run time increases sharply, 
to between 3s and 10s.

I have indexed around 10 million documents, each containing around 100 words. 
I am using the usual filter factories (lowercase/synonym/stop filter). I found 
that the search-result retrieval part is taking a long time, not the 
faceting/highlighting/sorting; see the timing output below.
"timing":{
  "time":3435.0,
  "prepare":{
"time":0.0,
"query":{
  "time":0.0},
"facet":{
  "time":0.0},
"facet_module":{
  "time":0.0},
"mlt":{
  "time":0.0},
"highlight":{
  "time":0.0},
"stats":{
  "time":0.0},
"expand":{
  "time":0.0},
"terms":{
  "time":0.0},
"spellcheck":{
  "time":0.0},
"debug":{
  "time":0.0}},
  "process":{
"time":3434.0,
"query":{
  "time":3316.0},
"facet":{
  "time":0.0},
"facet_module":{
  "time":0.0},
"mlt":{
  "time":0.0},
"highlight":{
  "time":116.0},
"stats":{
  "time":0.0},
"expand":{
  "time":0.0},
"terms":{
  "time":0.0},
"spellcheck":{
  "time":1.0},
"debug":{
  "time":0.0}

Caching is working well; however, I did not change any of the factory-default 
settings for caching.

I used jconsole to check GC time, which is around 0.23 sec.

I would like query time to fall below 1 sec.

However, I am not sure where else to look. Any guidance would be much appreciated. 
Thank you.

Kind Regards,
Muthu Thenappan


Re: Solr 7.6 JoinQueryParser with Multi Threaded Facet stops solr core with Exceptions

2019-08-22 Thread Mikhail Khludnev
This case is worth covering with a test. Besides that, if the join is
executed in multiple threads, the extra memory footprint might outweigh the
gain from employing threads. Does it have an impact on subsequent searches?

On Thu, Aug 22, 2019 at 8:42 AM harjagsbby wrote:

> A few details added.
>
> public void close() {
>   int count = refCount.decrementAndGet();
>   if (count > 0) return; // close is called often, and only actually
>                          // closes if nothing is using it.
>   if (count < 0) {
>     log.error("Too many close [count:{}] on {}. Please report this
> exception to solr-user@lucene.apache.org", count, this);
>     assert false : "Too many closes on SolrCore";
>     return;
>   }
>   log.info("{} CLOSING SolrCore {}", logid, this);
>
>   // stop reporting metrics
>   try {
>     coreMetricManager.close();
>   } catch (Throwable e) {
>     SolrException.log(log, e);
>     if (e instanceof Error) {
>       throw (Error) e;
>     }
>   }
> }
>
> The above code, with refCount, is from SolrCore.java; this atomic
> reference count goes negative when we do multi-threaded facets combined
> with joins.
>
> Can anyone explain the reference-counting implementation here? In what
> scenario can it go below 0? I can clearly see each thread of the
> multi-threaded facet trying to close the core.
>
> Every facet thread calls createWeight of JoinQParserPlugin, which
> attaches close hooks that get called from
> SolrRequestInfo.clearRequestInfo():
>
> if (fromRef != null) {
>   final RefCounted ref = fromRef;
>   info.addCloseHook(new Closeable() {
>     @Override
>     public void close() {
>       ref.decref();
>     }
>   });
> }
>
> info.addCloseHook(new Closeable() {
>   @Override
>   public void close() {
>     fromCore.close();
>   }
> });
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
Sincerely yours
Mikhail Khludnev


Re: Solr query is very slow -- guidance would be much appreciated

2019-08-22 Thread Erick Erickson
If you’re literally including the quotes, i.e. q=“one two”, then you’re doing 
phrase searches which are more complex and will take longer. q=field:one AND 
field:two is a straight boolean query. Also, what query parser are you using? 
If it’s edismax, then you’re searching across multiple fields. Try adding 
&debug=query and take a look at the parsed query...
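For example, assuming your search field is called "text" (a placeholder; adjust 
to your schema), compare the parsedquery shown in the debug section for these 
two requests:

   q="two words"&debug=query                (phrase query: terms must be adjacent)
   q=text:two AND text:words&debug=query    (boolean query: terms anywhere)

The parsedquery entry shows exactly which Lucene query was built and which 
fields it ran against.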

That said, these times are way longer than I would expect once the searcher is 
warmed. You can tell nothing from a single query executed after a new searcher 
is opened since the relevant parts of the index have to be read from disk. What 
happens with queries 2-N? (NOTE: don’t run the exact same query or you’ll hit 
the queryResultCache and get wonderful response times that are meaningless).

How much physical memory is on the machine and how much are you allocating to 
the JVM? Also, how much disk space does your index occupy and what kind (SSD or 
spinning disk)?

Bottom line: These kinds of queries should be way faster than that, even if 
they’re phrase queries.

Best,
Erick

> On Aug 22, 2019, at 3:30 AM, Muthu Thenappan wrote:



Facing time out issue in Solr

2019-08-22 Thread Sanjoy Ganguly
Hello,

Good evening!

I am facing an issue while trying to index 4 files; I am getting a "time out 
error" in the log.

I am using Solr 7.5, installed on a Linux server. We have a lot of business 
documents that we are able to index, except for the files listed below.

1. File 1
   Size: approx 340 MB
   Page count: approx 5800

The rest of the files have similar figures.

Just to clarify: these files open in Adobe Reader, and they do contain text.

All files are in PDF format.

Question: Is there any file size or page count restriction in Solr?

*As per business protocol I will not be able to attach the files.

Thanks.

Awaiting your response.

Regards,
Sanjoy Ganguly


8.2.0 After changing replica types, state.json is wrong and replication no longer takes place

2019-08-22 Thread Markus Jelsma
Hello,

There is a newly created 8.2.0 all-NRT cluster for which I replaced each NRT 
replica with a TLOG replica. Now the replicas no longer replicate when the 
leader receives data. The situation is odd: some shard replicas kept 
replicating until eight hours ago, another one (same collection, same node) 
until seven hours ago, and yet another until four hours ago!

I inspected state.json to see what might be wrong, and compared it with another 
fully working, but much older, 8.2.0 all-TLOG collection.

The faulty one still lists, probably from when it was created:
"nrtReplicas":"2",
"tlogReplicas":"0"
"pullReplicas":"0",
"replicationFactor":"2",

The working collection only has:
"replicationFactor":"1",

What could cause this new collection to start replicating when I delete the 
data directory, but later stop replicating at some seemingly random time, 
different for each shard?

Is there something I should change in state.json, and can it just be re-uploaded 
to ZK?
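
If re-uploading is OK, I assume it would go something like this with the 
bin/solr ZK tool (collection name and ZK host here are placeholders):

  bin/solr zk cp zk:/collections/mycollection/state.json /tmp/state.json -z zkhost:2181
  (edit /tmp/state.json, then)
  bin/solr zk cp /tmp/state.json zk:/collections/mycollection/state.json -z zkhost:2181

though I don't know whether hand-editing state.json like that is considered safe.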

Thanks,
Markus


Solr indexing for unstructured data

2019-08-22 Thread amrit pattnaik
Hi ,
I am a newbie to Solr. I have a scenario wherein PDF documents with
unstructured data have been parsed as text and kept in a separate directory.

Once I build a collection and index using "bin/post -c <collection name>
<document name>", the document gets indexed and I am able to retrieve the
result. But since it is in schemaless mode, I add fields to the collection's
managed-schema.

If I use the bin/post command mentioned above, the added schema fields are
not returned in the query result. So I tried indexing using a curl command
in which I explicitly set the field values in the document sent for
indexing. The required fields then show up in the query result, but if I do
a keyword-based search, the documents added through the curl command don't
show up.

Would appreciate pointers/help, as I have been stuck on this issue for a long time.

Regards,
Amrit

-- 
With Regards,

Amrit Pattnaik


Re: Solr indexing for unstructured data

2019-08-22 Thread Alexandre Rafalovitch
In the Admin UI, there is a schema browser screen:
https://lucene.apache.org/solr/guide/8_1/schema-browser-screen.html
It shows you all the fields you have, their configuration, and their
(tokenized) indexed content.

This seems to be a good midpoint between indexing and querying. So, I
would check whether the field you expect (and the fields you did not
expect) are there. If they are, focus on querying. If they are not,
focus on indexing.
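
You can also list the fields directly via the Schema API, for example (host,
port and collection name here are assumptions):

  http://localhost:8983/solr/<collection>/schema/fields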

This is generic advice, because the question is not really clear.
Specifically:
1) "PDF parsed as text" and "I index that file" - what does that file
look like (content type)?
2) "I index with bin/post" / "I am able to retrieve results" vs. "I use
bin/post above" / "it does not return fields in query" - I can't tell the
difference between those two sequences; if you are indexing the same
file with the same command, you should get the same results.
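
As a sanity check, an explicit-field update along these lines (collection
and field names are placeholders) should make the document findable by
those fields:

  curl -X POST -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/<collection>/update?commit=true' \
    -d '[{"id":"doc1","content":"extracted pdf text here"}]'

Note the commit=true: without a commit (or an autoCommit that opens a new
searcher), newly added documents will not be visible to searches, which
could also explain documents "not showing up".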

Hope that helps.

Regards,
   Alex.

On Thu, 22 Aug 2019 at 09:44, amrit pattnaik wrote:


Re: Facing time out issue in Solr

2019-08-22 Thread Erick Erickson
No, there’s no a-priori file size limit in Solr. But ingesting a 340 MB file 
will take a long time. A very long time. The timeout is probably just the 
client timeout; I’ve seen situations where the doc does get indexed even though 
there’s a timeout.

However:

1> There are several timeouts to be aware of that you can lengthen, all in 
solr.xml:
   - socketTimeout
   - connTimeout
   - distribUpdateConnTimeout
   - distribUpdateSoTimeout

distribUpdateConnTimeout is important. If you have leaders and replicas 
(SolrCloud), the leader forwards the doc to the follower. If this timeout is 
exceeded, the leader may put the follower into “Leader Initiated Recovery”. You 
really need to ensure that this parameter is longer than any anticipated update 
time.
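
For reference, here’s roughly where those live in solr.xml; the values are 
arbitrary examples, so adjust to your environment:

  <solr>
    <solrcloud>
      <int name="distribUpdateConnTimeout">60000</int>
      <int name="distribUpdateSoTimeout">600000</int>
    </solrcloud>
    <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
      <int name="socketTimeout">600000</int>
      <int name="connTimeout">60000</int>
    </shardHandlerFactory>
  </solr>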

2> If you’re just throwing a 340 MB “semi structured” document at Solr (i.e. 
Word, PDF, whatever), you’re putting an awful lot of work on the node doing the 
indexing. You probably want to move the parsing off Solr; see 
https://lucidworks.com/post/indexing-with-solrj/ or use one of the services.
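
For illustration, a minimal client-side sketch of that approach (untested; the 
file name, URL, collection and field names are placeholders, and you’d need the 
Tika and SolrJ jars on the classpath):

  import java.io.InputStream;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.common.SolrInputDocument;
  import org.apache.tika.metadata.Metadata;
  import org.apache.tika.parser.AutoDetectParser;
  import org.apache.tika.parser.ParseContext;
  import org.apache.tika.sax.BodyContentHandler;

  public class IndexBigPdf {
    public static void main(String[] args) throws Exception {
      // Extract the text on the client so Solr never sees the raw 340 MB PDF.
      BodyContentHandler handler = new BodyContentHandler(-1); // -1 = no write limit
      try (InputStream in = Files.newInputStream(Paths.get("file1.pdf"))) {
        new AutoDetectParser().parse(in, handler, new Metadata(), new ParseContext());
      }

      // Send only the extracted text to Solr.
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "file1.pdf");
      doc.addField("content", handler.toString());
      try (HttpSolrClient client =
          new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
        client.add(doc);
        client.commit();
      }
    }
  }

That keeps the heavy parsing off the Solr node and also lets you lengthen the 
client timeout independently.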

3> I always question the utility of indexing such a large document. Assuming 
that’s mostly textual data, what are you going to do with it? It’ll have so 
many words in it that it’ll be found by many, many, many searches. It’ll also 
have so many words in it that it’ll tend to be far down in the results list. 
Assuming you’re OK with those issues, what will the user do with it if they 
click on it? Wait until the entire file is returned to the laptop then have the 
browser blow up trying to load it? My point is perhaps a better idea is to ask 
what use-case indexing this document serves. It may be that you have a 
perfectly valid reason, I just want to be sure you’ve thought through the 
implications.

Best,
Erick

> On Aug 22, 2019, at 9:00 AM, Sanjoy Ganguly wrote:



Backup fails for Solr 8.0.0 with NoSuchFileException

2019-08-22 Thread Montenegro, Angel
I'm experiencing the issue described here, although I am using Solr 8.0.0:

https://issues.apache.org/jira/browse/SOLR-11616

I can't create backups using the backup API; one or two minutes after I
start the backup, it throws the following NoSuchFileException, with a
different file each time:

2019-08-22 16:47:33.300 ERROR (Thread-74) [ ] o.a.s.h.SnapShooter Exception while creating snapshot
java.nio.file.NoSuchFileException: /mnt/orcid_data/solr_data/profile/index/_6gne_Lucene50_0.tip
  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) ~[?:?]
  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
  at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178) ~[?:?]
  at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
  at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
  at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]
  at org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]
  at org.apache.lucene.store.Directory.copyFrom(Directory.java:182) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]
  at org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
  at org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
  at org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
  at java.lang.Thread.run(Thread.java:834) [?:?]


Does anyone know how to fix this, or any workaround?

Ángel Montenegro
Software architect
a.montene...@orcid.org
https://orcid.org/-0002-7869-831X


Can't start Solr 7.7.1 due to name-resolution issue

2019-08-22 Thread Christopher Schultz

All,

I'm getting a failure to start my Solr instance. Here's the error from
the console log:

Error: Exception thrown by the agent : java.net.MalformedURLException:
Local host name unknown: java.net.UnknownHostException: [hostname]:
[hostname]: Name or service not known
sun.management.AgentConfigurationError:
java.net.MalformedURLException: Local host name unknown:
java.net.UnknownHostException: [hostname]: [hostname]: Name or service
not known
  at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480)
  at sun.management.Agent.startAgent(Agent.java:262)
  at sun.management.Agent.startAgent(Agent.java:452)
Caused by: java.net.MalformedURLException: Local host name unknown:
java.net.UnknownHostException: [hostname]: [hostname]: Name or service
not known
  at javax.management.remote.JMXServiceURL.<init>(JMXServiceURL.java:289)
  at javax.management.remote.JMXServiceURL.<init>(JMXServiceURL.java:253)
  at sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:739)
  at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:468)
  ... 2 more


Now, my hostname is just the first part of the hostname, so like "www"
instead of "www.example.com". Running "host [hostname]" on the CLI
returns "Host [hostname] not found: 3(NXDOMAIN)", so it's not entirely
surprising that this name resolution is failing.

What's the best way for me to get around this?

I'm running on Debian Stretch in Amazon EC2. I've tried fixing the
local name resolution so that it actually works, but when I reboot,
the EC2 instance reverts my DNS settings so those changes won't
survive a reboot.

Can I give the fully-qualified hostname to the JMX component in some way?

I've seen this answer [1] on SO, and everyone seems to say "edit /etc/hosts";
but, as I said, the EC2 startup scripts end up resetting those files
during a reboot.

Any ideas?

-chris

[1]
https://stackoverflow.com/questions/20093854/jmx-agent-throws-java-net-malformedurlexception-when-host-name-is-set-to-all-num


RE: Can't start Solr 7.7.1 due to name-resolution issue

2019-08-22 Thread Jamie Gruener
This info might help: 
https://serverfault.com/questions/228140/lost-modified-etc-hosts-file-on-amazon-ec2-every-reboot-instance
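
If I remember right, the gist is telling cloud-init to leave /etc/hosts alone, 
roughly like this in /etc/cloud/cloud.cfg (check the docs for your image):

  manage_etc_hosts: false

Alternatively, you may be able to sidestep the lookup by pointing the JMX agent 
at the right name, e.g. adding -Djava.rmi.server.hostname=www.example.com to 
SOLR_OPTS in solr.in.sh (untested here, but that property is the usual fix for 
JMX hostname problems).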

--Jamie
