solr-exporter-config solr_metrics_core_searcher_cache

2020-11-13 Thread 李世明
Hi,

solr-exporter-config.xml

Suggested addition for solr_metrics_core_searcher_cache: also expose ramBytesUsed in the select, e.g.:


$object.value | to_entries | .[]
  | select(.key == "lookups" or .key == "hits" or .key == "size"
           or .key == "evictions" or .key == "inserts"
           or .key == "ramBytesUsed") as $target |


Solr DIH: empty child document transformer

2020-11-13 Thread Jordi Cabré
I will try to explain myself in as much detail as possible, while isolating
the problem from its context as much as I can.

In short, I'm trying to create a `DIH` configuration in order to ingest some
documents as nested. That is, I need to ingest a `one-to-many` relation and
index it as nested documents.

My `parents` data is:


+----+---------------+-------------+
| id | name_s        | node_type_s |
+====+===============+=============+
|  1 | parent-name-1 | parent      |
|  2 | parent-name-2 | parent      |
|  3 | parent-name-3 | parent      |
+----+---------------+-------------+

And `children` data is:


+-----+-------------+--------------+-------------+
| id  | parent_id_s | name_s       | node_type_s |
+=====+=============+==============+=============+
| 1-1 |           1 | child-name-1 | child       |
| 2-1 |           1 | child-name-2 | child       |
| 3-2 |           2 | child-name-3 | child       |
| 4-3 |           3 | child-name-4 | child       |
+-----+-------------+--------------+-------------+


Here is my `DIH` configuration (the XML was stripped from the archived
message):

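Based on the description below (a nested entity with `child="true"`) and the tables above, the stripped configuration presumably resembled this sketch; the driver, URL, and credentials are placeholders, not the original values:

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/db"
              user="user" password="password"/>
  <document>
    <!-- one Solr document per parent row -->
    <entity name="parent" query="SELECT id, name_s, node_type_s FROM parents">
      <!-- child="true" indexes each child row as a nested child document -->
      <entity name="child" child="true"
              query="SELECT id, name_s, node_type_s FROM children
                     WHERE parent_id_s = '${parent.id}'"/>
    </entity>
  </document>
</dataConfig>
```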
As you can see, `child="true"` is set on the nested entity.

After running the data import handler:

{
  "responseHeader": {
"status": 0,
"QTime": 0
  },
  "initArgs": [
"defaults",
[
  "config",
  "parent-children-config.xml"
]
  ],
  "command": "status",
  "status": "idle",
  "importResponse": "",
  "statusMessages": {
"Total Requests made to DataSource": "2",
"Total Rows Fetched": "7",
"Total Documents Processed": "3",
"Total Documents Skipped": "0",
"Full Dump Started": "2020-11-12 08:02:25",
"": "Indexing completed. Added/Updated: 3 documents. Deleted 0
documents.",
"Committed": "2020-11-12 08:02:25",
"Time taken": "0:0:0.304"
  }
}

So, the import seems to have worked well.

After that, I've tested how to get only parents `q={!parent
which=node_type_s:parent}`:

{
   "responseHeader":{
  "status":0,
  "QTime":1,
  "params":{
 "q":"{!parent which=node_type_s:parent}",
 "_":"1605166879678"
  }
   },
   "response":{
  "numFound":3,
  "start":0,
  "numFoundExact":true,
  "docs":[
 {
"name_s":"parent-name-1",
"node_type_s":"parent",
"id":"1",
"_version_":1683140793502531584
 },
 {
"name_s":"parent-name-2",
"node_type_s":"parent",
"id":"2",
"_version_":1683140793504628736
 },
 {
"name_s":"parent-name-3",
"node_type_s":"parent",
"id":"3",
"_version_":1683140793505677312
 }
  ]
   }
}

As you can see, only `parents` are returned.

When I'm asking for only `children`:

{
   "responseHeader":{
  "status":0,
  "QTime":3,
  "params":{
 "q":"{!child of=\"node_type_s:parent\"}",
 "_":"1605166879678"
  }
   },
   "response":{
  "numFound":4,
  "start":0,
  "numFoundExact":true,
  "docs":[
 {
"name_s":"child-name-1",
"node_type_s":"child",
"parent_id_s":"1",
"id":"1-1",
"_version_":1683140793502531584
 },
 {
"name_s":"child-name-2",
"node_type_s":"child",
"parent_id_s":"1",
"id":"2-1",
"_version_":1683140793502531584
 },
 {
"name_s":"child-name-3",
"node_type_s":"child",
"parent_id_s":"2",
"id":"3-2",
"_version_":1683140793504628736
 },
 {
"name_s":"child-name-4",
"node_type_s":"child",
"parent_id_s":"3",
"id":"4-3",
"_version_":1683140793505677312
 }
  ]
   }
}

All right: only child documents are returned.

Then, I've also tried to get only the children of parent 1:

{
   "responseHeader":{
  "status":0,
  "QTime":0,
  "params":{
 "q":"{!child of=\"node_type_s:parent\"}id:1",
 "_":"1605166879678"
  }
   },
   "response":{
  "numFound":2,
  "start":0,
  "numFoundExact":true,
  "docs":[
 {
"name_s":"child-name-1",
"node_type_s":"child",
"parent_id_s":"1",
"id":"1-1",
"_version_":1683140793502531584

Escape characters for core.properties User-Defined Properties

2020-11-13 Thread Jens Hittenkofer
Hi,

I'm using SolrCloud (Solr 6.6.6 together with ZooKeeper). I want to use the 
same template/configuration for multiple Solr cores. Each core should use its 
own connection string for the data-import-handler (MSSQL). What I'm trying to 
do is add the connection string as a user-defined property in the 
core.properties file while creating the core, and then reference it in the 
data-import-handler, like this:
http://X:8983/solr/admin/collections?action=CREATE&name=TESTCORE&numShards=1&maxShardsPerNode=2&replicationFactor=3&collection.configName=TESTCORECONFIG&property.connectionstring=sqlserver://server.domain\instance;databaseName=SOMEDBNAME

But when I add the connection string like above, an escape character ('\') is 
inserted before ':', '=' and '\'. So the connection string in core.properties 
looks like this: 'sqlserver\://server.domain\\instance;databaseName\=SOMEDBNAME'
I've tried to URL-encode and escape the characters in different ways (and have 
also read the documentation and googled, of course).
Is there any way to do this, or should I take a totally different approach?
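For what it's worth, core.properties is a standard Java properties file, so those backslashes are just java.util.Properties escaping: store() adds them for ':', '=' and '\', and load() removes them again, so anything reading the file through the Properties API (as Solr does when it loads core properties) should see the original string. A small self-contained sketch of the round trip (class and method names are mine):

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

public class PropsEscapeDemo {

    // Round-trips a value through java.util.Properties: store() applies the
    // escaping seen in core.properties, and load() removes it again.
    static String storeAndReload(String key, String value) throws Exception {
        Properties props = new Properties();
        props.setProperty(key, value);

        StringWriter out = new StringWriter();
        props.store(out, null);   // ':', '=' and '\' are backslash-escaped here

        Properties reread = new Properties();
        reread.load(new StringReader(out.toString()));  // escaping undone here
        return reread.getProperty(key);
    }

    public static void main(String[] args) throws Exception {
        String conn = "sqlserver://server.domain\\instance;databaseName=SOMEDBNAME";
        // On disk the value appears as:
        //   sqlserver\://server.domain\\instance;databaseName\=SOMEDBNAME
        // but reading it back through the Properties API restores the original.
        System.out.println(conn.equals(storeAndReload("connectionstring", conn)));
        // prints "true"
    }
}
```

So if the DIH config references the property as ${connectionstring}, the substituted value should already be unescaped; the escaped form is only what the file looks like on disk.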

Thanks



recommendation for solr replication throttling

2020-11-13 Thread Alu, Pino [CORP/US]
Hello,

We need a recommendation for Solr replication throttling. What value do you 
recommend for maxWriteMBPerSec? Our indexes contain 18 locales, and the total 
size of all indexes is 188 GB and growing.
Also, will replication throttling work with Solr 4.10.3?

Thanks,

Pino Alu | HCL Commerce Administrator :: Emerson.com | Enterprise IT
Emerson | 8000 West Florissant Ave. | St. Louis | MO | 63136 | USA
T +1 314 553 1785
pino@emerson.com

Delivering Technology Solutions With Purpose



FW: Vulnerabilities in SOLR 8.6.2

2020-11-13 Thread Narayanan, Lakshmi
This is my 5th attempt in the last 60 days
Is there anyone looking at these mails?
Does anyone care?? :(


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi 
Sent: Thursday, October 22, 2020 1:06 PM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2

This is my 4th attempt to contact
Please advise, if there is a build that fixes these vulnerabilities

Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Sunday, October 18, 2020 4:01 PM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2

SOLR-User Support team
Is there anyone who can answer my question or can point to someone who can help
I have not had any response for the past 3 weeks !?
Please advise


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Sunday, October 04, 2020 2:11 PM
To: solr-user@lucene.apache.org
Cc: Chattopadhyay, Salil <salil.chattopadh...@mmc.com>; Mutnuri, Vishnu D 
<vishnu.d.mutn...@mmc.com>; Pathak, Omkar <omkar.pat...@mmc.com>; Shenouda, 
Nasir B <nasir.b.sheno...@mmc.com>
Subject: RE: Vulnerabilities in SOLR 8.6.2

Hello Solr-User Support team
Please advise or provide further guidance on the request below

Thank you!

Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Monday, September 28, 2020 1:52 PM
To: solr-user@lucene.apache.org
Cc: Chattopadhyay, Salil <salil.chattopadh...@mmc.com>; Mutnuri, Vishnu D 
<vishnu.d.mutn...@mmc.com>; Pathak, Omkar <omkar.pat...@mmc.com>; Shenouda, 
Nasir B <nasir.b.sheno...@mmc.com>
Subject: Vulnerabilities in SOLR 8.6.2
Importance: High

Hello Solr-User Support team
We have installed the SOLR 8.6.2 package into a docker container in our DEV 
environment. Prior to using it, our security team scanned the docker image 
using SysDig and found a lot of Critical/High/Medium vulnerabilities. The full 
list is in the attached spreadsheet.

Scan Summary
30 STOPS, 190 WARNS, 188 Vulnerabilities

Please advise or point us to how/where to get a package that has been patched 
for the Critical/High/Medium vulnerabilities in the attached spreadsheet.
Your help will be gratefully received.


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com






**
This e-mail, including any attachments that accompany it, may contain
information that is confidential or privileged. This e-mail is
intended solely for the use of the individual(s) to whom it was intended to be
addressed. If you have received this e-mail and are not an intended recipient,
any disclosure, distribution, copying or other use or
retention of this email or information contained within it are prohibited.
If you have received this email in error, please immediately
reply to the sender via e-mail and also permanently
delete all copies of the original message together with any of its attachments
from your computer or device.
**


SOLR862 Vulnerabilities.xlsx
Description: SOLR862 Vulnerabilities.xlsx


Re: FW: Vulnerabilities in SOLR 8.6.2

2020-11-13 Thread Kevin Risden
As far as I can tell, only your first and 5th emails went through. Either
way, Cassandra responded on 2020-09-29, roughly 15 hours after your first
message:

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/202009.mbox/%3Cbe447e96-60ed-4a40-88dd-9e0c28be6c71%40Spark%3E

Kevin Risden


On Fri, Nov 13, 2020 at 11:35 AM Narayanan, Lakshmi
 wrote:

> This is my 5th attempt in the last 60 days
>
> [rest of the quoted thread is identical to the message above; snipped]


Re: Frequent Index Replication Failure in solr.

2020-11-13 Thread Parshant Kumar
All, please help on this.

On Tue, Nov 3, 2020, 6:01 PM Parshant Kumar 
wrote:

> Hi team,
>
> Our Solr architecture is *master -> repeater -> 3 slave servers*.
>
> We do incremental indexing on the master server (every 20 minutes).
> The index is replicated from master to repeater every 10 minutes,
> and from the repeater to the 3 slave servers every 3 hours.
> *We are facing frequent replication failures between master and repeater,
> as well as between the repeater and the slave servers.*
> On checking the logs, we found that one of the exceptions below occurred
> every time replication failed.
>
> 1)WARN : Error in fetching file: _4rnu_t.liv (downloaded 0 of 11505507
> bytes)
> java.io.EOFException: Unexpected end of ZLIB input stream
> at
> java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
> at
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
> at
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)
> at
> org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:88)
> at
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:139)
> at
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:166)
> at
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:160)
> at
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchPackets(IndexFetcher.java:1443)
> at
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetch(IndexFetcher.java:1409)
>
>
> 2)
> WARN : Error getting file length for [segments_568]
> java.nio.file.NoSuchFileException:
> /data/solr/search/application/core-conf/im-search/data/index.20200711012319226/segments_568
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
> at
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
> at
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
> at java.nio.file.Files.readAttributes(Files.java:1737)
> at java.nio.file.Files.size(Files.java:2332)
> at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
> at
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:615)
> at
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:588)
> at
> org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:335)
>
> 3)
> WARN : Error in fetching file: _4nji.nvd (downloaded 507510784 of
> 555377795 bytes)
> org.apache.http.MalformedChunkCodingException: CRLF expected at end of
> chunk
> at
> org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:255)
> at
> org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
> at
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
> at
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
> at
> java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:238)
> at
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
> at
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)
> at
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:128)
> at
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:166)
> at
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchPackets(IndexFetcher.java:1458)
> at
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetch(IndexFetcher.java:1409)
> at
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1390)
> at
> org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:872)
> at
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:438)
> at
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:254)
>
> *The replication configuration for master, repeater and slaves is given below:*
>
> 
> 
> ${enable.master:false}
> commit
> startup
> 00:00:10
> 
>
>
> *The commit configuration for master, repeater and slaves is given below:*
>
>  
> 10false
>
>
> Please help us find the root cause of the replication failures. Let me know
> if you have any questions.
>
> Thanks
>
> Parshant kumar
>
>
>
>
>
>
>
>
>
>
> 
> 
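The XML tags in the quoted configuration were stripped by the archive. Judging from the surviving values (`${enable.master:false}`, `commit`, `startup`, `00:00:10`), the replication handler presumably looked roughly like the sketch below; the tag names are the standard ReplicationHandler syntax, not recovered from the original message, and the commit block ("10" / "false") is too mangled to reconstruct with confidence, so it is omitted:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
  </lst>
  <lst name="slave">
    <str name="pollInterval">00:00:10</str>
  </lst>
</requestHandler>
```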

Re: Frequent Index Replication Failure in solr.

2020-11-13 Thread David Hastings
It looks like your repeater is grabbing a file that the master has merged into
a different file. Why not lower how often you replicate from master to
repeater, and/or commit less often, so the index is built faster?

On Fri, Nov 13, 2020 at 12:13 PM Parshant Kumar
 wrote:

> All, please help on this
>
> [rest of the quoted thread is identical to the message above; snipped]

Why am I able to sort on a multiValued field?

2020-11-13 Thread Andy C
I am adding a new float field to my index that I want to perform range
searches and sorting on. It will only contain a single value.

I have an existing dynamic field definition in my schema.xml that I wanted
to use, to avoid having to update the schema:




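The field definition was stripped from the archived message. From the description (a float dynamic field that happens to be marked multiValued), it was presumably something along these lines; the name pattern and exact field type are guesses:

```xml
<dynamicField name="*_f" type="pfloat" indexed="true" stored="true"
              multiValued="true"/>
```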
I went ahead and implemented this in a test system (recently updated to
Solr 8.7), but then it occurred to me that I am not going to be able to
sort on the field because it is defined as multiValued.

But to my surprise, sorting worked and gave the expected results. Why? Can
this behavior be relied on in future releases?

Appreciate any insights.

Thanks
- AndyC -


Recovering deleted files without backup

2020-11-13 Thread Alex Hanna
Hi all,

I've accidentally deleted some documents and am trying to recover them.
Unfortunately I don't have a snapshot or backup of the core, but have daily
backups of my VM. When my sysadmin restores the data folder, however,
the documents don't come back for some reason.

I'm running a pretty old version of Solr (5.x). Also, it looks like the
only new files created recently are .liv files, which were created at the
time of deletion, and also a segment_ file.

I'd love some guidance on this.

Thanks,
- A

-- 
Alex Hanna, PhD
alex-hanna.com
@alexhanna


Re: Recovering deleted files without backup

2020-11-13 Thread Dave
Just rebuild the index. They're pretty much gone if they aren't in your VM 
backup. Solr isn't a document storage tool; it's a place to index the data 
from your document store, so it's generally understood that the index can 
always be rebuilt when needed.

> On Nov 13, 2020, at 9:52 PM, Alex Hanna  wrote:
> 
> Hi all,
>
> [quoted message identical to the one above; snipped]