I was able to implement one join (https://wiki.apache.org/solr/Join) in my
query, but I couldn't find the correct syntax to use multiple joins... is
that possible? Could someone please give me an example?
Thanks!
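For what it's worth, one way to combine joins is to issue each one as its own filter query, since fq clauses are intersected. A sketch, with hypothetical field names (author_id, publisher_id):

```
q=*:*
&fq={!join from=author_id to=id}name:smith
&fq={!join from=publisher_id to=id}state:CA
```

Each {!join} is evaluated independently and the resulting document sets are ANDed together.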
https://issues.apache.org/jira/browse/SOLR-2018
There used to be a waitFlush parameter (wait until the IndexWriter has
written all the changes) as well as a waitSearcher parameter (wait
until a new searcher has been registered... i.e. whatever changes you
made will be guaranteed to be visible).
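For reference, these flags map to the optional attributes of the XML <commit> update command (and to the SolrClient.commit(collection, waitFlush, waitSearcher) overload in SolrJ). A sketch of the update message, noting that waitFlush has since been removed and is ignored by newer versions:

```
<commit waitFlush="true" waitSearcher="true"/>
```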
The
Joel:
I did a little work with SOLR-9296 to try to reduce the number of
objects created, which would relieve GC pressure both at creation and
collection time. I didn't measure CPU utilization before/after, but I
did see up to an 11% increase in throughput.
It wouldn't hurt my feelings at all to ha
Hmm, that should work fine. Let us know what the logs show if anything
because this is weird.
Best,
Erick
On Tue, Nov 8, 2016 at 1:00 PM, Chetas Joshi wrote:
> Hi Erick,
>
> This is how I use the streaming approach.
>
> Here is the solrconfig block.
>
>
>
> {!xport}
> xsort
On 11/8/2016 3:55 PM, Shawn Heisey wrote:
> I am not in a position to try this in 6.x versions. Is there anyone
> out there who does have a 6.x index they can try it on, see if it's
> still a problem?
I upgraded a dev version of the program to SolrJ 6.2.1 (newest currently
available via ivy), the
Hi Stefan, I've been very busy today; I've read your mail but had no time to
write an answer.
So now, at last, everybody around me is sleeping :)
Let's start from the very beginning. Sorry if I didn't get everything about
your first question; I just understood that you're unable to find the phone number wh
I have this code in my SolrJ program:
LOG.info("{}: background optimizing", logPrefix);
myOptimizeSolrClient.optimize(myName, false, false);
elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
LOG.info("{}: Background optimize completed, elapsed={}", logPrefix,
elapsedMillis);
Thi
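As an aside, a System.nanoTime() delta is in nanoseconds, so converting it to milliseconds means dividing by 1,000,000; dividing by 100 would inflate the reported value by four orders of magnitude. A self-contained check (sleep duration and bounds are arbitrary illustration values):

```java
public class ElapsedMillisDemo {
    public static void main(String[] args) throws InterruptedException {
        long startNanos = System.nanoTime();
        Thread.sleep(50); // simulate ~50 ms of work
        // nanoTime is in nanoseconds: divide by 1_000_000 to get milliseconds
        long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
        // a ~50 ms sleep must yield an elapsed value near 50, not ~500000
        System.out.println(elapsedMillis >= 40 && elapsedMillis < 5_000);
    }
}
```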
Any more thoughts on this? The longer I look at this situation, the
more I'm thinking I'm at fault here - expecting something that isn't
to be expected at all?
Whatever is on your mind once you've read the mail - don't keep it to yourself, let me know.
-Stefan
On November 7, 2016 at 5:23:58 PM, Stefan Mathe
Hi Stephen,
Thanks for the update.
Regarding SOLR-9527 - I think we need a unit test for verifying
"createNodeSet" functionality. I will spend some time on it in the next couple
of days.
Also, regarding #2, I found a similar issue (doc count mismatch after
restore) while testing with a large colle
Hi Erick,
This is how I use the streaming approach.
Here is the solrconfig block.
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>
And here is the code in which SolrJ is being used.
String zkHost = args[0];
String collection = args[1];
Map props = new HashMap();
Just wanted to note that we tested out the patch from SOLR-9527 and it worked
perfectly for the balancing issue - thank you so much for that!
As for issue #2, we've resorted to doing a hard commit, stopping all indexing
against the index, and then taking the backup, and we have a reasonably good
I won't be able to achieve the correct mapping as I did not store the
mapping info anywhere. I don't know if core-node1 was mapped to
shard1_replica1 or shard2_replica1 in my old collection. But I am not
worried about that as I am not going to update any existing document.
This is what I did.
Hello,
Ran into an OOM error again right after two weeks. Below is the GC log viewer
graph. The first time we ran into this was after 3 months, and the second time
was two weeks later. After the first incident we reduced the cache size and
increased the heap from 8 to 10G. Interestingly, query and ingestion load is li
I have the following query (which was working until I migrated to Solr 5.1)
with boost:
http://localhost:8983/solr/?wt=json&q={!boost+b=recip(ms(NOW,modification_date),3.16e-11,1,1)}{!boost+b=recip(ms(NOW,creation_date),3.16e-11,1,1)}Copy_field:(bolt)^10
OR object_name:(bolt)^300
The above query
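One thing worth checking: chaining two {!boost} local-params prefixes at the start of q is not a documented pattern; the usual way to apply two multiplicative recency boosts is a single {!boost} whose b parameter multiplies the two functions with product(). A sketch reusing the fields and boosts from the query above (shown wrapped for readability; in the URL it would be one line with spaces encoded):

```
q={!boost b=product(
      recip(ms(NOW,modification_date),3.16e-11,1,1),
      recip(ms(NOW,creation_date),3.16e-11,1,1))}
  Copy_field:(bolt)^10 OR object_name:(bolt)^300
```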
Thanks Joel.
2016-11-08 11:43 GMT-08:00 Joel Bernstein :
> It sounds like your scenario is around 25 queries per second, each pulling
> entire results. This would be enough to drive up CPU usage as you have more
> concurrent requests than CPUs. Since there isn't much IO blocking
> happening, in
It sounds like your scenario is around 25 queries per second, each pulling
entire results. This would be enough to drive up CPU usage as you have more
concurrent requests than CPUs. Since there isn't much IO blocking
happening, in the scenario you describe, I would expect some pretty busy
CPUs.
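A back-of-the-envelope way to see this is Little's law: requests in flight ≈ arrival rate × per-request service time. The 25 qps figure is from the thread; the 1-second service time and 16-core box below are hypothetical numbers for illustration:

```java
public class LittleLawSketch {
    public static void main(String[] args) {
        double qps = 25.0;           // arrival rate reported in the thread
        double serviceSeconds = 1.0; // assumed time to stream one full result set
        int cores = 16;              // assumed core count of the Solr node
        // Little's law: average number of requests in flight
        double inFlight = qps * serviceSeconds;
        // more concurrent requests than cores => CPUs stay busy
        System.out.println(inFlight > cores);
    }
}
```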
8 November 2016, Apache Solr 6.3.0 available
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted search and analytics, rich
document parsing, geospatial search, extensiv
Hi Mike,
Thanks for bringing this up. You can certainly backup the index data stored
on local file-system to HDFS.
The HDFS backup repository implementation uses the same configuration
properties as expected by the HDFS directory factory. Here is a
description of the parameters:
- location
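For reference, these parameters typically land in a <repository> element under the <backup> section of solr.xml. A sketch, assuming the stock HdfsBackupRepository class name and the usual system-property placeholders:

```
<backup>
  <repository name="hdfs"
              class="org.apache.solr.core.backup.repository.HdfsBackupRepository"
              default="false">
    <str name="location">${solr.hdfs.default.backup.path}</str>
    <str name="solr.hdfs.home">${solr.hdfs.home:}</str>
    <str name="solr.hdfs.confdir">${solr.hdfs.confdir:}</str>
  </repository>
</backup>
```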
Yes, that works well. Somehow I failed to mention the full condition in the
previous message. This is what I am looking for -
fq={!parent which=PARENT_DOC_TYPE:PARENT}((childA_field_1:234 AND
childA_field_2:3) OR (childA_field_1:432 AND childA_field_2:6))
Regards,
Vinod
Hi,
I am facing an issue with delta indexing implemented via full import and
SortedMapBackedCache, as shown below.
https://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport
cacheKey="id" cacheLookup="parent.id" processor="SqlEntityProcessor"
cacheImpl="SortedMapBackedCache"
The
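For context, those attributes belong on the cached child entity in data-config.xml. A sketch following the wiki's delta-via-full-import pattern, with hypothetical table and column names (parent, child, id, last_modified):

```
<entity name="parent" pk="id"
        query="SELECT * FROM parent
               WHERE '${dataimporter.request.clean}' != 'false'
                  OR last_modified > '${dataimporter.last_index_time}'">
  <entity name="child" query="SELECT * FROM child"
          processor="SqlEntityProcessor"
          cacheKey="id" cacheLookup="parent.id"
          cacheImpl="SortedMapBackedCache"/>
</entity>
```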
Given 'across two child docs', I think you are looking for
fq={!parent which=PARENT_DOC_TYPE:PARENT}childA_field_1:432&fq={!parent
which=PARENT_DOC_TYPE:PARENT}childA_field_2:6
On Mon, Nov 7, 2016 at 9:45 PM, Vinod Singh wrote:
> I have nested documents indexed in SOLR 6.2. The block join query
We have SolrCloud running on bare metal but want the nightly snapshots to
be written to HDFS. Can someone give me some help on configuring the
HdfsBackupRepository?
${solr.hdfs.default.backup.path}
${solr.hdfs.home:}
${solr.hdfs.confdir:}
Not sure how to procee
I am migrating a Solr-based app from Solr 4 to Solr 6. One of the
discrepancies I am noticing is around edismax query parsing. My code makes
the following call:
userQuery="+(title:shirts isbn:shirts) +(id:20446 id:82876)"
Query query=QParser.getParser(userQuery, "edismax", req).getQuery();