I have been experimenting with Solr for a couple of weeks, but I've been stuck
on a query I would like to execute for a couple of days now.
I have a nested data structure, where I'm using an fq like this:
{!parent which="parentDoc:true"}parentDoc:false AND
This matches my child documents and retur
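For reference, a block join parent query of this shape typically looks like the following (the child field and value here are just illustrative, not taken from the original message):

```text
q={!parent which="parentDoc:true"}parentDoc:false AND childField:someValue
```

The `which` clause must match all and only the parent documents; the query after the local params matches child documents, and the parser returns their parents.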
It would have been _really_ nice if this had been in the release notes.
Made me also scratch my head for a while when upgrading to Solr 6.
Additionally, this makes a rolling upgrade from Solr 5.x a bit more
scary since you have to update the collection schema to make the Solr 6
nodes work while
Well, I didn't know the package was moved to "org.locationtech.spatial4j" in
6.x. I will try your suggestion. Thanks for your help anyway.
On Jun 30, 2016 8:46 AM, "David Smiley" wrote:
> For polygons in 6.0 you need to set
>
> spatialContextFactory="org.locationtech.spatial4j.context.jts
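For anyone hitting the same thing: a Solr 6 spatial field type using the relocated JTS package might look like this (a sketch based on the discussion; the field type name and tuning values are assumptions, and the spatial4j/JTS jars must be on the classpath):

```xml
<fieldType name="location_rpt"
           class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
           geo="true" distErrPct="0.025" maxDistErr="0.001"
           distanceUnits="kilometers"/>
```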
I agree with this. :D
On Jun 30, 2016 4:12 PM, "Ere Maijala" wrote:
> It would have been _really_ nice if this had been in the release notes.
> Made me also scratch my head for a while when upgrading to Solr 6.
> Additionally, this makes a rolling upgrade from Solr 5.x a bit more scary
> since you
Hello,
We're indexing a large set of files using Solr 6.1.0, running SolrCloud with
ZooKeeper 3.4.8.
We have two ensembles, and each cluster runs on its own three VMs (CentOS 7).
We first thought the error was due to CDCR, as we
were trying to index a large a
Mads, some distributions require different steps for increasing max_open_files.
Check how it works for CentOS specifically.
Markus
-Original message-
> From:Mads Tomasgård Bjørgan
> Sent: Thursday 30th June 2016 10:52
> To: solr-user@lucene.apache.org
> Subject: Solr node crashes wh
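On CentOS 7 specifically, the limit usually has to be raised in two places, since systemd services do not read /etc/security/limits.conf. A sketch (the `solr` user name and the 65536 value are assumptions, not from the thread):

```text
# /etc/security/limits.conf (interactive shells)
solr  soft  nofile  65536
solr  hard  nofile  65536

# systemd override when Solr runs as a service,
# e.g. /etc/systemd/system/solr.service.d/limits.conf
[Service]
LimitNOFILE=65536
```

Verify the effective limit with `ulimit -n` as the user that runs Solr.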
That's true, but I was hoping there would be another way to solve this issue as
it's not considered preferable in our situation.
Is it normal behavior for Solr to open over 4000 files without closing them
properly? Is it, for example, possible to adjust the autoCommit settings in
solrconfig.xml for fo
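For reference, the autoCommit settings in question live in solrconfig.xml; a typical block looks like this (the values are illustrative, not a recommendation for this particular setup):

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>
  <maxTime>15000</maxTime>        <!-- milliseconds -->
  <openSearcher>false</openSearcher>
</autoCommit>
```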
Yes, that is quite normal for a busy search engine, especially for cloud
environments. We always start by increasing it to 64k minimum when provisioning
machines.
Markus
-Original message-
> From:Mads Tomasgård Bjørgan
> Sent: Thursday 30th June 2016 13:05
> To: solr-user@lucene.apache
Hi,
Could you please remove my email from the mailing list (for now).
Many thanks for the help and resource you have provided.
Colin Hunter
--
www.gfc.uk.net
Hi,
I am looking for a way to serialize a SolrInputDocument.
I want to store the serialized document in a MySQL table.
Later I want to deserialize that document and send it to the Solr server.
Currently I am looking at org.apache.solr.client.solrj.request.UpdateRequest
and JavaBinUpdateRequest
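One way to do this with classes already in SolrJ is JavaBinCodec, which (de)serializes a SolrInputDocument to a compact byte[] that fits a MySQL BLOB column. A minimal sketch, assuming solr-solrj is on the classpath (treat it as a starting point, not a tested implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.util.JavaBinCodec;

public class DocSerializer {
    // SolrInputDocument -> byte[] (store this in a BLOB column)
    public static byte[] toBytes(SolrInputDocument doc) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new JavaBinCodec().marshal(doc, out);
        return out.toByteArray();
    }

    // byte[] -> SolrInputDocument (read back from MySQL,
    // then send to Solr via SolrClient#add)
    public static SolrInputDocument fromBytes(byte[] bytes) throws Exception {
        return (SolrInputDocument) new JavaBinCodec()
                .unmarshal(new ByteArrayInputStream(bytes));
    }
}
```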
Hi Team,
Hope you are doing well!
We are using Solr 5.3.1 version as our search engine. This setup is provided by
the Bitnami cloud and the amazon AMI is ami-50a47e23.
We have a website which has content in Chinese. We use Nutch crawler to crawl
the entire website and index it to the Solr co
Hello - we use GZipped output streams too for buffering large sets of
SolrInputDocuments to disk before indexing. It works fine, and
SolrInputDocument compresses very easily as well.
Markus
-Original message-
> From:Sebastian Riemer
> Sent: Thursday 30th June 2016 13:56
> To: solr-
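The buffering Markus describes can be done with plain java.util.zip streams; below is a minimal, self-contained sketch. It uses a raw byte payload for illustration; in practice the bytes would come from serializing SolrInputDocuments (e.g. via JavaBinCodec):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipBuffer {
    // compress a payload (e.g. serialized documents) before writing to disk
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(baos)) {
            gz.write(data);
        }
        return baos.toByteArray();
    }

    // decompress when reading the buffered documents back for indexing
    static byte[] gunzip(byte[] compressed) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }
}
```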
Mads Tomasgård Bjørgan wrote:
> That's true, but I was hoping there would be another way to solve this issue
> as it's not considered preferable in our situation.
What you are looking for might be
https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig#IndexConfiginSolrConfig
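On that page, the settings that most directly affect the number of open index files are the compound file format and the merge policy; for example (a sketch, values illustrative):

```xml
<indexConfig>
  <!-- pack each segment into one .cfs file instead of many separate files -->
  <useCompoundFile>true</useCompoundFile>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">10</int>
    <int name="segmentsPerTier">10</int>
  </mergePolicyFactory>
</indexConfig>
```

Fewer and larger segments mean fewer file handles, at the cost of extra merge I/O.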
Hi,
It appears the issue was due to a misconfiguration in my schema. After
StemmerOverrideFilterFactory was added on both the query and index sides, the
problem disappeared.
Thanks,
Dmitry
On Thu, May 19, 2016 at 9:01 PM, Shawn Heisey wrote:
> On 5/19/2016 5:26 AM, Dmitry Kan wrote:
> > On quer
Hi,
I have a Java object which I need to load only once.
I have written a custom Java component, added under
"last-components" in solrconfig.xml, from which I want to access the
above-mentioned object when each search request comes in.
Is there a way I can load a Java object on server/ insta
Hi,
the lifecycle of your Solr extension (i.e. the component) is not
something that's up to you.
Before designing the component you should read the framework docs [1],
in order to understand the context where it will live, once deployed.
There's nothing, as far as I know, other than the compon
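Independent of the Solr component lifecycle, a common plain-Java pattern for "load once, read from every request" is the initialization-on-demand holder; a sketch (all names here are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HeavyResourceHolder {
    // counts constructions, only to demonstrate the object is built once
    static final AtomicInteger constructions = new AtomicInteger();

    // stand-in for the expensive object mentioned in the question
    public static class HeavyResource {
        public HeavyResource() { constructions.incrementAndGet(); }
    }

    // the JVM guarantees the nested Holder class (and thus INSTANCE)
    // is initialized lazily, once, and thread-safely on first get()
    private static class Holder {
        static final HeavyResource INSTANCE = new HeavyResource();
    }

    public static HeavyResource get() { return Holder.INSTANCE; }
}
```

Inside a component's prepare()/process() you would then simply call HeavyResourceHolder.get() on every request.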
see: http://lucene.apache.org/solr/resources.html, there's an
'unsubscribe' link that will automatically do this. NOTE: you _must_
use the exact same e-mail you first subscribed with, this sometimes
trips people up if the mail is forwarded from the original account.
Best,
Erick
On Thu, Jun 30, 20
NP, glad it worked!
On Wed, Jun 29, 2016 at 10:33 PM, Tim Chen wrote:
> Hi Erick,
>
> I have followed your instruction to added as new replica and deleted the old
> replica - works great!
>
> Everything back to normal now.
>
> Thanks mate!
>
> Cheers,
> Tim
>
> -Original Message-
> From:
I've read about the sort stream in v6.1 but it appears to me to break the
streaming design. If it has to read all the results into memory then it's
not streaming. Sounds like it could be slow and memory intensive for very
large result sets. Has anyone had good results with the sort stream when
ther
Hi,
The streaming API in Solr 6x has been expanded to support many different
parallel computing workloads. For example the topic stream supports pub/sub
messaging. The gatherNodes stream supports graph traversal. The facet
stream supports aggregations inside the search engine, while the rollup
s
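On the sort concern specifically: a sort stream wraps another stream and buffers its tuples in order to sort them, e.g. (collection and field names are made up):

```text
sort(
  search(collection1, q="*:*", fl="id,field_a", sort="id asc", qt="/export"),
  by="field_a desc"
)
```

The buffering happens wherever the expression is evaluated, so for very large result sets the sort can be wrapped in parallel(...) to partition the work across worker nodes.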
hi all,
We have a solrcloud setup with zookeeper, and right now we're testing it
with indexing and querying.
*collection A:*
*collection B:*
I'm trying to figure out why on collection B, indexing works but querying
doesn't. I believe by looking at Collection B > Schema > Load Term Info on
a
Hello Arcadius, Noble,
I have a single Solr cluster set up across two DCs with good connectivity and
the similar configuration below, and I'm looking to use the preferredNodes
feature/rule so that search queries executed from a DC1 client use all DC1
replicas, and those from a DC2 client use all DC2 replicas.
Bit confu
Hi,
When I use defType=edismax with debugQuery=true, I found that a search for
"r&d" actually searches on just the character "r".
http://localhost:8983/solr/collection1/highlight?q="r&d"&debugQuery=true&defType=edismax
"debug":{
"rawquerystring":"\"r",
None of the pasted images came through, the mail server is quite aggressive
about stripping them. You'll need to upload them somewhere and provide a
link.
Best,
Erick
On Thu, Jun 30, 2016 at 7:36 PM, nuhaa wrote:
> hi all,
>
> We have a solrcloud setup with zookeeper, and right now we're testin
Ah, I see.
https://ibin.co/2mWaie7IVxDF.png
Collection A:
https://ibin.co/2mWawhCm76cN.png
Collection B:
https://ibin.co/2mWb4BmlMom2.png
--
nuhaa
http://about.me/nuhaa
> On 1 Jul 2016, at 12:24 PM, Erick Erickson wrote:
>
> None of the pasted images came through, the mail server is quite ag