--
Regards,
Selvam
v&indent=true&rows=100
The above command does not seem effective for downloading 10 million records.
Could you please suggest an approach?
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
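As context for the export question above: plain `rows=N` paging degrades badly at this scale, and Solr's cursorMark feature (Solr 4.7+) is the usual answer. A minimal sketch, assuming a hypothetical collection `mycollection` with uniqueKey `id` (the echo stands in for a real curl call so the sketch runs without a live Solr):

```shell
# Sketch: export a very large result set in pages using cursorMark.
# "mycollection" and the uniqueKey "id" are placeholder names.
SOLR_URL="http://localhost:8983/solr/mycollection/select"
ROWS=10000
CURSOR="*"                       # the initial cursorMark is always "*"
# cursorMark requires a sort that includes the uniqueKey as a tie-breaker.
QUERY="q=*:*&wt=csv&rows=${ROWS}&sort=id+asc"
# Each response returns a nextCursorMark; loop, feeding it back as
# cursorMark, until it stops changing between requests.
echo "curl -s \"${SOLR_URL}?${QUERY}&cursorMark=${CURSOR}\""
```

If all requested fields have docValues, the `/export` handler is another option for full dumps, since it streams the whole result set in one request.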
la:294)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:158)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
Query Parser
defType=edismax
On Wed, Jan 17, 2018 at 4:47 PM, Selvam Raman wrote:
> Hi,
>
> solr version 6.4.2
>
> hl.method=unified, hl.bs.type=WORD: this setting works fine for normal
> queries but fails for wildcard queries (I tried other hl.bs.type params
for both normal queries and
wildcard queries.
Why is the unified highlighter not working, while the original/default
highlighter works fine for wildcard queries?
Any suggestion would be appreciated.
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
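For reference, a request exercising the unified highlighter against a wildcard query might look like the sketch below. The collection `mycol` and field `title_en` are placeholder names; `hl.highlightMultiTerm` is the parameter that controls highlighting of wildcard/multi-term matches (it defaults to true, but is worth confirming it has not been disabled in the request handler on 6.4.2):

```shell
# Sketch: unified-highlighter request with a wildcard query.
# "mycol" and "title_en" are placeholder names.
SOLR_URL="http://localhost:8983/solr/mycol/select"
PARAMS="q=title_en:chi*&hl=true&hl.fl=title_en"
PARAMS="${PARAMS}&hl.method=unified&hl.bs.type=WORD"
PARAMS="${PARAMS}&hl.highlightMultiTerm=true"   # controls wildcard-term highlighting
echo "curl \"${SOLR_URL}?${PARAMS}\""
```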
Hi,
Solr version - 6.4
Parser - Edismax
Leading wildcard search is allowed in edismax.
1) How can I disable leading wildcard search?
2) Why does leading wildcard search take so long to return a response?
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
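On question 1: there is no stock edismax parameter to switch leading wildcards off, so a pragmatic option is to reject them client-side before the query reaches Solr. A sketch, with a hypothetical query string:

```shell
# Sketch: detect a leading wildcard in a user query before sending it to Solr.
user_query='*ample'   # hypothetical user input
case "$user_query" in
  \**|\?*) allowed=no ;;   # starts with * or ? -> reject
  *)       allowed=yes ;;
esac
echo "leading wildcard in '${user_query}' allowed: ${allowed}"
```

On question 2: a leading wildcard is slow because Solr cannot use the term index prefix and must scan the whole term dictionary for the field. The usual fix is ReversedWildcardFilterFactory in the index analyzer, which also indexes reversed tokens so that a leading wildcard can be rewritten into a fast trailing one.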
I am using edismax query parser.
On Fri, Dec 15, 2017 at 10:37 AM, Selvam Raman wrote:
> Solr version - 6.4.0
>
> "title_en":["Chip-seq"]
>
> When I fired queries like the ones below
>
> 1) chip-seq
> 2) chi*
>
it gives the expected result; for this
Hi Steve,
I have raised the JIRA ticket SOLR-11764
<https://issues.apache.org/jira/browse/SOLR-11764>.
I am happy to work with you to solve this problem.
Thanks,
selvam R
On Thu, Dec 7, 2017 at 2:48 PM, Steve Rowe wrote:
> Hi Selvam,
>
> This sounds like it may be a bug - c
creates two terms rather than a single
term.
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
:
> This looks like you're using "pre analyzed fields" which have a very
> specific format. PreAnalyzedFields are actually pretty rarely used,
> did you enable them by mistake?
>
> On Tue, Dec 5, 2017 at 11:37 PM, Selvam Raman wrote:
> > When i look at the
) then put it into
solrquery.
On Wed, Dec 6, 2017 at 11:22 AM, Selvam Raman wrote:
> When I fire a query, it returns the doc as expected. (Example:
> q=synthesis)
>
> I am facing a problem when I include a wildcard character in the query.
> (Exam
/Metadata2:
org.apache.solr.client.solrj.SolrServerException:
No live SolrServers available to handle this
request:[/solr/Metadata2_shard1_replica1,
solr/Metadata2_shard2_replica2,
solr/Metadata2_shard1_replica2]
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
Hi All,
Which is the best tool for Solr performance testing? I want to identify how
much load my Solr could handle and how many concurrent users can query
Solr.
Please suggest.
--
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
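Common choices for this kind of test are JMeter, Gatling, and SolrMeter; for a quick first measurement, Apache Bench (`ab`) against a representative query is often enough. A sketch with placeholder host, collection, and request counts (the echo shows the command rather than running it):

```shell
# Sketch: quick Solr load test with Apache Bench.
# -n = total requests, -c = concurrent clients; tune both to expected traffic.
N_REQUESTS=1000
CONCURRENCY=50
URL="http://localhost:8983/solr/mycol/select?q=*:*&rows=10"
echo "ab -n ${N_REQUESTS} -c ${CONCURRENCY} \"${URL}\""
```

For realistic numbers, replay a sample of real production queries rather than a single repeated one, since caching makes repeated identical queries look far cheaper than they are.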
approach to handle this problem?
Thanks,
selvam R
wsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 26 September 2016 at 17:36, Selvam wrote:
> > Hi All,
> >
> > We use DataImportHandler to import data from Redshift. We want to
> overwrite
> > some 250M existing re
build those values again.
I learned about Transformers, but I am not sure whether it is possible to get
the old document's value during that process. Any help would be appreciated.
--
Regards,
Selvam
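One way to change some fields of existing documents without rebuilding the rest is Solr's atomic updates, which read the stored copy of the document and rewrite it with only the listed fields changed (this requires an `<updateLog/>` and all fields stored or docValues-backed). A sketch with placeholder collection, id, and field names:

```shell
# Sketch: atomic update that overwrites one field on an existing document.
# "mycol", "doc1", and "price" are placeholder names; other stored fields
# on the document are preserved by Solr.
UPDATE_JSON='[{"id":"doc1","price":{"set":99}}]'
echo "curl -X POST -H 'Content-Type: application/json' --data '${UPDATE_JSON}' 'http://localhost:8983/solr/mycol/update?commit=true'"
```

Besides `set`, the atomic-update syntax supports `add`, `remove`, and `inc` modifiers for multivalued and numeric fields.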
ire operation is handled
> > by Lucene, using its forceMerge process.
> >
> > Thanks,
> > Shawn
> >
>
--
Regards,
Selvam
KnackForge <http://knackforge.com>
Regards,
Selvam
Hi,
As a note, we also need all 350 fields to be stored and indexed.
On Thu, Jun 2, 2016 at 12:58 PM, Selvam wrote:
> Hello all,
>
> We need to run a heavy Solr setup with 300 million documents, with each
> document having around 350 fields. The average length of the fields will be
(for each
node/shards?). We are using Solr 5.5 and want the best performance.
We are new to SolrCloud; I would like to request your input on how many
nodes/shards we need and how many servers for the best performance. We
primarily use geo-spatial search.
--
Regards,
Selvam
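Sizing ultimately has to come from measurement (index size per shard, query latency under load), but the SolrCloud Collections API makes it cheap to experiment with layouts. A sketch creating a collection with hypothetical shard and replica counts:

```shell
# Sketch: create a SolrCloud collection; 4 shards x 2 replicas are
# illustrative numbers, not a recommendation for this workload.
NUM_SHARDS=4
REPLICATION_FACTOR=2
echo "curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycol&numShards=${NUM_SHARDS}&replicationFactor=${REPLICATION_FACTOR}'"
```

A common rule of thumb is to index a representative sample, measure per-document index size and query latency, and extrapolate to pick the shard count; replicas then add query throughput and fault tolerance.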
Hi All,
I have written a blog post covering these nested merge expressions; see
http://knackforge.com/blog/selvam/solr-streaming-expressions for more
details.
Thanks.
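For readers of the archive, a nested merge of the kind discussed in this thread might look like the sketch below: two search streams, each with its own query, merged on the sort key. The `gettingstarted` collection follows the earlier examples in the thread; the field names and queries are illustrative:

```shell
# Sketch: merge two search streams ordered by id; each inner search can
# carry its own query. Field and query values are placeholders.
EXPR='merge(
  search(gettingstarted,q="country:India AND age:[31 TO *]",fl="id",sort="id asc"),
  search(gettingstarted,q="country:England",fl="id",sort="id asc"),
  on="id asc")'
echo "curl --data-urlencode 'stream=${EXPR}' http://localhost:8983/solr/gettingstarted/stream"
```

Per-subquery row limits are the harder part; depending on the Solr version, wrapping each inner stream in a limiting decorator may be needed, which is worth checking against the streaming-expression docs for the release in use.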
On Mon, Aug 10, 2015 at 3:51 PM, Selvam wrote:
> Hi,
>
> Thanks, that seems to be working!
>
> On Sat, Aug 8, 2015 a
.com/
>
> On Sat, Aug 8, 2015 at 8:08 AM, Selvam wrote:
>
> > Hi,
> >
> > I needed to run multiple subqueries, each with its own row limit.
> >
> > For eg: to get 30 users from country India with age greater than 30 and
> 50
> > users from Engla
ase?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Sat, Aug 8, 2015 at 7:36 AM, Selvam wrote:
>
> > Hi,
> >
> > Thanks, good to know. In fact my requirement needs to merge multiple
> > expressions, while the current streaming expressions support only
> report them so we can get all the error messages covered.
>
> Thanks,
>
> Joel
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Aug 7, 2015 at 6:19 AM, Selvam wrote:
>
> > Hi,
> >
> > Sorry, it is working now.
> >
> >
Hi,
Sorry, it is working now.
curl --data-urlencode
'stream=search(gettingstarted,q="*:*",fl="id",sort="id asc")'
http://localhost:8983/solr/gettingstarted/stream
I missed *'asc'* in sort :)
Thanks for the help Shawn Heisey.
On Fri, Aug 7, 201
: java.lang.ArrayIndexOutOfBoundsException: 1"
I tried a different port, 9983, as well, which returns "Empty reply from
server". I think I am missing some obvious configuration.
On Fri, Aug 7, 2015 at 2:04 PM, Shawn Heisey wrote:
> On 8/7/2015 1:37 AM, Selvam wrote:
> > https:/
olr/gettingstarted/stream
It throws me "java.lang.ArrayIndexOutOfBoundsException: 1\n\tat
org.apache.solr.client.solrj.io.stream.CloudSolrStream.parseComp(CloudSolrStream.java:260)"
Kindly let me know if you have worked with Streaming API and a way to fix
this issue.
--
Regards,
Selvam
KnackForge <http://knackforge.com>
Hi All,
I think Solr 5.1+ supports a streaming API that can be used for my need,
though it is not working for me right now. I will send another email about
that.
On Thu, Aug 6, 2015 at 3:08 PM, Selvam wrote:
> Dear Toke,
>
> Thanks for your input. Infact my scenario is much more com
first subquery count to 60 while the second one to 40. Yes, I need to give a
different limit for each subquery. Could you suggest a way to do this?
Thanks again.
On Thu, Aug 6, 2015 at 2:55 PM, Toke Eskildsen
wrote:
> On Thu, 2015-08-06 at 12:32 +0530, Selvam wrote:
> > Good day, I wanted to
single Solr query?
Should I write a custom search request handler to implement this? Any
pointers would be really helpful.
--
Regards,
Selvam
-hand-side
>
> Best
> Erick
>
> On Tue, Jan 15, 2013 at 11:27 AM, Selvam wrote:
> > Thanks Erick, can you tell me how to do the appending
> > (lowercaseversion:LowerCaseVersion) before indexing? I tried pattern
> > factory filters, but I could not get it right.
>
displays the right-hand-side of the returned token.
>
> Simple solution, not very elegant, but sometimes the easiest...
>
> Best
> Erick
>
>
> On Fri, Jan 11, 2013 at 1:30 AM, Selvam wrote:
>
> > Hi*,
> >
> > *
> > I have been trying to figure out
Hi,
This should be a trivial question, but I am still failing to get the details. I
have 2 cores plus the default collection:
*collection1:*
article_id
title
content
*core0:*
cluster_id
cluster_name
cluster_count
*core1:*
article_id
article_cluster_id
score
Given an article_id, I want to return top 10 ( based
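The message is cut off above, but for the two-core layout it describes, Solr's cross-core join query parser is the usual tool. A sketch, using the field names from the message and a hypothetical article id (note that `{!join fromIndex=...}` requires the cores to be co-located on one node in non-cloud setups):

```shell
# Sketch: given an article_id, find its clusters by joining core1's
# article_cluster_id onto core0's cluster_id. ARTICLE_ID is illustrative.
ARTICLE_ID=123
Q="{!join fromIndex=core1 from=article_cluster_id to=cluster_id}article_id:${ARTICLE_ID}"
echo "curl 'http://localhost:8983/solr/core0/select' --data-urlencode \"q=${Q}\" -d rows=10"
```

Ranking the joined clusters by score is a separate concern; the plain join parser does not carry scores across cores, so a scoring join or a two-step client-side query may be needed.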
Hi all,
I am trying to write a custom document clustering component that should
take all the docs in a commit and cluster them; Solr version: 3.5.0
Main Class:
public class KMeansClusteringEngine extends DocumentClusteringEngine
implements SolrEventListener
I added a newSearcher event listener, that