Hi,
Any help or pointers on this issue?
Thanks,
On Wed, Feb 24, 2016 at 12:44 PM, Debraj Manna
wrote:
> Hi,
>
> I am using SolrJ 5.1 to add & delete docs from Solr. Whenever there
> is some exception while doing addition or deletion, Solr throws a
> SolrServerException with the error m
One way I see is:
store a display snippet in a separate field and fetch that instead.
Please let me know if you see any other ways, or issues with the approach.
Regards,
Anil
On 25 February 2016 at 11:30, Anil wrote:
> Hi,
>
> We are indexing and storing 2 MB of text data in a text field. We need t
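The snippet-field approach above can be sketched at index time; a minimal Python illustration (the field names `content` and `content_snippet` and the 200-character cutoff are assumptions, not anything from the thread):

```python
# Build the document with a separate display snippet so the UI can fetch
# the small field instead of the full 2 MB text field.
SNIPPET_LEN = 200  # characters to show in the UI; an assumed value

def with_snippet(doc, source_field="content", snippet_field="content_snippet"):
    text = doc.get(source_field, "")
    snippet = text[:SNIPPET_LEN]
    # Avoid cutting a word in half at the boundary.
    if len(text) > SNIPPET_LEN and " " in snippet:
        snippet = snippet.rsplit(" ", 1)[0]
    return {**doc, snippet_field: snippet}

doc = {"id": "1", "content": "some very long body " * 1000}
indexed = with_snippet(doc)
```

At query time the UI would then request fl=id,content_snippet rather than the full field.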
Can someone share their ideas?
On 24 February 2016 at 08:14, Anil wrote:
> Yes Yonik. I could not find numBuckets true/false in the Solr reference
> documentation, nor in the SolrJ facet params.
>
> Could you please point me to documentation ? Thank you.
>
> On 23 February 2016 at 20:53, Yonik Seel
Hi,
We are indexing and storing 2 MB of text data in a text field. We need to
display partial content of the field in the UI every time this
document is part of the response.
Is there any way to get only a few characters of the field? Fetching 2 MB of
data and truncating it in the application is a little
Sorry Jack for the confusion.
I have a field which holds free text. The text can contain a path, an IP, or
any free text.
I would like to tokenize the text of the field using whitespace. If a
token matches a path or IP pattern, it has to be tokenized in the
path-hierarchy way.
Regards,
Anil
On 24 February 2
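The behavior described above could be approximated outside Solr; a minimal Python sketch (the path/IP patterns here are simplified assumptions, not Solr's actual tokenizer behavior):

```python
import re

# Whitespace-tokenize, then expand path-like or IP-like tokens into
# hierarchy tokens, mimicking PathHierarchyTokenizerFactory.
PATH_OR_IP = re.compile(r"^(/[^/\s]+)+/?$|^\d{1,3}(\.\d{1,3}){3}$")

def tokenize(text):
    tokens = []
    for tok in text.split():
        if PATH_OR_IP.match(tok):
            sep = "/" if "/" in tok else "."
            parts = [p for p in tok.split(sep) if p]
            # Emit each prefix of the hierarchy: /a, /a/b, /a/b/c ...
            for i in range(1, len(parts) + 1):
                prefix = sep.join(parts[:i])
                tokens.append(sep + prefix if sep == "/" else prefix)
        else:
            tokens.append(tok)
    return tokens
```

For example, tokenize("see /var/log/solr now") keeps the plain words whole but emits the path as its chain of prefixes.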
Hi,
In Solr 5.5, all the shipped examples now use the dynamic schema. So, how
are they expected to add new types? We have an "add/delete fields" UI in
the new Admin UI, but not "add/delete types".
Do we expect them to use REST endpoints and curl? Or not to modify
types at all? Or edit the "do not edit"
Hi Alvaro,
We had thought about this, but our requirement is dynamic.
The 4 fields to pivot on change depending on the requirement.
So, this will need to be handled at query time.
Just considering the Endeca equivalent, it looks easy there.
If this feature is not available on Solr, would
Very well explained, thanks Davis, Daniel, really thanks. I read your email
thoroughly and enjoyed it while I was reading. Though at some points my
thoughts parted from your view, I still got a good perception of how you
see the architectural world of the Lucene ecosystem. I will try to
On 2/24/2016 9:58 AM, Lokesh Chhaparwal wrote:
> Can someone please comment on this exception trace? We are seeing it while
> using distributed search with the shards parameter (Solr master-slave).
The line of code where the NPE happened (from the 4.7.2 source) is in
XMLWriter.java, at line 190:
for (String
https://issues.apache.org/jira/browse/SOLR-8734 created for follow-up.
- Original Message -
From: solr-user@lucene.apache.org
To: solr-user@lucene.apache.org
At: Feb 24 2016 22:41:14
Hi Markus - thank you for the question.
Could you advise whether the solrconfig.xml has a element (for
which deprecation warnings would appear separately) or whether the
solrconfig.xml has no element?
If either is the case then yes, based on the code (SolrIndexConfig.java#L153),
the warnings would
Look at the core admin API in Solr, the MERGEINDEXES action...
On Feb 22, 2016 17:41, "jeba earnest" wrote:
> My requirement is to add the index folder to the solr data directory. I am
> generating a lucene index by mapreduce program. And later I would like to
> merge the index with the solr inde
Hi,
We've got quite a few unit tests that inherit the abstract distributed test
class (I haven't got the FQCN around). On Solr 5.4.x we had a lot of issues
with connection resets, which I assumed, judging from resolved tickets, had
been resolved with 5.5.0. Did I miss something? Can someone point me to
I see your point. I didn't realize that you are using Windows. If it works
using double quotes, please go ahead and launch that way.
Thanks,
Susheel
On Wed, Feb 24, 2016 at 12:44 PM, bbarani wrote:
> It's still throwing an error without quotes.
>
> solr start -e cloud -noprompt -z
> localhost:2181,lo
On 2/24/2016 11:19 AM, Michael Beccaria wrote:
> We're running Solr 4.4.0 in this software
> (https://github.com/CDRH/nebnews - Django based newspaper site). Solr is
> running on Ubuntu 12.04 in Jetty. The site occasionally (once a day) goes
> down with a Connection Refused error. I’m ha
We're running Solr 4.4.0 in this software
(https://github.com/CDRH/nebnews - Django based newspaper site). Solr is
running on Ubuntu 12.04 in Jetty. The site occasionally (once a day) goes down
with a Connection Refused error. I’m having a hard time troubleshooting the
issue and was loo
Thanks again Jeff. I will check the documentation for join queries because I
have never used them before.
Regards
Roland
2016-02-24 19:07 GMT+01:00 Jeff Wartes :
>
> I suspect your problem is the intersection of “very large document” and
> “high rate of change”. Either of those alone would be fine.
>
>
I suspect your problem is the intersection of “very large document” and “high
rate of change”. Either of those alone would be fine.
You’re correct, if the thing you need to search or sort by is the thing with a
high change rate, you probably aren’t going to be able to peel those things out
of
On Wed, Feb 24, 2016 at 12:51 PM, Shawn Heisey wrote:
> On 2/24/2016 9:09 AM, Mike Thomsen wrote:
>> Yeah, it was a problem on my end. Not just the content-type as you
>> suggested, but I had to wrap that whole JSON body so it looked like this:
>>
>> {
>> "params": { ///That block pasted here
I would also point you at many of Mr. Underwood's blog posts, as they have
helped me quite a bit :)
http://techblog.chegg.com/2012/12/12/measuring-search-relevance-with-mrr/
On Wed, Feb 24, 2016 at 11:37 AM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> For relevance, I would als
On 2/24/2016 9:09 AM, Mike Thomsen wrote:
> Yeah, it was a problem on my end. Not just the content-type as you
> suggested, but I had to wrap that whole JSON body so it looked like this:
>
> {
> "params": { ///That block pasted here }
> }
I'm surprised you can get JSON to work at all. I would
It's still throwing an error without quotes.
solr start -e cloud -noprompt -z
localhost:2181,localhost:2182,localhost:2183
Invalid command-line option: localhost:2182
Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z
zkHost] [
-m memory] [-e example] [-s solr.solr.home] [-a "add
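For reference, quoting the zkHost list as a single argument keeps a shell that treats commas as argument separators (e.g. the Windows shell) from splitting it; this is the double-quote workaround discussed in this thread, with the hosts taken from the message above:

```
solr start -e cloud -noprompt -z "localhost:2181,localhost:2182,localhost:2183"
```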
Great! Thanks!
-Original message-
> From:Yonik Seeley
> Sent: Wednesday 24th February 2016 18:04
> To: solr-user@lucene.apache.org
> Subject: Re: /select changes between 4 and 5
>
> On Wed, Feb 24, 2016 at 11:21 AM, Markus Jelsma
> wrote:
> > Re: POST in general still works for quer
Hi - I see lots of:
o.a.s.c.Config Beginning with Solr 5.5, is deprecated, configure
it on the relevant instead.
on my development machine, for all cores. None of the cores has either
parameter configured. Is this expected?
Thanks,
Markus
On Wed, Feb 24, 2016 at 11:21 AM, Markus Jelsma
wrote:
> Re: POST in general still works for queries... I just verified it:
>
> This is not supposed to change, I hope? We rely on POST for some huge
> automated queries. Instead of constantly increasing the URL length limit, we
> rely on POST.
Yep, an
Hi,
Can someone please comment on this exception trace? We are seeing it while
using distributed search with the shards parameter (Solr master-slave).
Thanks,
Lokesh
On Wed, Feb 17, 2016 at 5:33 PM, Lokesh Chhaparwal
wrote:
> Hi,
>
> We are facing NPE while using distributed search (Solr version 4.7.2)
> (u
Hi list!
Does SolrJ already wrap the new JSON Facet API? I couldn't find any info
about this.
If not, what's the best way for a Java client to build and send requests
when you want to use the JSON Facets?
On a side note, since the JSON Facet API uses POST, I will not be able to
see the requested
For relevance, I would also look at retention metrics. Harder to tie back
to a specific search. But what happens after the conversion? Did they
purchase the product and hate it? Or did they come back for more? Retention
metrics say a lot about the whole experience. But for many search-heavy
applica
Your statement makes no sense. Please clarify. Express your requirement(s)
in plain English first before dragging in possible solutions. Technically,
path elements can have embedded spaces.
-- Jack Krupansky
On Wed, Feb 24, 2016 at 6:53 AM, Anil wrote:
> HI,
>
> i need to use both WhitespaceTok
Binoy, 0.1 is still a positive boost. With title getting the highest weight,
this won't make any difference. I've tried this as well.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Query-time-de-boost-tp4259309p4259552.html
Sent from the Solr - User mailing list archive at
Hi Emir,
I have a bunch of contentgroup values, so boosting them individually is
cumbersome. I have boosts on the query fields
qf=text^6 title^15 IndexTerm^8
and
bq=Source:simplecontent^10 Source:Help^20
(-ContentGroup-local:("Developer"))^99
I was hoping *(-ContentGroup-local:("Developer"))^9
Re: POST in general still works for queries... I just verified it:
This is not supposed to change, I hope? We rely on POST for some huge
automated queries. Instead of constantly increasing the URL length limit, we
rely on POST.
Regards,
Markus
-Original message-
> From:Yonik Seeley
> Sent:
Click through rate (CTR) is fundamental. That is easy to understand and
integrates well with other business metrics like conversion. CTR is at least
one click anywhere in the result set (first page, second page, …). Count
multiple clicks as a single success. The metric is, “at least one click”.
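As a worked example of the "at least one click" rule above, a small Python sketch (the session data is made up for illustration):

```python
# CTR counted per result set: a session with any click is one success;
# multiple clicks in the same result set still count only once.
sessions = [
    {"query": "solr faceting", "clicks": 3},   # success (counted once)
    {"query": "zk acl",        "clicks": 0},   # no click
    {"query": "json api",      "clicks": 1},   # success
    {"query": "dih merge",     "clicks": 0},   # no click
]

successes = sum(1 for s in sessions if s["clicks"] >= 1)
ctr = successes / len(sessions)
print(ctr)  # 2 successes out of 4 result sets -> 0.5
```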
Yeah, it was a problem on my end. Not just the content-type as you
suggested, but I had to wrap that whole JSON body so it looked like this:
{
"params": { ///That block pasted here }
}
On Wed, Feb 24, 2016 at 11:05 AM, Yonik Seeley wrote:
> POST in general still works for queries... I just
POST in general still works for queries... I just verified it:
curl -XPOST "http://localhost:8983/solr/techproducts/select" -d "q=*:*"
Maybe it's your content-type (since it seems like you are posting from
Python)... Were you using some sort of custom code that could
read/accept other content types?
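Putting the two points from this thread together (wrap the parameters in a top-level "params" object, and send an explicit JSON content type), a minimal sketch using only the Python standard library; the URL and query values are placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

# Old flat body, wrapped under "params" as required when POSTing JSON
# to /select in Solr 5.x.
body = json.dumps({
    "params": {
        "q": "*:*",
        "fl": ["id", "title"],
        "rows": 10,
        "wt": "json",
    }
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8983/solr/techproducts/select",
    data=body,
    # Explicit content type avoids the content-type guessing issue above.
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would execute it against a running Solr.
```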
I've wondered about this as well. Recall that the proper architecture for
Solr, as well as ZooKeeper, is as a back-end service, part of a tiered
architecture, with web application servers in front. Solr and other search
engines should fit in at the same layer as RDBMS and NoSQL, with the web
On 2/23/2016 11:10 PM, Neeraj Bhatt wrote:
> Hello
>
> We have a SolrCloud with stored and indexed data of around 25 lakh (2.5
> million) documents. We recently moved to Solr 5.4.1 but are unable to move
> our indexed data. What approach should we follow?
>
> 1. Does the data import handler work in SolrCloud? What should we
With 4.10, we used to post JSON like this example (part of it is Python) to
/select:
{
"q": "LONG_QUERY_HERE",
"fq": fq,
"fl": ["id", "title", "date_of_information", "link", "search_text"],
"rows": 100,
"wt": "json",
"indent": "true",
"_": int(time.time())
}
We just up
I have checked it already in the ref. guide. It states that you cannot
search on external fields:
https://cwiki.apache.org/confluence/display/solr/Working+with+External+Files+and+Processes
Really, I am very curious whether my problem is unusual, or whether
Solr mainly focuses on sea
Hi,
I need to use both WhitespaceTokenizerFactory and
PathHierarchyTokenizerFactory for a use case.
Solr supports only one tokenizer. Is there any way we can achieve the
PathHierarchyTokenizerFactory functionality with filters?
Please advise.
Regards,
Anil
Depending on what features you actually need, it might be worth a look
at "External File Fields", Roland?
-Stefan
On Wed, Feb 24, 2016 at 12:24 PM, Szűcs Roland
wrote:
> Thanks Jeff your help,
>
> Can it work in production environment? Imagine when my customer initiate a
> query having 1 000 docs
Thanks Jeff for your help,
Can it work in a production environment? Imagine my customer initiates a
query having 1,000 docs in the result set. I cannot use the pagination of
Solr as the field which is the basis of the sort is not included in the
schema, for example the price. The customer wants the
Hi all,
I am indexing payloads in Solr as advised in
https://lucidworks.com/blog/2014/06/13/end-to-end-payload-example-in-solr/
and I am also able to search for them.
What I want to do now is to get the payload within my Solr custom function
to do some calculation; however, I can see just methods t
Hi Bill,
You can take a look at Sematext's search analytics
(https://sematext.com/search-analytics). It provides some of the metrics you
mentioned, plus some additional (top queries, CTR, click stats, paging
stats etc.). In combination with Sematext's performance metrics
(https://sematext.com/spm)
If you were to apply a boost of less than 1, something like 0.1, that
would reduce the score of the docs you want to de-boost.
On Wed, 24 Feb 2016, 15:17 Emir Arnautovic
wrote:
> Hi Shamik,
> Is boosting others acceptable option to you, e.g.
> ContentGroup:"NonDeveloper"^100.
> Which query pa
Hi Shamik,
Is boosting the others an acceptable option for you, e.g.
ContentGroup:"NonDeveloper"^100?
Which query parser do you use?
Regards,
Emir
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
On 23.02.2016 23:42, Shami
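For reference, when a direct de-boost is not available, the usual workaround is to boost everything except the unwanted group; a sketch of such a clause, using the field and value from this thread (the weight is an arbitrary assumption):

```
bq=(*:* -ContentGroup-local:"Developer")^99
```

This rewards all documents outside the "Developer" group instead of trying to apply a negative boost.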
Hi kshitij,
We are using the following configuration and it is working fine:
<entity processor="SolrEntityProcessor" url="http://11.11.11.11:8983/solr/classify" query="*:*"
fl="id,title,content,segment," wt="javabin" />
Please give processor="SolrEntityProcessor" and also give fl
(the fields which you want to be saved in your new instance).
Hi,
I am using the following tag
I am able to connect but indexing is not working. Both my Solr instances have
the same version.
On Wed, Feb 24, 2016 at 12:48 PM, Neeraj Bhatt
wrote:
> Hi
>
> Can you give the details of your data import tag in
> db-data-config.xml?
> Also, do your previous and new Solr have differe
Hi everyone,
I really need your help, please read below.
If we have to run Solr in cloud mode, we are going to use ZooKeeper. Now,
any ZooKeeper client can connect to the ZooKeeper server. ZooKeeper has a
facility to protect znodes; however, anyone can see a znode's ACL, and the
password could be encrypte