Dear all Solr users/developers,
Hi,
I am going to use a Solr JSON facet range on a date field which is stored as
long millis. Unfortunately I get a Java heap space exception no matter how
much memory I assign to the Solr Java heap! I already tested this with a 2g heap
for a Solr core with only 50k documents! I
facet=true&json.facet={result: {
type: range,
field: stat_date,
start: 146027158386,
end: 1460271583864,
gap: 1
}}
Sincerely,
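A note on the request above: with start and end expressed in epoch milliseconds, gap: 1 asks Solr to materialize on the order of 10^12 range buckets, which will exhaust any reasonable heap regardless of document count (note also that start has 11 digits while end has 13, which looks like a truncated value). A hedged sketch of the same facet with a coarser gap; the gap value here is illustrative:

```
json.facet={
  result: {
    type: range,
    field: stat_date,
    start: 1460271583864,
    end: 1460357983864,
    gap: 86400000   /* one day in millis instead of 1 */
  }
}
```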
On Sun, Apr 10, 2016 at 4:56 PM, Yonik Seeley wrote:
> On Sun, Apr 10, 2016 at 3:47 AM, Ali Nazemian
> wrote:
> > Dear all Solr users/develo
Dear all,
Hi,
I was wondering, is there any way to introduce and register a function for the
facet gap parameter? I already know there are Date Math expressions that can be
used (such as DAY, MONTH, etc.). I want to add some functions of my own and use
them as the gap in a range facet; is that possible?
Sincerely,
Ali.
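For reference, a classic range facet already accepts Date Math gaps on a date field (parameter values below are illustrative; note that + must be URL-encoded as %2B):

```
facet=true
&facet.range=stat_date
&facet.range.start=NOW/DAY-30DAYS
&facet.range.end=NOW/DAY
&facet.range.gap=%2B1DAY
```

As far as I know, Date Math gaps only support the built-in units, so plugging in a custom function would mean extending the range-facet gap parsing itself.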
Dear Solr Users/Developers,
Hi,
I was wondering what the correct query syntax is for searching for a sequence of
terms with a blank character in the middle of the sequence. Suppose I am looking
for a query syntax using the fq parameter. For example, suppose I want to
search for all documents having "hello wor
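For what it's worth, the usual ways to match a term sequence containing a space in an fq are to quote the phrase or escape the space (the field name below is illustrative; in a URL the quotes and spaces must also be percent-encoded):

```
fq=title:"hello world"
fq=title:hello\ world
```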
Dear all,
Hi,
I was wondering, is it possible to re-index Solr 6.0 data in the case of
stored=false? I am using Solr as a secondary datastore, and for the sake of
space efficiency all the fields (except id) are set to stored=false.
Currently, due to some changes in the application business, the Solr schem
construct the original stream, which very possibly would take
> up as much space as if you'd just stored the values anyway. _And_ it
> would burden everyone else who didn't want to do this with a bloated
> index.
>
> Best,
> Erick
>
> On Sun, May 8, 2016 at 4:25 AM, A
Dears,
Hi,
I have a collection of 1.6m documents in Solr 5.2.1. When I facet on the
content field, this error appears after around 30 seconds of trying to
return the results:
null:org.apache.solr.common.SolrException: Exception during facet.field: content
at org.apache.solr.request.SimpleFa
I tried facet.method=enum and it works fine. Did you mean that
applying a facet on an analyzed field is actually wrong?
Best regards.
On Mon, Jul 20, 2015 at 8:07 PM, Toke Eskildsen
wrote:
> Ali Nazemian wrote:
> > I have a collection of 1.6m documents in Solr 5.2.1.
> > [...
you're reporting, so I
> clearly don't
> understand something about your setup. Are you _sure_?
>
> Best,
> Erick
>
> On Mon, Jul 20, 2015 at 9:32 PM, Ali Nazemian
> wrote:
> > Dear Toke and Davidphilip,
> > Hi,
> > The fieldtype text_fa has some custom
st regards.
On Tue, Jul 21, 2015 at 10:00 AM, Ali Nazemian
wrote:
> Dear Erick,
>
> Actually faceting on this field is not a user-facing feature. I did
> that only to test the customized normalizer and charfilter
> that I used. Therefore it is just used for the pur
Dears,
Hi,
I know that there are lots of tips about how to make Solr indexing
faster. Probably the most important client-side ones are batch
indexing and multi-threaded indexing. There
are other important server-side factors which I don't want to
m
Dear Yonik,
Hi,
Thank you very much for your response.
Best regards.
On Tue, Jul 21, 2015 at 5:42 PM, Yonik Seeley wrote:
> On Tue, Jul 21, 2015 at 3:09 AM, Ali Nazemian
> wrote:
> > Dear Erick,
> > I found another thing, I did check the number of unique terms for this
> > fi
> > the filterCache to create a bitset for each unique term. Which is totally
> > incompatible with the uninverted field error you're reporting, so I
> > clearly don't
> > understand something about your setup. Are you _sure_?
> >
> > Best,
> > E
Hi,
I am going to implement a SearchComponent for Solr that returns a document's main
keywords using the MoreLikeThis interesting terms. The main part of the
implemented component, which uses mlt.retrieveInterestingTerms by Lucene
docID, does not work for all of the documents. I mean, for some of the
doc
I was wondering how I can satisfy this query requirement in Solr 5.2.1:
I have two different Solr cores, referred to as "core1" and "core2". core1 has
some fields such as field1, field2 and field3, and core2 has some other
fields such as field1, field4 and field5. I am looking for a Solr query which
can r
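A sketch of the standard cross-core join, assuming both cores live in the same Solr instance and share field1 as the join key (the field4 value is illustrative):

```
/solr/core1/select?q=*:*
  &fq={!join from=field1 to=field1 fromIndex=core2}field4:bar
```

Note that {!join} only filters: the response still contains fields from core1 only, so it cannot by itself return fields from both cores.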
Dear Mikhail,
Hi,
I want to enrich the result.
Regards
On Oct 6, 2015 7:07 PM, "Mikhail Khludnev"
wrote:
> Hello,
>
> Why do you need sibling core fields? do you facet? or just want to enrich
> result page with them?
>
> On Tue, Oct 6, 2015 at 6:04 PM, Ali Na
wrote:
> thus, something like [child]
>
> https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents
> can be developed.
>
> On Tue, Oct 6, 2015 at 6:45 PM, Ali Nazemian
> wrote:
>
> > Dear Mikhail,
> > Hi,
> > I want to enric
SolrDocument or IndexableField is from? Seems we'd have to add an
> > attribute for that.
> >
> > The other possibly simpler thing to do is execute the join at index time
> > with an update processor.
> >
> > Ryan
> >
> > On Tuesday, October 6, 2015
el
>
> On Sun, Oct 11, 2015 at 10:01 AM, Ali Nazemian
> wrote:
>
> > Dear Susheel,
> > Hi,
> >
> > I did check the jira issue that you mentioned but it seems its target is
> > Solr 6! Am I correct? The patch failed for Solr 5.3 due to class not
> foun
rs.
On Mon, Oct 12, 2015 at 12:29 PM, Ali Nazemian
wrote:
> Thank you very much.
>
> Sincerely yours.
>
> On Mon, Oct 12, 2015 at 6:15 AM, Susheel Kumar
> wrote:
>
>> Yes, Ali. These are targeted for Solr 6 but you have the option download
>> source from trunk,
Dear Midas,
Hi,
AFAIK, Solr's index files are memory-mapped, so they rely on OS virtual memory.
Therefore using 36GB out of 48GB of RAM for the Java heap is not recommended. As
a rule of thumb, do not assign more than 25% of your total memory to the Solr
JVM in usual situations.
About your main question, setting softcommi
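For context, the usual starting point for commit tuning is something like the following in solrconfig.xml (the values are illustrative, not a recommendation for any particular workload):

```xml
<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit every 60s: flushes the tlog -->
  <openSearcher>false</openSearcher> <!-- avoid the searcher-reopen cost here -->
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime>            <!-- soft commit every 5s: controls visibility -->
</autoSoftCommit>
```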
Hi,
I have had a problem with Solr 5.2.1 for a while now and I could not
fix it yet. The only thing that is clear to me is that when I send a bulk update
to Solr, the commit thread gets blocked! Here is the thread dump output:
"qtp595445781-8207" prio=10 tid=0x7f0bf68f5800 nid=0x5785 waiting f
--
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
> On 08.12.2015 08:16, Ali Nazemian wrote:
>
>> Hi,
>> There is a while since I have had problem with Solr 5.2.1 and I could not
>
analyzing part I think it would be acceptable).
- ConcurrentUpdateSolrClient is used in all the indexing/updating cases.
Regards.
On Tue, Dec 8, 2015 at 6:36 PM, Ali Nazemian wrote:
> Dear Emir,
> Hi,
> There are some cases that I have soft commit in my application. However,
> the bulk upd
I did that already. The situation was worse: the autocommit part makes Solr
unavailable.
On Dec 8, 2015 7:13 PM, "Emir Arnautovic"
wrote:
> Hi Ali,
> Can you try without explicit commits and see if threads will still be
> blocked.
>
> Thanks,
> Emir
>
> On 08
I would really appreciate it if somebody could help me solve this problem.
Regards.
On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian wrote:
> I did that already. The situation was worse. The autocommit part makes
> solr unavailable.
> On Dec 8, 2015 7:13 PM, "Emir Arnautovic"
> wrot
g bulk size?
> How many indexing threads?
>
> Thanks,
> Emir
>
>
> On 11.12.2015 10:06, Ali Nazemian wrote:
>
>> I really appreciate if somebody can help me to solve this problem.
>> Regards.
>>
>> On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian
>>
Dear all,
Hi,
I was wondering, is there any performance comparison available for different
Solr queries?
I mean, what is the cost of different Solr queries from the memory and CPU
points of view? I am looking for a report that could help me in the case of
having different alternatives for sending a single que
Hi,
I was wondering, is it possible to restrict the tfq() function query to a
specific subset of the collection? Suppose I want to count all occurrences of
the term "test" in documents with fq=category:2; how can I handle such a query
with the tfq() function query? It seems applying fq=category:2 in a "select" query
w
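One possibly relevant avenue, assuming Solr 5.1+ and that tfq() refers to a term-frequency function such as the stock termfreq(): the JSON Facet API can aggregate a function over exactly the documents matching q and fq, e.g. summing the term frequency over the filtered set (field and term are from the post, the rest is a sketch):

```
q=*:*
&fq=category:2
&json.facet={ total_tf : "sum(termfreq(content,'test'))" }
```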
Dear All,
Hi,
I wrote a custom UpdateProcessorFactory for the purpose of extracting
interesting terms at index time and putting them in a new field. Since I use
MLT interesting terms for this purpose, I have to check whether the added
document already exists in the index or not. If it was indexed before the
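A rough sketch of how such a processor might check for a prior copy of the document (class and field names are illustrative, error handling trimmed). Caveat: a plain searcher lookup only sees committed documents, so very recently added ones may be missed unless real-time get is used instead:

```java
import java.io.IOException;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.util.RefCounted;

public class KeywordExtractionProcessor extends UpdateRequestProcessor {
  private final SolrQueryRequest req;

  public KeywordExtractionProcessor(SolrQueryRequest req, UpdateRequestProcessor next) {
    super(next);
    this.req = req;
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    String id = cmd.getSolrInputDocument().getFieldValue("id").toString();
    RefCounted<SolrIndexSearcher> holder = req.getCore().getSearcher();
    try {
      boolean exists =
          holder.get().search(new TermQuery(new Term("id", id)), 1).totalHits > 0;
      // extract MLT interesting terms only for the appropriate case here
    } finally {
      holder.decref();  // always release the searcher reference
    }
    super.processAdd(cmd);
  }
}
```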
on previously indexed but not post-processed
> documents.
>
> Regards,
>Alex.
>
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 23 March 2015 at 15:07, Ali Nazemian wrote:
> > Dear All,
>
Dear all,
Hi,
I am looking for a way to filter a Lucene index with multiple conditions.
For this purpose I checked two different methods of filtered search; neither
of them worked for me:
Using BooleanQuery:
BooleanQuery query = new BooleanQuery();
String lower = "*";
String upper = "*";
I implemented a small piece of code for the purpose of extracting some keywords
out of a Lucene index. I implemented it using a SearchComponent. My problem is
that when I update the Lucene IndexWriter, the Solr index which is placed on
top of it is not affected. As you can see, I did the commit part.
Bool
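For context: Solr serves queries from a cached searcher, so a commit issued through a separate IndexWriter never tells Solr to reopen it (and two writers on one index will normally collide on the write lock anyway). If the index really must be modified externally, the changes only become visible after asking Solr itself to commit and reopen, e.g. (core name is illustrative):

```
curl 'http://localhost:8983/solr/collection1/update?commit=true&openSearcher=true'
```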
for
> updating the index, so it really doesn’t surprise me that you aren’t
> seeing updates.
>
> I’d suggest you describe the problem you are trying to solve before
> proposing solutions.
>
> Upayavira
>
>
> On Tue, Apr 7, 2015, at 01:32 PM, Ali Nazemian wrote:
> > I im
IndexWriter.
On Tue, Apr 7, 2015 at 6:13 PM, Ali Nazemian wrote:
> Dear Upayavira,
> Hi,
> It is just the part of my code that caused the problem. I know a
> SearchComponent is not for changing the index, but for the purpose of
> extracting document keywords I was forced to hack sear
Dear all,
Hi,
As a part of my code I have to update a Lucene document. For this purpose I
used the writer.updateDocument() method. My problem is that the update does
not affect the index until Solr is restarted. Would you please tell me what part
of my code is wrong, or what I should add in order to apply the
Dears,
Hi,
I have a strange problem with Solr 4.10.x. My problem is that when I search
on the Solr zero date, which is "0002-11-30T00:00:00Z", the results become
invalid if more than one filter is applied. For example, consider this
scenario:
When I search for a document with fq=p_date:"0002-11-30T00:0
sults, but the actual searching is
> done via the main query with the q parameter.
>
> -- Jack Krupansky
>
> On Tue, Apr 14, 2015 at 4:17 AM, Ali Nazemian
> wrote:
>
> > Dears,
> > Hi,
> > I have strange problem with Solr 4.10.x. My problem is when I do
> sea
s the best solution doesn't involve "Y" at all?
> See Also: http://www.perlmonks.org/index.pl?node_id=542341
>
>
>
>
> : Date: Thu, 9 Apr 2015 01:02:16 +0430
> : From: Ali Nazemian
> : Reply-To: solr-user@lucene.apache.org
> : To: "solr-user@lucene.ap
Dear all,
Hi,
I was wondering, is there any function query for converting the date format in
Solr? If not, how can I implement such a function query myself?
--
A.Nazemian
:
> I'm not sure what you're asking for, give us an example input/output pair?
>
> Best,
> Erick
>
> On Tue, Jun 9, 2015 at 8:47 AM, Ali Nazemian
> wrote:
> > Dear all,
> > Hi,
> > I was wondering is there any function query for converting date fo
bution.
> > It should take in input a date format and a field and give in response
> > the
> > new formatted Date.
> >
> > The would be simple to use it :
> >
> > fl=id,persian_date:dateFormat("/mm/dd",gregorian_Date)
> >
> > Th
Dear Lucene/Solr developers,
Hi,
I decided to develop a plugin for Solr in order to extract the main keywords
from an article. Since Solr has already done the hard work of calculating
tf-idf scores, I decided to use that for the sake of better performance. I
know that UpdateRequestProcessor is the best suit
Dear Solr users/developers,
Hi,
I have tried to implement the Page and Post relation in a single Solr schema.
In my use case each page has multiple posts. The Page and Post fields are as
follows:
Post:{post_content, owner_page_id, document_type}
Page:{page_id, document_type}
Suppose I want to query
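Assuming pages and their posts are indexed together as one block (children under the parent), the block-join parent parser can return pages whose posts match; the query value below is illustrative:

```
q={!parent which="document_type:Page"}post_content:hello
```

Without block indexing, {!parent} will not work; a {!join from=owner_page_id to=page_id} query is the flat-schema alternative.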
Hi everybody,
I was wondering which scenario (or a combination) would be better for my
application, from the aspect of performance, scalability and high
availability. Here is my application:
Suppose I am going to have more than 10m documents and it grows every day
(probably in 1 year it reach
ww.datastax.com/what-we-offer/products-services/
> datastax-enterprise
>
> -- Jack Krupansky
>
> -----Original Message- From: Ali Nazemian
> Sent: Monday, May 26, 2014 9:50 AM
> To: solr-user@lucene.apache.org
> Subject: Using SolrCloud with RDBMS or without
>
s in 10 hours, is that good
> enough?
> I don't know, it's your problem space after all ;). And is it acceptable
> to not
> see changes to the schema until tomorrow morning? If so, there's no need
> to get
> more complicated
>
> Best,
> Erick
>
> On
10m to 100m documents.
Regards.
On Mon, May 26, 2014 at 8:30 PM, Shawn Heisey wrote:
> On 5/26/2014 7:50 AM, Ali Nazemian wrote:
> > I was wondering which scenario (or the combination) would be better for
> my
> > application. From the aspect of performance, scalability and hig
Hi every body,
I was wondering, is there any way to use a cross-document join over an
integration of one Solr core and a relational database?
Suppose I have a table in a relational database (MySQL) named USER. I want to
keep track of the news that each user can access. Assume the news items are
stored inside Solr and th
that this Query implementation can _only_ be used as an fq, not as
> a q (it would need to implement createWeight).
> */
> public class AreaIsOpenControlQuery extends ExtendedQueryBase implements
> PostFilter {
>
>
>
> On Friday, May 30, 2014 2:26 PM, Ali Nazemian
> wr
Dears,
Hi,
I am going to apply custom security filtering for each document per
user (using a custom profile for each user). I was thinking of adding user
fields to the index and using a Solr join for filtering. But it seems that for
distributed Solr this is not a solution. Could you please tell me what
Google
> search should bring a couple more.
>
> Regards,
>Alex.
> Personal website: http://www.outerthoughts.com/
> Current project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Tue, Jun 17, 2014 at 6:24 PM, Ali Nazemian
> wro
Any idea would be appreciated.
On Tue, Jun 17, 2014 at 5:44 PM, Ali Nazemian wrote:
> Dear Alexandre,
> Yeah I saw that, but what is the best way of doing that from the
> performance point of view?
> I thought of one solution myself:
> Suppose we have a RDBMS for users th
Hi,
I used Solr 4.8 for indexing the web pages that come from Nutch. I know
that the Solr deduplication operation works on the uniqueKey field, so I set
that to the URL field. Everything is OK, except that after duplicate
detection I want Solr to not delete all fields of the old document. I want some
fields
Any suggestion would be appreciated.
Regards.
On Mon, Jun 30, 2014 at 2:49 PM, Ali Nazemian wrote:
> Hi,
> I used solr 4.8 for indexing the web pages that come from nutch. I know
> that solr deduplication operation works on uniquekey field. So I set that
> to URL field. Ever
ur preserve-field functionality.
> Could even be a nice contribution.
>
> Regards,
>Alex.
>
> Personal website: http://www.outerthoughts.com/
> Current project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Tue, Jul 1, 2014 at 6:50 P
nt project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 2:08 PM, Ali Nazemian
> wrote:
> > Dears,
> > Is there any way that I can do this in another way?
> > I mean if you look at my main problem again you will find out
g your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 4:32 PM, Ali Nazemian
> wrote:
> > Updating documents will add some extra time to indexing process. (I send
> > the documents via apache Nutch) I prefer to make indexing as fast as
> > possible.
>
ds,
>Alex.
> Personal website: http://www.outerthoughts.com/
> Current project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 4:48 PM, Ali Nazemian
> wrote:
> > Dear Alexande,
> > What if I use ExternalFileFiled
I think this will not improve indexing performance, but it would probably be
a solution for using HDFS HA with a replication factor. I am not
sure about that, though.
On Mon, Jul 7, 2014 at 12:53 PM, search engn dev
wrote:
> Currently i am exploring hadoop with solr, Somewhere it is written as
Dears,
Hi,
According to my requirements I need to change the default behavior of Solr,
which overwrites the whole document on unique-key duplication. I am going to
change it so that only part of the document (some fields) is overwritten and
the other parts of the document (other fields) remain unchanged. First of all I
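For reference, the built-in way to update only some fields on key collision is an atomic update (the document values below are illustrative):

```
curl http://localhost:8983/solr/update -H 'Content-Type: application/json' -d '
[{ "id":    "doc1",
   "title": { "set": "new title" },
   "views": { "inc": 1 } }]'
```

The limitation is that the untouched fields must be stored="true" so Solr can rebuild the rest of the document.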
lds.
>
> Thanks,
> Himanshu
>
>
> On Tue, Jul 8, 2014 at 1:09 PM, Ali Nazemian
> wrote:
>
> > Dears,
> > Hi,
> > According to my requirement I need to change the default behavior of Solr
> > for overwriting the whole document on unique-key duplication. I am
side this is what solr
> do
> : to documents with duplicated uniquekey.
> : Regards.
> :
> :
> : On Tue, Jul 8, 2014 at 12:29 PM, Himanshu Mehrotra <
> : himanshu.mehro...@snapdeal.com> wrote:
> :
> : > Please look at https://wiki.apache.org/solr/Atomic_
Dear All,
Hi,
I was wondering, is there anybody out there who has tried to integrate Solr
with Accumulo? I was thinking about using Accumulo on top of HDFS and using
Solr to index the data inside Accumulo. Do you have any idea how I can do such
an integration?
Best regards.
--
A.Nazemian
there a reason you're thinking
> of using both databases in particular?
>
>
> On Wed, Jul 23, 2014 at 5:17 AM, Ali Nazemian
> wrote:
>
> > Dear All,
> > Hi,
> > I was wondering is there anybody out there that tried to integrate Solr
> > with Accumul
ccordingly in
> the QParserPlugin.
>
> This will give you true row level security in Solr and Accumulo, and it
> performs quite well in Solr.
>
> Let me know if you have any other questions.
>
> Joe
>
>
> On Thu, Jul 24, 2014 at 4:07 AM, Ali Nazemian
> wrote:
the right direction. And
> it has Hadoop and Spark integration as well.
>
> See:
> http://www.datastax.com/what-we-offer/products-services/
> datastax-enterprise
>
> -- Jack Krupansky
>
> -Original Message- From: Ali Nazemian
> Sent: Thursday, July 24, 2014 1
curiosity, why are you not using that integrated Lucene support of
> Sqrrl Enterprise?
>
>
> -- Jack Krupansky
>
> -Original Message- From: Ali Nazemian
> Sent: Thursday, July 24, 2014 3:07 PM
>
> To: solr-user@lucene.apache.org
> Subject: Re: integrating Accumulo
have to define
something (probably with an Accumulo iterator) to import into Solr when
inserting new data.
Regards.
On Fri, Jul 25, 2014 at 12:59 PM, Ali Nazemian
wrote:
> Dear Jack,
> Actually I am going to do benefit-cost analysis for in-house developement
> or going for sqrrl support.
>
Just highlighting the
> challenge of such a task.
>
> Just to be clear, you are referring to "sync mode" and not mere "ETL",
> which people do all the time with batch scripts, Java extraction and
> ingestion connectors, and cron jobs.
>
> Give it a shot a
Dear all,
Hi,
I changed Solr 4.9 to write its index and data on HDFS. Now I am going to
access that data from outside of Solr to change some of the
values. Could somebody please tell me how that is possible? Suppose I am
using HBase over HDFS to make these changes.
Best regards.
--
A.Nazem
Actually I am going to do some analysis on the Solr data using MapReduce.
For this purpose it might be necessary to change some parts of the data or add
new fields from outside Solr.
On Tue, Aug 5, 2014 at 5:51 PM, Shawn Heisey wrote:
> On 8/5/2014 7:04 AM, Ali Nazemian wrote:
> > I changed
iles by raw I/O
> operations, good luck! I'm 99.99% certain that's going to cause
> you endless grief.
>
> Best,
> Erick
>
>
> On Tue, Aug 5, 2014 at 9:55 AM, Ali Nazemian
> wrote:
>
> > Actually I am going to do some analysis on the solr data using map
purposes
such as analysis. So why would we go for HDFS in the case of analysis if we want
to use SolrJ for this purpose? What is the point?
Regards.
On Wed, Aug 6, 2014 at 8:59 AM, Ali Nazemian wrote:
> Dear Erick,
> Hi,
> Thank you for you reply. Yeah I am aware that SolrJ is my last option
Dear all,
Hi,
I was wondering how I can manage to index comments in Solr? Suppose I am
going to index a web page that has news content and some comments that
are presented by people at the end of the page. How can I index these
comments in Solr, considering the fact that I am going to do some ana
these
comments? What is the document granularity?
Best regards.
On Wed, Aug 6, 2014 at 1:29 PM, Gora Mohanty wrote:
> On 6 August 2014 14:13, Ali Nazemian wrote:
> >
> > Dear all,
> > Hi,
> > I was wondering how can I mange to index comments in solr? suppose I am
sletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On Wed, Aug 6, 2014 at 11:18 AM, Ali Nazemian
> wrote:
> > Dear Gora,
> > I think you misunderstood my problem. Actually I used nutch for c
set is small enough to fit on a single machine
> _and_ you can get
> through your analysis in a reasonable time (reasonable here is up to you),
> then HDFS
> is probably not worth the hassle. But in the big data world where we're
> talking petabyte scale,
> having HDFS as
e HDFS world, you can scale
> > > pretty linearly
> > > with the number of nodes you can rack together.
> > >
> > > Frankly though, if your data set is small enough to fit on a single
> > machine
> > > _and_ you can get
> > > through your analysis in a reasonable
Dear all,
Hi,
I was wondering how I can use SolrJ for sending nested documents to Solr?
Unfortunately I did not find any tutorial for this purpose. I would really
appreciate it if you could guide me through that. Thank you very much.
Best regards.
--
A.Nazemian
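A minimal sketch with SolrJ 4.x, assuming the core name and field names below (addChildDocument has been available since SolrJ 4.5):

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

SolrInputDocument page = new SolrInputDocument();
page.addField("id", "page1");
page.addField("document_type", "Page");

SolrInputDocument post = new SolrInputDocument();
post.addField("id", "post1");
post.addField("document_type", "Post");
post.addField("post_content", "hello");

page.addChildDocument(post);  // indexed as one block: children first, parent last
server.add(page);
server.commit();
```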
Dear all,
Hi,
I was wondering how I can implement Solr boosting of words from a specific list
of important words? I mean, I want to have a list of important words and
tell Solr to score documents based on the weighted sum of these words. For
example, let the word "school" have a weight of 2 and the word "president"
Dear all,
Hi,
Right now I am facing a strange problem related to the SolrJ client:
When I use only incremental partial updates, they
work fine. When I only add child documents, it works perfectly and
the child documents are added successfully. But when I have both of them
I also checked both the Solr log and the Solr console. There is no error there;
it seems that everything is fine! But actually there are no child
documents after executing the process.
On Mon, Sep 29, 2014 at 1:47 PM, Ali Nazemian wrote:
> Dear all,
> Hi,
> Right now I face with th
uery.
>
> Koji
> --
> http://soleami.com/blog/comparing-document-classification-functions-of-
> lucene-and-mahout.html
>
>
> (2014/09/29 4:25), Ali Nazemian wrote:
>
>> Dear all,
>> Hi,
>> I was wondering how can I implement solr boosting words fr
Tue, Sep 30, 2014 at 7:07 PM, Ali Nazemian wrote:
> Dear Koji,
> Hi,
> Thank you very much.
> Do you know any example code for UpdateRequestProcessor? Anything would be
> appreciated.
> Best regards.
>
> On Tue, Sep 30, 2014 at 3:41 AM, Koji Sekiguchi
> wrote:
>
Has anybody tested that?
Best regards.
On Mon, Sep 29, 2014 at 2:05 PM, Ali Nazemian wrote:
> I also check both solr log and solr console. There is no error inside
> that, it seems that every thing is fine! But actually there is not any
> child document after executing process.
>
>
Dear all,
Hi,
I am going to do a partial update on a field that has no value. Suppose
I have a document with document id (unique key) '12345' and a field
"read_flag" which was not indexed in the first place, so the read_flag field
for this document has no value. After I did a partial update to t
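For what it's worth, an atomic "set" should create the field if it is absent, provided the field exists in the schema (or matches a dynamic field) and the document's other fields are stored:

```
[{ "id":        "12345",
   "read_flag": { "set": true } }]
```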
y: https://www.linkedin.com/groups?gid=6713853
>
>
> On 6 October 2014 03:40, Ali Nazemian wrote:
> > Dear all,
> > Hi,
> > I am going to do partial update on a field that has not any value.
> Suppose
> > I have a document with document id (unique key) '1234
outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 6 October 2014 11:23, Ali Nazemian wrote:
> > Dear Alex,
> > Hi,
> > LOL,
Hi,
I am going to import the Solr source code into Eclipse for some development
purposes. Unfortunately, every tutorial that I found for this purpose is
outdated and did not work. So would you please give me some hints about how
I can import the Solr source code into Eclipse?
Thank you very much.
--
A.Nazemian
; lucene-solr-trunk
> > cd lucene-solr-trunk
> > ant eclipse
> >
> > ... And then, from your Eclipse "import existing java project", and
> select
> > the directory where you placed lucene-solr-trunk
> >
> > On Sun, Oct 12, 2014 at 7:09 AM, Ali Nazemian
Dear all,
Hi,
I was wondering how I can mark some documents as duplicates (just marking
for future use, not deleting) based on a hash combination of some
fields? Suppose I have two fields named "url" and "title"; I want to create a
hash based on url+title and send it to another field named "signature".
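Solr ships SignatureUpdateProcessorFactory for exactly this (configure its signatureField to "signature" rather than the uniqueKey). If you would rather compute the hash client-side, here is a minimal stdlib sketch; the field choice and the use of MD5 are assumptions, and any stable hash would do:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Signature {
    // Hash the concatenation of url and title into a hex string
    // suitable for storing in a "signature" field.
    public static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present on the JVM
        }
    }

    public static void main(String[] args) {
        System.out.println(md5Hex("http://example.com/a" + "Some title"));
    }
}
```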
Hi,
I was wondering how I can have both Solr deduplication and partial updates.
I found out that for some reasons you cannot rely on Solr deduplication
when you try to update a document partially! It seems that when you do a
partial update on some field, even if that field is not considered in the
dup
r your usecase,
> just don't configure teh signatureField to be the same as your uniqueKey
> field.
>
> configure some othe fieldname (ie "signature") instead.
>
>
> : Date: Tue, 14 Oct 2014 12:08:26 +0330
> : From: Ali Nazemian
> : Reply-To
x27; specifically on the business level?
>
> Regards,
> Alex
> On 22/10/2014 7:27 am, "Ali Nazemian" wrote:
>
> > The problem is when I partially update some fields of document. The
> > signature becomes useless! Even if the updated fields are not included i
Hi,
I was wondering what the hardware requirements are for indexing 500 million
documents in Solr? Suppose the maximum number of concurrent users at peak time
would be 20.
Thank you very much.
--
A.Nazemian
Hi everybody,
I am going to add some analysis to Solr at index time. Here is what I
have in mind:
Suppose I have two different fields in the Solr schema, field "a" and field
"b". I am going to use the created inverted index in a way that some terms
are considered as important ones and
ene.apache.org/core/4_10_3/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
>
> And to use your custom similarity class in Solr:
>
> https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements#OtherSchemaElements-Similarity
>
>
> -- Jack Krupansky
g stage. Have
> you tried UpdateRequestProcessors? They have access to the full
> document when it is sent and can do whatever they want with it.
>
> Regards,
>Alex.
>
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 11 January 2015 at 10:55, Ali
re by a function of the term frequency of arbitrary terms,
> using the tf, mul, and add functions.
>
> See:
> https://cwiki.apache.org/confluence/display/solr/Function+Queries
>
> -- Jack Krupansky
>
> On Sun, Jan 11, 2015 at 10:55 AM, Ali Nazemian
> wrote:
>
> &