Hi,
When I try to upgrade the Guava version that Solr depends on, I notice the Guava
version listed in the Maven repository for Solr is 14.0.1 (
https://mvnrepository.com/artifact/org.apache.solr/solr-core/8.0.0). I also
noticed that there is a Jira issue resolved in Solr that upgraded the Guava
dependency to 25.
I use this fat jar for Solr 6.6.5:
https://github.com/adejanovski/cassandra-jdbc-wrapper
Kind regards,
Daphne Liu
BI Architect • Big Data - Matrix SCM
CEVA Logistics / 10751 Deerwood Park Blvd, Suite 200, Jacksonville, FL 32256
USA / www.cevalogistics.com
T 904.9281448 / F 904.928.1525 / daphne@cevalogistics.com
Making business flow
-Original Message-
From: zhenyuan wei
Hi,
We are running a 7.4.0 Solr cluster with 3 tlog replicas and a few pull
replicas. There is one collection divided into 8 shards; each tlog replica has
all 8 shards, and each pull replica has either shards 1 to 4 or shards 5 to 8.
When using JMX to collect num_docs metrics via Datadog, we found that the
metrics for some sha
(ConcurrentMergeScheduler.java:626)
Kind regards,
Daphne Liu
performance is very good.
The Grafana dashboard solution can be viewed at
https://grafana.com/dashboards/5204/edit
Kind regards,
Daphne Liu
Hello,
I am using Solr 6.3.0. Does anyone know, in deltaImportQuery, when referencing
the id, should I use '${dih.delta.id}' or '${dataimporter.delta.id}'?
Both were mentioned in the Delta-Import wiki. I am confused. Thank you.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
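For context, a minimal delta-import sketch (hypothetical table `item` with columns `id` and `last_modified`; `dataimporter.delta` is the namespace most examples show, and `dih` is commonly described as an alias for `dataimporter`):

```xml
<!-- data-config.xml sketch: deltaQuery finds changed ids,
     deltaImportQuery re-fetches each changed row by id -->
<entity name="item" pk="id"
        query="SELECT id, name FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, name FROM item
                          WHERE id = '${dataimporter.delta.id}'"/>
```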
Hi All:
We are trying to index a large number of documents in SolrCloud and keep
seeing the following error: org.apache.solr.common.SolrException: Service
Unavailable, or the same Service Unavailable exception
but with a similar stack:
request: http://wp-np2-c0:8983/solr/unipro
Hi All:
We have a normal build/stage -> prod setup for our production pipeline.
We build the Solr index in the build environment and then the index
is copied to the prod environment.
The SolrCloud in prod seems to work fine when the file system backing it is
writable. However, we see man
No, I use the free version. I have the driver from someone else; I can share it
if you want to use Cassandra.
They modified it for me, since the free JDBC driver I found would time out
when the document is greater than 16 MB.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
CEVA Logistics
my Cassandra clusters' memory, we are very happy
with the result.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
For Solr 6.3, I had to move mine to
../solr-6.3.0/server/solr-webapp/webapp/WEB-INF/lib, if you are using Jetty.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
ption:
org.apache.thrift.transport.TTransportException: Frame size (17676563) larger
than max length (16384000
Thank you.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
hi all:
I was using Solr 3.6 and tried to solve a recall problem today, but
encountered a weird problem.
There's a doc with the field value 均匀肤色 (just treat that word as a symbol
if you don't know it; I just want to describe the problem as exactly as
possible).
And below was the analysis
, Yongtao Liu wrote:
> Mikhail,
>
> Thanks for your reply.
>
> Random field is based on index time.
> We want to do sampling based on search result.
>
> Like if the random field has value 1 - 100.
> And the documents touched by the query may all be in the range 90 - 100.
> So random
Shamik,
Thanks a lot.
The collapsing query parser solved the issue.
Thanks,
Yongtao
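For reference, the collapsing query parser mentioned above is applied as a filter query; a sketch, assuming `guid` is the duplicate-marker field from the original question:

```text
fq={!collapse field=guid}
```

Facets and stats then see only one document per guid group.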
-Original Message-
From: shamik [mailto:sham...@gmail.com]
Sent: Tuesday, September 27, 2016 3:09 PM
To: solr-user@lucene.apache.org
Subject: RE: how to remove duplicate from search result
Did you take a look
David,
Thanks for your reply.
Grouping cannot solve the issue.
We also need to run facets and stats on the search result.
With grouping, facet and stats results would still count duplicates.
Thanks,
Yongtao
-Original Message-
From: David Santamauro [mailto:david.santama...@gmail.com]
Sent: Tuesday, Sep
Mikhail,
Thanks for your reply.
The random field is based on index time.
We want to do sampling based on the search result.
For example, if the random field has values 1 - 100, the documents touched by
the query may all be in the range 90 - 100.
So the random field will not help.
Is it possible to sample based on the searc
Hi,
I am trying to remove user-defined duplicates from the search result.
For example, the documents below match the query.
When the query returns, I try to remove doc3 from the result since it has a
duplicate guid with doc1.
Id (uniqueKey)   guid
doc1             G1
doc2             G2
doc3             G1
To do this, I generate an exclude list based on guid
Sorry, the table is missing.
Update below email with table.
-Original Message-
From: Yongtao Liu [mailto:y...@commvault.com]
Sent: Monday, September 26, 2016 10:47 AM
To: 'solr-user@lucene.apache.org'
Subject: remove user defined duplicate from search result
Hi,
I am try to r
Expressions using JDBC (Oracle) stream source
OK, you should be able to create the Jira.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jun 23, 2016 at 11:52 AM, Hui Liu wrote:
> Joel, I just opened an account for this, my user name is
> h...@opentext.com; let me know when I can op
.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jun 23, 2016 at 11:06 AM, Hui Liu wrote:
> Thanks Joel, I have never opened a ticket before with Solr; do you
> know the steps (URL etc.) I should follow? I will be glad to do so...
> In the meantime, I guess the workaroun
me to
> help users report what different drivers using for the classes.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Jun 23, 2016 at 10:18 AM, Hui Liu wrote:
>
>> Joel - thanks for the quick response, in my previous test, the
>> collecti
releases/lucene-solr/6.0.0/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/JDBCStream.java
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jun 22, 2016 at 11:46 PM, Hui Liu wrote:
> Hi,
>
>
>
> I have Solr 6.0.0 installed on my PC (windows 7), I was
(got a null pointer error); I am merely trying to get
the data returned from an Oracle table. I have not tried to index it in Solr
yet. Attached are the schema.xml and solrconfig.xml for this collection
'document5'. Does anyone know what I am missing? Thanks for any help!
Regards,
Hu
deep. It is not
something you can put on top of a search engine.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 10, 2016, at 12:39 PM, Hui Liu wrote:
>
> What if we plan to use Solr version 6.x? this url says it support 2 different
der
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 10, 2016, at 7:41 AM, Hui Liu wrote:
>
> Walter,
>
> Thank you for your advice. We are new to Solr and have been using
> Oracle for past 10+ years, so we are used to the idea of
@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 9, 2016, at 9:51 AM, Hui Liu wrote:
>
> Hi Walter,
>
> Thank you for the reply; sorry, I need to clarify what I mean by 'migrate
> tables' from Oracle to Solr. We are not literally moving existing records from
> Oracle
wunderwood.org>
http://observer.wunderwood.org/ (my blog)
> On Jun 9, 2016, at 8:50 AM, Hui Liu
> mailto:h...@opentext.com>> wrote:
>
> Hi,
>
> We are porting an application currently hosted in Oracle 11g to
> Solr Cloud 6.x, i.e we plan to migrate a
type
of down time? Any feedback is appreciated!
Regards,
Hui Liu
Opentext, Inc.
ocument_id,
sender_msg_dest", sort="document_id asc",qt="/export")
my guess is the 'null pointer' error in the stack trace is caused by there
being no data in 'shard2'.
Regards,
Hui
-Original Message-
From: Hui Liu
Sent: Monday, June 06, 2016 1:
eam?expr=search(document3,zkHost="127.0.0.1:2181",q="*:*",fl="document_id,
sender_msg_dest", sort="document_id asc",qt="/export")
I think most browsers will URL-encode the expression automatically, but you
can also URL-encode it using an online tool.
ument_id asc",qt="/export")'
"http://localhost:8988/solr/document2/stream"
curl --data-urlencode 'expr=search(document3,q="*:*",fl="document_id,
sender_msg_dest", sort="document_id asc",qt="/select",rows=10)'
"http://
://cwiki.apache.org/confluence/display/solr/Schema+API), which is fully
supported by SolrJ:
http://lucene.apache.org/solr/6_0_0/solr-solrj/org/apache/solr/client/solrj/request/schema/package-summary.html
.
Liu, Ming (Ming) schrieb am Di., 31. Mai 2016 09:41:
> Hello,
>
> I am very new to Solr, I want t
Hello,
I am very new to Solr. I want to write a simple Java program to get a core's
schema information, like how many fields there are and the details of each
field. I spent some time searching on the internet but could not find much
about this. The SolrJ wiki seems not to have been updated for a long time. I
am using
-sorlcloud/
All the best
Liu Bo
On 19 March 2015 at 17:54, Zheng Lin Edwin Yeo wrote:
> Hi,
>
> I'm using Solr Cloud now, with 2 shards known as shard1 and shard2, and
> when I try to index rich-text documents using REST API or the default
> Documents module in Solr Admin UI, the
hi all, the Solr admin page is always "loading", and when I send a query
request I also cannot get any response. The TCP connection is always
"ESTABLISHED"; only restarting the Solr service can fix it. How can I find
out the problem?
Solr: 4.6
Jetty: 8
Thanks so much.
ge in context:
> http://lucene.472066.n3.nabble.com/Where-to-specify-numShards-when-startup-up-a-cloud-setup-tp4078473p4128566.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
All the best
Liu Bo
s run a query on an N-gram analyzed field and with filter queries
on the store_id field. The "suggest" is actually a query. It may not perform as
well as the suggester but can do the trick.
You can try to build an additional N-gram field for suggestion only and
search on it with an fq on your
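Such an extra N-gram field might look like the following schema sketch (field names and gram sizes are hypothetical):

```xml
<!-- index-time edge n-grams so short prefixes match; the query side stays plain -->
<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="suggest_text" type="text_ngram" indexed="true" stored="false"/>
```

A "suggestion" is then just an ordinary query such as q=suggest_text:lau&fq=store_id:42.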
8 PM, tasmaniski
> wrote:
> @kamaci
> Of course. That is the problem.
>
> "group.limit is: the number of results (documents) to return for each
> group."
> NumFound is the total number found, but *not* the sum of the number *returned
> for each group*.
>
> @Liu Bo
> seems to be t
ults but I show only 3 and that works OK. But in
> numFound I
> > still have 20 for the apress publisher...
> >
> >
> >
> > --
> > View this message in context:
>
> http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>
--
All the best
Liu Bo
c(ex. save the query to file) .
> : But I dont want to touch solr source code, all I want is to add code(like
> : plugin). if i understand it right I want to write my own search handler
> , do
> : some logic , then pass the data to solr default search handler.
>
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
--
All the best
Liu Bo
.12.2013 09:55, Liu Bo wrote:
>
>> hi Josip
>>
>
> hi liu,
>
>
> for the first question we've done similar things: copying the search field to a
>> text field. But highlighting is normally on specific fields such as title,
>> depending on how the search cont
l the best
Liu Bo
On 18 December 2013 15:26, Alexandre Rafalovitch wrote:
> If this happens rarely and you want to deal with in on the way into Solr,
> you could just keep one of the values, using URP:
>
> http://lucene.apache.org/solr/4_6_0/solr-core/org/apache
ghlighted "text". This is essential,
> because we use copyfield to put almost everything to searchable_text
> (title, subtitle, description, ...)
>
> 2.) I can't get the ellipsis working. I tried hl.tag.ellipsis=...,
> f.text.hl.tag.ellipsis=..., and configuring it in the RequestHandler; nothing
> seems to work. maxAnalyzedChars is just cutting the sentence?
>
> Kind Regards
>
> Josip Delic
>
>
--
All the best
Liu Bo
n email. I trust
our business logic and data integrity more than Solr's; I will definitely not
do this again. ;-)
All the best
Liu Bo
On 11 December 2013 07:21, Furkan KAMACI wrote:
> Hi Liu;
>
> Yes. it is an expected behavior. If you send data within square brackets
> Solr will behav
?
Is there any way to tell Solr this is not a "multivalued value" and not to
break it up?
Your help and suggestions will be much appreciated.
--
All the best
Liu Bo
gt;
> if( condition 1 )
> {
> --> DELETE DUPLICATE DOC in INDEX <--
> addIncomingDoc = true;
> }
>
> return addIncomingDoc;
> }
--
All the best
Liu Bo
and then use DIH to get/merge content
> from the other source/server. Seem feasible/appropriate? I spec'd it out
> and it seems to make sense...
>
> R
>
> > On Nov 11, 2013, at 11:25 PM, Liu Bo wrote:
> >
> > like Erick said, merge data from different datasour
o point of indexing
> > - Using DIH at one time indexing
> > - At application whenever there is transaction to the details that we
> are
> > storing in Solr.
> >
> >
> >
> >
> >
> > --
> > View this message in context:
> >
> http://lucene.472066.n3.nabble.com/Multi-core-support-for-indexing-multiple-servers-tp4099729p4099933.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
> >
>
--
All the best
Liu Bo
y((headline_fr:wild |
> headline_en:wild)))~3))/no_coord
> > >
> > > If I add "and" to the French stopwords list, I *do* get results, and
> the
> > > parsed query is:
> > >
> > > (+((DisjunctionMaxQuery((headline_fr:osca | headline_en:oscar))
> > > DisjunctionMaxQuery((headline_fr:wild |
> headline_en:wild)))~2))/no_coord
> > >
> > > This implies that the only solution is to have a minimal, shared
> > stopwords
> > > list for all languages I want to support. Is this correct, or is there
> a
> > > way of supporting this kind of searching with per-language stopword
> > lists?
> > >
> > > Thanks for any ideas!
> > >
> > > Tom
> > >
> >
>
--
All the best
Liu Bo
spath. Please correct me if I am wrong.
Is there any way to use resources in plugin jars, such as configuration
files?
BTW, is there any difference between SolrResourceLoader and the Tomcat webapp
classLoader?
--
All the best
Liu Bo
ntList results1 = responseA.getResults();
> SolrDocumentList results2 = responseB.getResults();
>
> results1 : d1, d2, d3
> results2 : d1,d2, d4
>
> Return : d1, d2
>
> Regards,
> Michael
>
--
All the best
Liu Bo
r and restart tomcat to see what happens.
--
All the best
Liu Bo
s message in context:
> http://lucene.472066.n3.nabble.com/Multiple-schemas-in-the-same-SolrCloud-tp4094279p4094729.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
All the best
Liu Bo
Tomcat is much easier and
cleaner than the coreAdmin approach described in the wiki:
http://wiki.apache.org/solr/SolrCloudTomcat.
It cost me some time to move from Jetty to Tomcat, but I think our IT team
will like this way. :)
On 6 October 2013 23:53, Liu Bo wrote:
> Hi all
>
> I'v
"replicas":{"core_node2":{
"state":"active",
"core":"content",
"node_name":"10.199.46.202:8080_solr",
"base_url":"http://10.199.46.202:8080/solr",
"leader":"true"}}},
"shard2":{
* "range":null,*
"state":"active",
"replicas":{"core_node3":{
"state":"active",
"core":"content",
"node_name":"10.199.46.165:8080_solr",
"base_url":"http://10.199.46.165:8080/solr",
"leader":"true",
*"router":"implicit"*}}
--
All the best
Liu Bo
lrJ?
If neither of these two ways works, I think I am going to reuse the
DAO of the old project and feed the data to Solr using SolrJ, probably
using an embedded Solr server.
Your help will be much appreciated.
http://wiki.apache.org/solr/DataImportHandlerFaq
--
All the best
Liu Bo
This picture is extracted from apache-solr-ref-guide-4.4.pdf; maybe it will
help you.
You can download the document from
https://www.apache.org/dyn/closer.cgi/lucene/solr/ref-guide/
-Original Message-
From: Ali, Saqib [mailto:docbook@gmail.com]
Sent: August 22, 2013 5:15
To: solr-user@lucene.apache.
Hi:
I want to rank the search results by the function:
relevance_score*numberic_field/(relevance_score + numberic_field), which
equals
1/((1/relevance_score) + (1/numberic_field))
As far as I know, I could use a function query: sort=
div(1,sum(div(1,field(numberic_field)),div(1,query
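One hedged completion of that sort expression (a sketch only; `query($q)` is assumed here as the way to reference the relevance score of the main query):

```text
sort=div(1,sum(div(1,field(numberic_field)),div(1,query($q)))) desc
```

Term by term this is 1/((1/numberic_field) + (1/relevance_score)), the harmonic combination described above.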
Hi,
Here is the case: given a doc named "sport center", we hope a query like
"sportctr" (a user shorthand) can recall it. Can the shingle and synonym
filters be combined in some smart way to produce that term?
Thanks, Xiang
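One untested index-time sketch of that idea (assumptions: a synonyms.txt entry `center,ctr`, and shingles glued with an empty separator):

```xml
<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <!-- expand "center" to also emit "ctr" before shingling -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
  <!-- glue adjacent tokens: "sport center" -> "sportcenter" and "sportctr" -->
  <filter class="solr.ShingleFilterFactory" maxShingleSize="2" tokenSeparator=""/>
</analyzer>
```

How stacked synonym tokens combine with shingles should be verified in the analysis page before relying on this.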
SolrJ? post.jar?
>
> Best
> Erick
>
>
> On Tue, Feb 19, 2013 at 8:00 PM, Siping Liu wrote:
>
> > Thanks for the quick response. It's Solr 3.4. I'm pretty sure we have
> > plenty of
> > memory.
> >
> >
> >
> > On Tue, Feb 19, 2013 at 7:5
.
>
> Personal blog: http://blog.outerthoughts.com/
> LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> - Time is the quality of nature that keeps events from happening all at
> once. Lately, it doesn't seem to be working. (Anonymous - via GTD book)
>
>
> On Tue
Hi,
we have an index with 2 million documents in it. From time to time we rewrite
about 1/10 of the documents (just under 200k). No autocommit. At the end we do
a single commit, and we got a timeout after 60 sec. My questions are:
1. is it normal for a commit of this size to take more than 1 min? I
know it's
2012 at 3:24 AM, Lee Carroll
wrote:
> take a look at
> http://wiki.apache.org/solr/QueryElevationComponent
>
> On 20 July 2012 03:48, Siping Liu wrote:
>
> > Hi,
> > I have a requirement to place a document at a pre-determined position for
> > special filter query v
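The QueryElevationComponent referenced above pins documents per query text in elevate.xml; a sketch with hypothetical ids (note it keys on the q string, not on filter query values, so it may not match an fq-based requirement directly):

```xml
<elevate>
  <query text="xyz">
    <doc id="abc"/>
  </query>
</elevate>
```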
anyone knows?
On Thu, Jul 19, 2012 at 5:48 PM, Roy Liu wrote:
> Hi,
>
> When I use Transformer to handle files, I always get NULL with
> row.get(columnName).
> anyone knows?
>
> --
> The following file is *data-config.xml*
>
>
>
Hi,
I have a requirement to place a document at a pre-determined position for
special filter query values. For instance, when the filter query is
fq=(field1:"xyz"), place document abc as the first result (the rest of the
result set will be ordered by sort=field2). I guess I have to plug in my
Java code as a
Hi,
I want to index emails using Solr. I put the user name, password, and hostname
in data-config.xml under the mail folder. This is a valid email account, but
when I run the URL http://localhost:8983/solr/mail/dataimport?command=full-import
it said it cannot access mail/dataimporter, reason: not found. But when I run
to:erickerick...@gmail.com]
Sent: Tuesday, November 15, 2011 8:37 AM
To: solr-user@lucene.apache.org
Subject: Re: memory usage keep increase
I'm pretty sure not. The words "virtual memory address space" are important
here; that's not physical memory...
Best
Erick
On Mon, Nov 14,
Hi all,
I saw one issue: RAM usage keeps increasing when we run queries.
After looking in the code, it looks like Lucene uses MMapDirectory to map
index files into RAM.
According to the comments at
http://lucene.apache.org/java/3_1_0/api/core/org/apache/lucene/store/MMapDirectory.html
it will use a lot of memory.
, and there
are a lot of queries which cause wide index file access.
Then the machine has no available memory, and the system becomes very slow.
What I did was change the Lucene code to disable MMapDirectory.
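As an alternative to patching Lucene, the directory implementation can usually be swapped in solrconfig.xml (a sketch; the available factory classes depend on the Solr version):

```xml
<!-- use an NIO file-based directory instead of mmap -->
<directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>
```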
On Wed, Sep 21, 2011 at 1:26 PM, Yongtao Liu wrote:
>
>
> -Original Message-
> F
\apache-solr-3.1.0\contrib\extraction\lib\tika*.jar
--
Best Regards,
Roy Liu
On Mon, Apr 11, 2011 at 3:10 PM, Mike wrote:
> Hi All,
>
> I have the same issue. I have installed solr instance on tomcat6. When try
> to index pdf I am running into the below exception:
>
> 11
I changed data-config-sql.xml to
There are no errors, but the indexed PDF is converted to numbers:
200 1 202 1 203 1 212 1 222 1 236 1 242 1 244 1 254 1 255
--
Best Regards,
Roy Liu
On Mon, Apr 11, 2011 at 2:02 PM, Roy Liu wrote
st Regards,
Roy Liu
On Mon, Apr 11, 2011 at 2:12 PM, Darx Oman wrote:
> Hi there
>
> Error is not clear...
>
> but did you copy "apache-solr-dataimporthandler-extras-4.0-SNAPSHOT.jar"
> to your solr\lib ?
>
.
--
Best Regards,
Roy Liu
On Mon, Apr 11, 2011 at 5:16 AM, Lance Norskog wrote:
> You have to upgrade completely to the Apache Solr 3.1 release. It is
> worth the effort. You cannot copy any jars between Solr releases.
> Also, you cannot copy over jars from newer Tika releases.
>
&
Thanks Lance,
I'm using Solr 1.4.
If I want to use the TikaEntityProcessor, do I need to upgrade to Solr 3.1 or
just import the jar files?
Best Regards,
Roy Liu
On Fri, Apr 8, 2011 at 10:22 AM, Lance Norskog wrote:
> You need the TikaEntityProcessor to unpack the PDF image. You are
> sticking binary blobs int
"Indexing completed. Added/Updated: 5
documents. Deleted 0 documents."
http://localhost:8080/solr/dataimport?command=full-import
However, I cannot search for anything.
Can anyone help me?
Thanks.
*data-config-sql.xml*
*schema.xml*
Best Regards,
Roy Liu
.
*1. data-config.xml*
* *
*2. schema.xml*
*3. Database*
*attachment* is a column of the attachment table; its type is IMAGE.
Best Regards,
Roy Liu
Hi all,
I just noticed a weird thing happened in my Solr search results.
If I do a search for "ecommons", it cannot get the results for "eCommons";
instead, if I do a search for "eCommons", I can only get the matches for
"eCommons", but not "ecommons".
I cannot figure out why.
Please help me.
Stephan and all,
I am evaluating this like you are. You may want to check
http://www.tomkleinpeter.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/.
I would appreciate if others can shed some light on this, too.
Bests,
James
On Fri, Sep 10, 2010 at 6:07 AM, Stephan Raemy wrote:
> Hi
I get no match when searching for "helloworld", even though I have "hello
world" in my index. How do people usually deal with this? Write a custom
analyzer, with help from a collection of all dictionary words?
Thanks for suggestions/comments.
__
Before a stress test, should I disable the Solr caches?
Which tool do you use?
How do I do a stress test correctly?
Any pointers?
--
regards
j.L ( I live in Shanghai, China)
Hi,
I'm using Solr 1.4 (from a nightly build about 2 months ago) and have this
defined in solrconfig:
and the following code that gets executed once every night:
CommonsHttpSolrServer solrServer = new CommonsHttpSolrServer("http://...");
solrServer.setRequestWriter(new BinaryRequestWriter());
Are you sure the url is correct?
--
regards
j.L ( I live in Shanghai, China)
Hi,
I read pretty much all the posts in this thread (before and after this one).
Looks like the main suggestion from you and others is to keep the max heap
size (-Xmx) as small as possible (as long as you don't see OOM exceptions).
This raises more questions than answers (for me at least; I'm new to So
I understand there's no "update" in Solr/Lucene; it's really delete+insert. Is
there any way to get a document's insert timestamp, without explicitly
creating such a data field in the document? If so, how can I query it, for
instance "get all documents that are older than 24 hours"? Thanks.
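Lucene keeps no per-document insert time, so the usual workaround is a schema field that defaults to NOW at add time; a sketch:

```xml
<!-- populated automatically when the document is added -->
<field name="timestamp" type="date" indexed="true" stored="true" default="NOW"/>
```

Documents older than 24 hours could then be matched with q=timestamp:[* TO NOW-1DAY].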
__
Solr has many field types, like integer, long, double, sint, sfloat,
tint, tfloat, and more.
But Lucene has no field types, just name and value, and the value is only a
string.
So I am not sure whether it is a problem when I use Solr to search (with an
index made by Lucene).
--
regards
j.L ( I live in Shanghai, China)
I use Solr to search, and the index is made by Lucene (not
EmbeddedSolrServer; the wiki is old).
Is it a problem when I use Solr to search?
What is the difference between an index made by Lucene and one made by Solr?
Thanks
--
regards
j.L ( I live in Shanghai, China)
I use lucene-core-2.9-dev.jar and lucene-misc-2.9-dev.jar.
On Thu, Jul 2, 2009 at 2:02 PM, James liu wrote:
> I tried http://wiki.apache.org/solr/MergingSolrIndexes
>
> system: win2003, jdk 1.6
>
> Error information:
>
>> Caused by: java.lan
I tried http://wiki.apache.org/solr/MergingSolrIndexes
System: Win2003, JDK 1.6
Error information:
> Caused by: java.lang.ClassNotFoundException:
> org.apache.lucene.misc.IndexMergeTool
> at java.net.URLClassLoader$1.run(Unknown Source)
> at java.security.AccessController.doPriv
Hi,
I have this standard query:
q=(field1:hello OR field2:hello) AND (field3:world)
Can I use the dismax handler for this (applying the same search term to field1
and field2, but keeping field3 separate)? If it can be done, what's the
advantage of doing it this way over using the s
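A sketch of how that query might map onto dismax, assuming the field3 clause can move into a filter query:

```text
defType=dismax&qf=field1 field2&q=hello&fq=field3:world
```

qf applies the same term to field1 and field2, while fq keeps field3:world as a separate mandatory constraint.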
If a user searches with a keyword and gets a summary (auto-generated from the
keyword)... like this:
doc fields: id, text
id: 001
text:
> Open source is a development method for software that harnesses the power
> of distributed peer review and transparency of process. The promise of open
> source is better qual
Hi,
I have a field called "service" with following values:
- Shuttle Services
- Senior Discounts
- Laundry Rooms
- ...
When I run a query with "facet=true&facet.field=service&facet.limit=-1", I
get something like this back:
- shuttle 2
- service 3
- senior 0
- laundry 0
- room 3
-
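Those per-token counts are what faceting on an analyzed text field produces; a common sketch is to facet on an untokenized copy instead (field name hypothetical):

```xml
<field name="service_exact" type="string" indexed="true" stored="false"/>
<copyField source="service" dest="service_exact"/>
```

After reindexing, facet.field=service_exact returns whole values such as "Shuttle Services".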
*Collins:*
I don't know what you want to say.
--
regards
j.L ( I live in Shanghai, China)
On Mon, Feb 16, 2009 at 4:30 PM, revathy arun wrote:
> Hi,
>
> When I index Chinese content using the Chinese tokenizer and analyzer in Solr
> 1.3, some of the Chinese text files are getting indexed but others are not.
>
Are you sure your analyzer can do it well?
If not sure, you can use the analyzer link in
First: you don't have to restart Solr; you can replace the old data with new
data and tell Solr to use the new index. You can find something in the shell
scripts that ship with Solr.
Second: you don't have to restart Solr; just keep the id the same. For
example: old id:1, title:hi; new id:1, title:welcome. Just index the new
data, and it will
1: modify your schema.xml:
like
2: add your field:
3: add your analyzer to {solr_dir}\lib\
4: rebuild the new Solr and you will find it in {solr_dir}\dist
5: follow the tutorial to set up Solr
6: open your browser to the Solr admin page, find the analysis page to check
the analyzer; it will tell you how to ana
You can find the answer in the tutorial or examples.
On Tuesday, June 2, 2009, The Spider wrote:
>
> Hi,
> I am using a Solr nightly build for my search.
> I have to search in the location field of the table, which is not my default
> search field.
> I will briefly explain my requirement below:
> I want to ge
You mean how to configure Solr to support Chinese?
An update problem?
On Tuesday, June 2, 2009, Fer-Bj wrote:
>
> I'm sending 3 files:
> - schema.xml
> - solrconfig.xml
> - error.txt (with the error description)
>
> I can confirm by now that this error is due to invalid characters for the
> XML form
1 - 100 of 346 matches