Hi Tommaso,
Thanks a lot, I am able to index the content and extract the entities as
mentioned by you.
I have made the XML content like this:
Entity.xml
Senator Dick Durbin (D-IL) Chicago, March 3, 2007.
Entity Extraction
and it worked.
For the benefit of others, the procedure which I followed
Thanks. Not sure what the "value" should be (I assume it is the servlet
name, but is there a default servlet name for term vectors? The docs
don't really say much, so any guidance would be useful). It also looks like
using the ModifiableParams returns only a single offset for each term
i.e. if tf > 1
Hi Tom,
> > Do you use multiple threads for indexing? Large RAM buffer size is
> > also good, but I think perf peaks out maybe around 512 MB (at least
> > based on past tests)?
>
> We are using Solr; I'm not sure if Solr uses multiple threads for indexing.
> We have 30 "producers" each sen
Hi Mike,
> Do you use multiple threads for indexing? Large RAM buffer size is
> also good, but I think perf peaks out maybe around 512 MB (at least
> based on past tests)?
We are using Solr; I'm not sure if Solr uses multiple threads for indexing. We
have 30 "producers" each sending documents
Hi,
There is a solution without the patch. It should be explained here:
http://www.lucidimagination.com/blog/2010/08/11/stumped-with-solr-chris-hostetter-of-lucene-pmc-at-lucene-revolution/
If not, I will do so on 9.10.2010 ;-)
Regards,
Peter.
> I've a similar problem with a project I'm working on
If you use Chantal's suggestion from an earlier thread, involving facets
and tokenized fields, but not the tokens handling -- I think it will
work. (But that solution supports only one auto-suggest value per
document.)
There are a bunch of ways people have figured out to do auto-suggest
witho
This might be a dumb question, but should I expect that a custom
transformer written in Java will perform better than a JavaScript
script transformer?
Or does the JavaScript get compiled to bytecode such that there really
should not be much difference between the two?
Of course, the bigger perfor
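For background, a DIH script transformer is plain JavaScript embedded in data-config.xml and attached to an entity with the script: prefix. A minimal sketch (the entity, query, and field names here are made up for illustration, not taken from the thread):

```xml
<dataConfig>
  <script><![CDATA[
    // Runs once per row; 'row' is a java.util.Map of column -> value.
    function tagRow(row) {
      row.put('source_s', 'db');  // hypothetical extra field
      return row;
    }
  ]]></script>
  <document>
    <entity name="item" query="SELECT id, name FROM item"
            transformer="script:tagRow">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

A custom Java transformer avoids the per-row trip through the script engine, which is where any performance difference would come from.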
Running 1.4.1.
I'm able to execute stats queries against multi-valued fields, but when
given a facet, the StatsComponent only considers documents that have a facet
value as the last value in the field.
As an example, imagine you are running stats on "fooCount", and you want to
facet on "bar", wh
Thijs,
The only thing I could find is this:
http://search-lucene.com/m/VDjIlUc2Ci2/iscsi&subj=Lucene+on+NFS+iSCSI
I don't have experience with transferring a Solr/Lucene index to different
hardware nodes without stopping and persisting things before transfer.
Otis
Sematext :: http://sematex
Hi,
I don't *think* there is any DIH request queuing going on - each is triggered
by
the DIH request. You need to queue them yourself if your app/data is such that
running multiple imports/deltas causes problems with either hardware or data.
Otis
Sematext :: http://sematext.com/ :: Solr
My simple but effective solution to that problem was to replace the
white spaces in the items you index for autosuggest with some special
character, then your wildcarding will work with the whole phrase as you
desire.
Index: "mike_shaffer"
Query: "mike_sha*"
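The replace-then-wildcard step above can be sketched in Java (class and method names are mine; Solr itself handles the query side):

```java
public class AutoSuggestKey {
    // Replace whitespace runs with '_' so the whole phrase indexes as a
    // single token, letting a trailing-wildcard query match the full phrase.
    static String toKey(String phrase) {
        return phrase.trim().toLowerCase().replaceAll("\\s+", "_");
    }

    public static void main(String[] args) {
        System.out.println(toKey("Mike Shaffer"));    // indexed value
        System.out.println(toKey("mike sha") + "*");  // wildcard query
    }
}
```

The same normalization must be applied on both the indexing and the query side, or the wildcard will never match.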
-Original Message-
From: mik
On Wed, Oct 6, 2010 at 9:17 PM, Kouta Osabe wrote:
> Hi, Gora
>
> Thanks for your advice.
>
> and then I try to write these codes following your advice.
>
> Case1
> "pub_date" column(MySQL) is 2010-09-27 00:00:00.
>
> I wrote like below.
>
> SolrJDto info = new SolrJDto();
> TimeZone tz2 = TimeZon
Hi, Gora
Thanks for your advice.
and then I try to write these codes following your advice.
Case1
"pub_date" column(MySQL) is 2010-09-27 00:00:00.
I wrote like below.
SolrJDto info = new SolrJDto();
TimeZone tz2 = TimeZone.getTimeZone("UTC+9");
Calendar cal = Calendar.getInstance(tz2);
// publ
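For what it's worth, a sketch of converting that MySQL timestamp into the UTC ISO-8601 form Solr date fields expect. One caveat worth noting: "UTC+9" is not a recognized TimeZone ID and silently falls back to GMT; "Asia/Tokyo" or "GMT+9" works (the timezone choice here is an assumption about where the MySQL server runs):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SolrDateConvert {
    public static void main(String[] args) throws Exception {
        // Parse the MySQL value as local (JST) time.
        SimpleDateFormat mysqlFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        mysqlFmt.setTimeZone(TimeZone.getTimeZone("Asia/Tokyo"));
        Date d = mysqlFmt.parse("2010-09-27 00:00:00");

        // Render it in the UTC ISO-8601 form Solr date fields expect.
        SimpleDateFormat solrFmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        solrFmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(solrFmt.format(d));  // 2010-09-26T15:00:00Z
    }
}
```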
Hi,
Going to start playing with SolrCloud soon, and I have a few questions which are
not answered on http://wiki.apache.org/solr/SolrCloud
a) I want to have multiple cores on the same server, which may be part of the
same collection
http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&c
Hi,
I was interested in gaining some insight into how you guys schedule updates for
your Solr index (I have a single index).
Right now during development I have added deltaQuery specifications to data
import entities to control the number of rows being queried on re-indexes.
However in terms o
It seemed like SOLR-1316 was a little too long to continue the conversation.
Is there support for quotes indicating a phrase query? For example, my
autosuggest query for "mike sha" ought to return "mike shaffer", "mike
sharp", etc. Instead I get suggestions for "mike" and for "sha", resulting
in a
Marian Steinbach-3 wrote:
>
> We are planning to periodically index several MySQL database tables
> plus a Zope CMS document tree in Solr.
>
> Indexing the Zope DB seems to be tricky though.
>
> Has anyone here done this and could provide a URL or sample code to a
> solution? Something running
Hi Mahesh,
the issue here is that you're not sending a ...
to Solr from which UIMAUpdateRequestProcessor extracts the text to analyze :)
In fact, by default UIMAUpdateRequestProcessor extracts the text to analyze from
that field and sends that value to a UIMA pipeline.
Obviously you could choose to customize
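For reference, UIMAUpdateRequestProcessor is wired into the update chain in solrconfig.xml. A rough sketch only (the factory class name is from the Solr UIMA contrib; the actual configuration block pointing at an analysis engine descriptor and mapping UIMA types to Solr fields is omitted here and must be filled in for a working setup):

```xml
<updateRequestProcessorChain name="uima">
  <processor class="org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory">
    <!-- placeholder: real config must reference your UIMA analysis engine
         descriptor and map its annotation types onto Solr fields -->
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```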
Hi Tommaso,
I will try the service call outside Solr/UIMA.
And the text I am using is
FileName: Entity.xml
Entity.xml
Senator Dick Durbin (D-IL) Chicago, March 3, 2007.
Entity Extraction
and using curl to index it: curl http://localhost:8080/solr/update -F
solr.bo...@entity.xml
Hi.
Our hardware department is planning on moving some stuff to new machines
(at our request).
They are suggesting using virtualization (some Cisco solution) on those
machines and having the 'disk' connected via iSCSI.
Does anybody have experience running a Solr index on an iSCSI drive?
We have
On Wed, Oct 6, 2010 at 1:58 PM, Marian Steinbach wrote:
> Hi!
>
> We are planning to periodically index several MySQL database tables
> plus a Zope CMS document tree in Solr.
>
> Indexing the Zope DB seems to be tricky though.
[...]
Been a while since I touched Zope, but there seems to be somethi
Hi!
We are planning to periodically index several MySQL database tables
plus a Zope CMS document tree in Solr.
Indexing the Zope DB seems to be tricky though.
Has anyone here done this and could provide a URL or sample code to a
solution? Something running as a python script would be great, but
Hi All
I'm new to the Solr extract request handler. I want to index PDF documents, but
when I submit a document to Solr using curl I get the following exception:
Document [Null] missing required field DocID
My curl command is like
curl
"http://localhost:8983/solr1/update/extract?literal.DocID=123&fma
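For comparison, the general shape of an extract request that supplies a literal field looks like this. A sketch only: it needs a running Solr with the ExtractingRequestHandler, the file name is hypothetical, and "literal.DocID" only takes effect if a DocID field actually exists in schema.xml (a mismatch between the literal name and the schema field is a common cause of the "missing required field" error):

```
curl "http://localhost:8983/solr1/update/extract?literal.DocID=123&commit=true" \
  -F "myfile=@document.pdf"
```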
If you add a field to the schema file and restart Solr, the existing
documents won't have that field. New documents that you index will. If
this is ok, you are safe.
In general, don't change the schema without reindexing. You can trip
over the weirdest problems.
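As an illustration, adding a field that existing documents can safely lack means declaring it optional in schema.xml (the field name and type here are hypothetical):

```xml
<!-- hypothetical new field: required="false", so documents indexed before
     the schema change remain valid and simply have no value for it -->
<field name="category" type="string" indexed="true" stored="true"
       required="false"/>
```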
On Wed, Oct 6, 2010 at 12:31 AM, Gor
On Wed, Oct 6, 2010 at 11:59 AM, M.Rizwan wrote:
> Hi,
>
> I have lots of documents in my solr index.
> Now I have a requirement to change its schema and add a new field.
>
> What should I do, so that all the documents keep working after schema
> change?
[...]
You will need to reindex if the sche
Hi,
I have lots of documents in my solr index.
Now I have a requirement to change its schema and add a new field.
What should I do, so that all the documents keep working after schema
change?
Thanks
Riz