I am trying to integrate JBoss and Solr (multicore).
To get started, I am trying to deploy a single instance of solr.
1) I have edited
C:/jboss/jboss-4.2.1.GA/server/default/conf/jboss-service.xml and entered
the following details:
Hi,
So I've been using the textTight field to hold filenames, and I've run
into a weird problem. Basically, people want to search by part of a
filename (say, the filename is stm0810m_ws_001ftws and they want to
find everything starting with stm0810m_ (stm0810m_*). I'm hoping
someone mig
> It is useful only if your bandwidth is very low.
> Otherwise the cost of copying/compressing/decompressing can take up
> more time than we save.
I mean compressing and transferring. If the optimized index itself has
a very high compression ratio then it is worth exploring the option
of compres
I may be a bit off the mark. It seems that DataImportHandler may be
able to do this very easily for you.
http://wiki.apache.org/solr/DataImportHandler#jdbcdatasource
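For reference, a minimal data-config.xml sketch for a JdbcDataSource — the driver, URL, credentials, table and field names below are all made-up placeholders:

```xml
<dataConfig>
  <!-- JDBC connection details: placeholders, adjust for your database -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="solr"
              password="secret"/>
  <document>
    <!-- one row per Solr document; column names map to schema fields -->
    <entity name="item" query="SELECT id, name FROM item">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

With DataImportHandler registered as a request handler in solrconfig.xml, a full import is then just a matter of hitting the handler with command=full-import.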
On Fri, Oct 24, 2008 at 6:28 PM, Simon Collins
<[EMAIL PROTECTED]> wrote:
> Hi
>
>
>
> We're running solr on a win 2k3 box under to
This is the JIRA location
https://issues.apache.org/jira/secure/Dashboard.jspa
The trunk has not changed a lot since the 1.3 release. If it works for you,
you can just stick to the one you are using till you get a patch.
--Noble
On Mon, Oct 27, 2008 at 9:04 PM, William Pierce <[EMAIL PROTECTED]> wrote
Are you sure you optimized the index?
It is useful only if your bandwidth is very low.
Otherwise the cost of copying/compressing/decompressing can take up
more time than we save.
On Tue, Oct 28, 2008 at 2:49 AM, Simon Collins
<[EMAIL PROTECTED]> wrote:
> Is there an option on the replication ha
On Oct 27, 2008, at 8:53 PM, Ryan McKinley wrote:
On Oct 27, 2008, at 6:10 PM, Grant Ingersoll wrote:
Warning: shameless plug: Tom Morton and I have a chapter on NER and
OpenNLP (and Solr, for that matter) in our book "Taming
Text" (Manning) and the code will be open once we have a place to put
it (hopefully soon). In fact, you'll see us doing a lot of this kind
of stuff w/ Solr and it sh
Is there an option on the replication handler to compress the files?
I'm trying to replicate off site, and seem to have accumulated about
1.4 GB. When compressed with WinZip of all things I can get this down to
about 10% of the size.
Is compression in the pipeline / can it be if not?
sim
Boost at index time or at query time?
For index time, you would add the boost on the field/document. At
query time, you can add boosts to each term that belongs to a specific
field.
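A sketch of both flavors — the field names and boost values here are made up:

```xml
<!-- index time: boost attribute on the whole doc or on an individual field -->
<add>
  <doc boost="2.0">
    <field name="title" boost="1.5">some title</field>
  </doc>
</add>
<!-- query time: the caret syntax attaches a boost to a term in a field, e.g.
     q=title_fr:chat^2.0 title_en:cat^0.5 -->
```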
On Oct 27, 2008, at 2:10 PM, sunnyfr wrote:
Hi,
I have fields in my schema which are text_es, text_fr,
Thank you both for your nice answers. I will try it out.
2008/10/27 Erik Hatcher <[EMAIL PROTECTED]>
> I don't think delete-by-query supports purely negative queries, even though
> they are supported for q and fq parameters for searches.
>
> Try using:
>
> *:* AND -deptId:[1 TO *]
>
>Er
Hi,
I have fields in my schema which are text_es, text_fr, text_ln, and I
would like to boost them according to the field language. How could I do that,
given that I have stored all these fields?
Thanks a lot for your help,
Sunny
--
View this message in context:
http://www.n
Extractors are exactly as good as the data you have to train or
configure them with. An open source extractor platform may still
require you to come up with a rather large heap of data from
somewhere.
Not all the vendors of extractors lose money.
How useful NEE is for search is an ongoing questio
Verity sold a lot of features based on "we might need it at some point."
Very few people deployed the advanced features. They just didn't need them.
wunder
On 10/27/08 9:27 AM, "Charlie Jackson" <[EMAIL PROTECTED]> wrote:
> Yeah, when they first mentioned it, my initial thought was "cool, but we
Well... IMHO that depends. One of the services we provide is an "automatic
clipping" in which our client chooses 20~30 texts from the media he would
like to be aware of. With classification algorithms we then keep him aware of
every new text of his interest. We gained about 10% of precision just by
addi
Yeah, when they first mentioned it, my initial thought was "cool, but we don't
need it." However, some of the higher ups in the company are saying we might
want it at some point, so I've been asked to look into it. I'll be sure to let
them know about the flaws in the concept, thanks for that inf
Hi,
I would like to know if I have to do something special for Greek
characters?
My schema is configured like this:
It only stores documents which don't have Greek characters.
All other languages are working fine.
Any idea ???
Thanks a lot,
--
The vendor mentioned entity extraction, but that doesn't mean you need it.
Entity extraction is a pretty specific technology, and it has been a
money-losing product at many companies for many years, going back to
Xerox ThingFinder well over ten years ago.
My guess is that very few people really ne
Hi Simon,
I came across your post to the solr users list about using facet
prefixes, shown below. I was wondering if you were still using your
modified version of SimpleFacets.java, and if so -- if you could send me
a copy. I'll need to implement something similar, and it never hurts to
star
True, though I may be able to convince the powers that be that it's worth the
investment.
There are a number of open source or free tools listed on the Wikipedia entry
for entity extraction
(http://en.wikipedia.org/wiki/Named_entity_recognition#Open_source_or_free) --
does anyone have any exp
Folks:
The replication handler works wonderfully! Thanks all! Now can someone
point me at a wiki so I can submit a jira issue lobbying for the inclusion
of this replication functionality in a 1.3 patch?
Thanks,
- Bill
--
From: "Noble Paul"
For the record, LingPipe is not free. It's good, but it's not free.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Rafael Rossini <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Friday, October 24, 2008 6:08:14 PM
> Subject:
Hi,
I'm using Solr 1.3 on Tomcat 5.5 and I've got this error when I fire:
...8180/solr/video/select/?q=échelle
[XML response snippet stripped by the archive; only field values survive:
"échelle", 2007-10-31T10:48:34Z, 5625531, FR, 10, "Régis pompier"]
http://www.nabble.com/solr1.3---tomcat-55-%3Cb%3E%3Cstr-name%3D%22q%22%3E%C3%83%C2%A9chelle%3C-str%3E%3C-b
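If the accented characters come through garbled on GET requests, one common culprit with Tomcat 5.5 is that the connector decodes URIs as ISO-8859-1 by default. A sketch of the server.xml change — the port is taken from the URL above, the other attributes are assumptions:

```xml
<!-- conf/server.xml: decode query-string parameters as UTF-8 -->
<Connector port="8180" maxThreads="150"
           URIEncoding="UTF-8"/>
```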
Hi,
After fully reloading my index, using another field than a Date does not
help that much.
Using a warmup query avoids having the first request slow, but:
- Frequent commits mean that the Searcher is reloaded frequently
and, as the warmup takes time, the clients must wait.
- Having
Hi,
I'm trying to boost some languages, and I would like to know if it's necessary
to store them to be able to boost them using dismax?
Thanks a lot,
Sunny
--
View this message in context:
http://www.nabble.com/solr-1.3-multi-language---tp20188549p20188549.html
Sent from the Solr - User mailing list arch
I don't think delete-by-query supports purely negative queries, even
though they are supported for q and fq parameters for searches.
Try using:
*:* AND -deptId:[1 TO *]
Erik
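As complete update messages (posted the same way as the original curl commands), that would look something like this, with a commit afterwards to make the deletes visible:

```xml
<!-- matches all docs, then subtracts those that do have a deptId -->
<delete><query>*:* AND -deptId:[1 TO *]</query></delete>
<!-- deletes only show up after a commit -->
<commit/>
```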
On Oct 27, 2008, at 9:21 AM, Alexander Ramos Jardim wrote:
Hey pals,
I am trying to delete a couple docu
Alexander Ramos Jardim wrote:
Hey pals,
I am trying to delete a couple documents that don't have any value on a
given integer field. This is the command I am executing:
$curl http://:/solr/update -H 'Content-Type:text/xml' -d
'<delete><query>-(deptId:[1 TO *])</query></delete>'
$curl http://:/solr/update -H 'Content-Type:text
Hey pals,
I am trying to delete a couple documents that don't have any value on a
given integer field. This is the command I am executing:
$curl http://:/solr/update -H 'Content-Type:text/xml' -d
'<delete><query>-(deptId:[1 TO *])</query></delete>'
$curl http://:/solr/update -H 'Content-Type:text/xml' -d
'<commit/>'
But the documents d