If you need to search via the Hibernate API, then use Hibernate Search.
If you need a scalable HTTP (REST) interface, then Solr may be the way to go.
Also, I don't think Hibernate has anything like the faceting / complex
query support, etc.
On Dec 29, 2009, at 3:25 PM, Márcio Paulino wrote:
Hey Every
The distributed binaries do not include the new spatial types, so the
.../trunk/example/ store app does not start.
Please either always check in the latest binaries (a pain), or edit
the README.txt to include "now first do an 'ant clean dist'". (And
maybe not include the binaries?)
http://svn.ap
hi,
Hibernate Search only works with Hibernate, while Solr can be used by
different systems other than Hibernate (loose coupling).
Current Solr still does not support complex POJO indexing like Hibernate does.
1) I think one way you can do it is to index in Solr, retrieve the unique id, and
get from databas
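A rough sketch of the suggested pattern (all names here are illustrative, not from the thread): query Solr for matching unique ids only, then load the full records from the database by primary key:

```python
# Sketch: Solr returns only unique ids; full objects come from the database.
# The dict here stands in for a real DB lookup by primary key.
def search_then_load(solr_ids, db):
    return [db[i] for i in solr_ids if i in db]

db = {1: {"id": 1, "title": "a"}, 2: {"id": 2, "title": "b"}}
rows = search_then_load([2, 1], db)
```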
Hi all
I'm experimenting with Solr. I've successfully indexed some PDFs and
all looks good but now I want to index some PDFs with metadata pulled
from another source. I see this example in the docs.
curl
"http://localhost:8983/solr/update/extract?literal.id=doc4&captureAttr=true&defaultField=tex
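A sketch of building such an extract URL programmatically, with external metadata passed as literal.* parameters (the endpoint and parameters are taken from the snippet above; "author" is a hypothetical metadata field):

```python
from urllib.parse import urlencode

# Build an extract-handler URL; each literal.<field> value is stored
# verbatim on the document alongside the text extracted from the PDF.
def extract_url(base, doc_id, literals):
    params = {"literal.id": doc_id, "captureAttr": "true", "defaultField": "text"}
    params.update({"literal." + k: v for k, v in literals.items()})
    return base + "/update/extract?" + urlencode(params)

url = extract_url("http://localhost:8983/solr", "doc4", {"author": "jdoe"})
```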
On Dec 29, 2009, at 1:59 PM, Joe Calderon wrote:
> hello *, I want to boost documents that match the query better;
> currently I also index my field as a string and boost if I match the
> string field.
>
> But I'm wondering if it's possible to boost with the bf parameter with a
> formula using the funct
Yonik Seeley-2 wrote:
>
> If you make further changes to the index and do a commit, you should
> see the space go down.
>
It worked. I added a bogus document using /update and then performed a
commit and now the files are down to 6MB.
http://.../core00/update?stream.body=%3Cadd%3E%3Cdoc%3E%3
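For reference, the percent-encoded tail of that URL is just an &lt;add&gt;&lt;doc&gt;... body passed through the stream.body parameter; a sketch of producing the encoding (the document contents are illustrative):

```python
from urllib.parse import quote

# Percent-encode an <add><doc>...</doc></add> body for use as a
# stream.body query parameter; safe="" encodes every reserved character.
body = '<add><doc><field name="id">bogus</field></doc></add>'
encoded = quote(body, safe="")
```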
Yonik Seeley-2 wrote:
>
> On Tue, Dec 29, 2009 at 1:23 PM, markwaddle wrote:
>> I have an index that used to have ~38M docs at 17.2GB. I deleted all but
>> 13K
>> docs using a delete by query, commit and then optimize. A "*:*" query now
>> returns 13K docs. The problem is that the files on dis
> >
> > > We do auto-complete through prefix searches on shingles.
> > >
> >
> > Just to confirm, do you mean using EdgeNgram filter to produce letter
> > ngrams
> > of the tokens in the chosen field?
> >
> >
>
>> No, I'm talking about prefix search on tokens produced by a ShingleFilter.
>>
>
> I d
Hi,
you could create an additional index field res_ranked_url that contains
the concatenated value of an url and its corresponding rank, e.g.,
res_rank + " " + res_url
Then, q=res_ranked_url:"1 url1" retrieves all documents with url1 as the
first url.
A drawback of this wor
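A minimal sketch of building such concatenated values at index time (field and variable names taken from the message):

```python
# Sketch of the suggested denormalization: concatenate rank and URL into one
# indexed value, so a phrase query can pin a URL to a specific position.
def ranked_url(res_rank, res_url):
    return str(res_rank) + " " + res_url

# A document whose first url is url1 and second is url2; the phrase query
# res_ranked_url:"1 url1" would then match only documents like this one.
doc = {"res_ranked_url": [ranked_url(1, "url1"), ranked_url(2, "url2")]}
```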
Hey Everyone!
I was making a comparison of both technologies (Solr and Hibernate Search) and
I see many things are equal. Could anyone tell me when I should use Solr and
when I should use Hibernate Search?
In my project I will have:
1. Queries for indexed fields (Strings) and for non-indexed fields (
On Tue, Dec 29, 2009 at 1:23 PM, markwaddle wrote:
> I have an index that used to have ~38M docs at 17.2GB. I deleted all but 13K
> docs using a delete by query, commit and then optimize. A "*:*" query now
> returns 13K docs. The problem is that the files on disk are still 17.1GB in
> size. I expe
> Use &zeroDateTimeBehavior=convertToNull parameter in you sql connection
> string.
>
That worked great!
Thanks!
--
A. Steven Anderson
Independent Consultant
A. S. Anderson & Associates LLC
P.O. Box 672
Forest Hill, MD 21050-0672
443-790-4269
st...@asanderson.com
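For reference, the parameter goes into the JDBC URL of the DIH dataSource; a sketch (driver class, host, database name, and credentials are all illustrative):

```xml
<!-- data-config.xml sketch: the extra zeroDateTimeBehavior parameter makes
     the MySQL driver return NULL for zero dates like "0000-00-00" instead
     of throwing, so DIH no longer aborts on those rows. -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb?zeroDateTimeBehavior=convertToNull"
            user="solr" password="secret"/>
```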
> I'm trying to index a MySQL database that has some invalid
> dates (e.g.
> "-00-00") which is causing my DIH to abort.
>
> Ideally, I'd like DIH to skip this optional field but not
> the whole record.
>
> I don't see any way to do this currently, but is there any
> work-around?
Use &zeroDa
Greetings!
I'm trying to index a MySQL database that has some invalid dates (e.g.
"-00-00") which is causing my DIH to abort.
Ideally, I'd like DIH to skip this optional field but not the whole record.
I don't see any way to do this currently, but is there any work-around?
Should there be a
Greetings!
Is there any significant negative performance impact of using a
dynamicField?
Likewise for multivalued fields?
The reason why I ask is that our system basically aggregates data from many
disparate data sources (structured, unstructured, and semi-structured), and
the management of the
hello *, I want to boost documents that match the query better;
currently I also index my field as a string and boost if I match the
string field.
But I'm wondering if it's possible to boost with the bf parameter with a
formula using the function strdist(); I know one of the columns would
be the field nam
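A hedged sketch of what such a request could look like, passing a strdist()-based boost function through the bf parameter of the dismax handler (the field name title_str and the query text are assumptions, not from the thread):

```python
from urllib.parse import urlencode

# Build dismax query parameters with a boost function: strdist() compares
# the raw query string against a stored string field by edit distance.
params = urlencode({
    "q": "ipod nano",
    "defType": "dismax",
    "bf": 'strdist("ipod nano",title_str,edit)',
})
```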
Hi Mark,
I can't help with reducing filesizes, but I'm curious...
What sort of documents were you storing, number of fields, average document
size, many dynamic fields or mainly all static?
It would be good to hear about a real-world large-scale index in terms of
response times, did the serve
It looks like you are using Solr multicore.
How are you setting the Solr home (meaning, which line are you using to tell
Tomcat about your Solr home path)?
Ankit
-Original Message-
From: Giovanni Fernandez-Kincade [mailto:gfernandez-kinc...@capitaliq.com]
Sent: Monday, December 2
Ditto. There should be a DIH command to re-sync the index with the
DB.
Right now it looks like a one-way street from DB to index.
On Tue, Dec 29, 2009 at 3:07 AM, Ravi Gidwani wrote:
> Hi Shalin:
>
> > I get your point about not knowing what has been deleted from
> the database.
I have an index that used to have ~38M docs at 17.2GB. I deleted all but 13K
docs using a delete by query, commit and then optimize. A "*:*" query now
returns 13K docs. The problem is that the files on disk are still 17.1GB in
size. I expected the optimize to shrink the files. Is there a way I can
Hello everybody, I would like to know how to create an index supporting a
parent/child mapping and then query the children to get the results.
In other words, imagine that we have a database containing 2
tables: Keyword[id(int), value(string)] and Result[id(int), res_url(text),
res_text(text), res_date
On Dec 29, 2009, at 8:59 AM, zoku wrote:
Hi there!
Is it possible, to limit the Solr Queries to predefined values e.g.:
If the User enters "/select?q=anyword&fq=anyfilter&rows=13" then the
filter
and rows arguments are ignored and overwritten by the predefined values
"specialfilter" and "6".
Hi there!
Is it possible, to limit the Solr Queries to predefined values e.g.:
If the User enters "/select?q=anyword&fq=anyfilter&rows=13" then the filter
and rows arguments are ignored and overwritten by the predefined values
"specialfilter" and "6".
The goal is to prevent users from getting part
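One way to do this is with "invariants" on a request handler in solrconfig.xml, which override whatever the client sends for those parameters; a sketch (the handler name and values are illustrative):

```xml
<!-- solrconfig.xml sketch: invariants are applied last, so the fq and rows
     a user sends on /restricted are ignored in favor of these values. -->
<requestHandler name="/restricted" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="fq">specialfilter</str>
    <int name="rows">6</int>
  </lst>
</requestHandler>
```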
OK,
my configuration is correct.
I found the problem:
curl had problems with Greek chars.
So I developed an application and passed my data with an HTTP POST,
and it's OK now.
Thanks
-Original Message-
From: Markus Jelsma [mailto:mar...@buyways.nl]
Sent: Monday, December 28, 2009 6:26 PM
To: solr-
If you wish to search on fields using a wildcard, you have to use a
copyField to copy all the values of "Bool_*" to another field and
search on that field.
On Tue, Dec 29, 2009 at 4:14 AM, Harsch, Timothy J. (ARC-TI)[PEROT
SYSTEMS] wrote:
> I use dynamic fields heavily in my SOLR config. I would
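A sketch of what that schema.xml change might look like (the field types and the catch-all field name are assumptions):

```xml
<!-- schema.xml sketch: copy every dynamic Bool_* field into a single
     catch-all field, then query bool_all instead of a wildcard field name. -->
<dynamicField name="Bool_*" type="boolean" indexed="true" stored="true"/>
<field name="bool_all" type="boolean" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="Bool_*" dest="bool_all"/>
```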
Hi Shalin:
> I get your point about not knowing what has been deleted from the
> database. So this is what even I am looking for:
>
> 0) A document (id=100) is currently part of solr index.(
> 1) Lets say the application deleted a record with id=100 from database.
>
> 2) Now I need to e